Broadcast Fundamentals




       Sony Training Services




                         Version 4
                  14 February 2006

Part 1                                               Table of Contents
Part 1 Table of Contents                                                       i
Part 2 The history of television                                              1
  Multimedia timeline                                                         1
Part 3 Image perception & colour                                             29
  The human eye                                                              29
  The concept of primary colours                                             35
  Secondary and tertiary colours                                             36
  Hue saturation and luminosity                                              37
  The CIE space                                                              38
Part 4 The basic television signal                                           40
  The problem of getting a picture from A to B                               40
  Interlaced raster scanning                                                 41
  Half lines                                                                 42
  Synchronisation                                                            44
  The oscilloscope                                                           45
Part 5 The monochrome NTSC signal                                            46
  The 405 line system                                                        46
  The 525 line monochrome system                                             46
  Frame rate and structure                                                   46
  Line rate and structure                                                    47
  Bandwidth considerations                                                   47
Part 6 Colour and television                                                 50
  Using additive primary colours                                             50
  Ensuring compatibility                                                     50
  Adding colour                                                              51
  Combining R-Y & B-Y                                                        54
  Video signal spectra                                                       55
  Combining monochrome and colour                                            56
  Using composite video                                                      57
Part 7 Colour NTSC television                                                58
  Similarity to monochrome                                                   58
  Choice of subcarrier frequency                                             58
  Adding colour                                                              59
  The vectorscope                                                            61
  The gamut                                                                  61
  The gamut detector                                                         61
  Vertical interval structure                                                62
Part 8 PAL television                                                        64

i                                                Sony Broadcast & Professional Europe

  What is PAL?                                               64
  The PAL signal                                             65
  The PAL chroma signal                                      65
  Choice of subcarrier frequency                             67
  Bruch blanking                                             68
  The disadvantages of PAL                                   71
Part 9 SECAM television                                      72
The video camera                                             73
 Types of video camera                                       73
 System cameras                                              73
 Parts of a video camera                                     74
 Video camera specifications                                 76
Lenses                                                       79
 Refraction                                                  79
 The block of glass                                          80
 The prism                                                   80
 The convex lens                                             81
 The concave lens                                            83
 Chromatic aberration                                        84
 Spherical aberration                                        86
 Properties of the lens                                      86
 The concave and convex mirrors                              88
 Lens types                                                  89
 Extenders and adaptors                                      90
 Filters                                                     91
Part 10 Early image sensors                                  92
  Selenium detectors                                         92
  The Iconoscope                                             92
  The Orthicon tube                                          93
  The Image Orthicon tube                                    93
  The Vidicon tube                                           94
  Variations on the Vidicon design                           95
Part 11 Dichroic blocks                                      96
  The purpose of a dichroic block                            96
  Mirrors and filters                                        96
  Optical requirements of a dichroic block                   98
  Variation on a theme                                       98
  Using dichroic blocks in projectors                        98
Part 12 CCD sensors                                         100
  Advantages of CCD image sensors                           100



  The basics of a CCD                                                         101
  Using the CCD as a delay line                                               102
  Using CCDs as image sensors                                                 106
  Back lit sensors                                                            109
  Problems with CCD image sensors                                             110
  CCD image sensors with stores                                               111
  HAD technology                                                              115
  HyperHAD                                                                    117
  SuperHAD sensors                                                            117
  PowerHAD sensors                                                            117
  PowerHAD EX (Eagle) sensors                                                 118
  EX View HAD sensors                                                         119
  Single chip CCD designs                                                     119
  Noise reduction                                                             123
  The future of CCD sensors                                                   127
Part 13 The video tape recorder                                               128
  A short history                                                             128
  The present day                                                             130
  Magnetic recording principles                                               133
  The essentials of helical scan                                              135
  Modern video recorder mechadeck design                                      140
  Variation in tape path designs                                              147
  Definition of a good tape path                                              148
  The servo system                                                            149
  Analogue video tape recorder signal processing                              150
  Popular analogue video recording formats                                    154
  Digital video tape recorders                                                157
  Popular digital video tape formats                                          159
Part 14 Betacam and varieties                                                 163
Part 15 The video disk recorder                                               170
  History                                                                     170
  Present day                                                                 170
  RAID technology                                                             172
  Realising RAID systems                                                      179
Part 16 Television receivers & monitors                                       187
  The basic principle                                                         187
  Input signals                                                               187
Part 17 Timecode                                                              189
  A short history                                                             189
  Timecode                                                                    190
  Timecode’s basic structure                                                  190


  Longitudinal timecode                                               194
  Bi-phase mark coding                                                199
  Adjusting the LTC head                                              199
  Vertical Interval Timecode                                          202
  Drop frame timecode                                                 207
  Which timecode am I using?                                          207
  Timecode use in video recorders                                     208
  Typical VTR timecode controls                                       208
  The future                                                          210
Part 18 SDI (serial digital interface)                                211
  Parallel digital television                                         211
  Serial digital television                                           220
  Serial digital audio                                                221
  SDI                                                                 226
  Video index                                                         226
Part 19 Video compression                                             227
  Traditional analogue signals                                        227
  Analogue to digital conversion                                      227
  Compressing digital signals                                         227
  Digital errors in transmission                                      228
  Compensating for digital errors                                     228
  The advantage of digital compression                                228
  Entropy and redundancy                                              228
  The purpose of any compression scheme                               230
  Lossless and lossy compression                                      230
  Inter-frame and Intra-frame                                         231
  What is DCT?                                                        232
  The church organ                                                    232
  The Fourier transform                                               233
  The Discrete Fourier Transform (DFT)                                235
  Discrete Cosine Transform (DCT) solution to judder                  237
  What does the result of DCT look like?                              238
  DCT in video                                                        238
  The mathematics of DCT as used for video                            241
  DCT in audio                                                        243
  Basis pictures                                                      243
  Why bother?                                                         243
  Huffman’s three step process                                        244
  The principle behind variable length codes                          248
  The results of discrete cosine transforms                           248
  Using bell curves for variable length coding                        248
  Decoding variable length codes                                      250



    Disadvantages of variable length codes                              250
The television station                                                  253
 The studio                                                             253
 The post production studio                                             253
 The edit suite                                                         254
 The news studio                                                        254
 The outside broadcast vehicle                                          254
Part 20 CCTV, security & surveillance                                   256
  What is CCTV?                                                         256
  CCTV privacy & evidence                                               256
  CCTV use                                                              257
  CCTV terminology                                                      259
  The typical CCTV chain                                                263
  CCTV cameras                                                          266
  Reading CCTV camera specifications                                    271
  CCTV lenses                                                           277
  CCTV switchers and control stations                                   283
  CCTV over IP                                                          286
  Character and shape recognition                                       286
Part 21 Numbers & equations                                             287
  Decibels                                                              287
Part 22 Things to do                                                    289






Part 2                                      The history of television
Multimedia timeline
       Prehistoric

            BC
                 45,000 Neanderthal carvings on Wooly Mammoth tooth, discovered near
                 Tata, Hungary.
                 30,000 Ivory horse, oldest known animal carving, from mammoth ivory,
                 discovered near Vogelherd, Germany.
                 28,000 Cro-Magnon notation, possibly of phases of the moon, carved
                 onto bone, discovered at Blanchard, France.
                 @ 10,000       Engraved antler baton, with seal, salmon and plants
                 portrayed, discovered at Montgaudier, France.
                 8,000 - 3,100 In Mesopotamia, tokens used for accounting and record-
                 keeping.
                 3500 In Sumer, pictographs (cuneiforms) of accounts written on clay
                 tablets.
                 3400 - 3100    Inscriptions on Mesopotamian tokens overlap with
                 pictography.
                 2600    Scribes employed in Egypt.
                 2400    In India, engraved seals identify the writer.
                 2200    Date of oldest existing document written on papyrus.
                 1500    Phoenician alphabet.
                 1400    Oldest record of writing in China, on bones.
                 1270    Syrian scholar compiles an encyclopedia.
                 900     China has an organized postal service for government use.
                 775     Greeks develop a phonetic alphabet, written from left to right.
                 530     In Greece, a library.
                 500     Greek telegraph: trumpets, drums, shouting, beacon fires, smoke
                 signals, mirrors.
                 500     Persia has a form of pony express.
                 500     Chinese scholars write on bamboo with reeds dipped in pigment.
                 400     Chinese write on silk as well as wood, bamboo.
                 @ 300 Alexandria library founded by Ptolemy. At its peak the library at
                 Alexandria held about 700,000 manuscripts and books and was a magnet
                 for scholars from all over the world.
                 200     Books written on parchment and vellum.
                 200     Tipao gazettes are circulated to Chinese officials.



                   59        Julius Caesar orders postings of Acta Diurna.
                   48    Alexandria library burnt during Julius Caesar’s siege of
                   Alexandria.

             AD
                   100    Roman couriers carry government mail across the empire.
                   105    T'sai Lun invents paper.
                   175    Chinese classics are carved in stone which will later be used for
                   rubbings.
                   180    In China, an elementary zoetrope.
                   250    Paper use spreads to central Asia.
                   350    In Egypt, parchment book of Psalms bound in wood covers.
                   450    Ink on seals is stamped on paper in China. This is true printing.
                   600    Books printed in China.
                   700    Sizing agents are used to improve paper quality.
                   751    Paper manufactured outside of China, in Samarkand by Chinese
                   captured in war.
                   765    Picture books printed in Japan.
                   868    The Diamond Sutra, a block-printed book in China.
                   875    Amazed travelers to China see toilet paper.
                   950    Paper use spreads west to Spain.
                   950    Folded books appear in China in place of rolls.
                   950    Bored women in a Chinese harem invent playing cards.

        1000-1499
                   1000   Mayas in Yucatan, Mexico, make writing paper from tree bark.
                   1035   Japanese use waste paper to make new paper.
                   1049   Pi Sheng fabricates movable type, using clay.
                   1116   Chinese sew pages to make stitched books.
                   1140   In Egypt, cloth is stripped from mummies to make paper.
                   1147 Crusader taken prisoner returns with papermaking art, according
                   to a legend.
                   1200   European monasteries communicate by letter system.
                   1200   University of Paris starts messenger service.
                   1241   In Korea, metal type.
                   1282   In Italy, watermarks are added to paper.
                   1298   Marco Polo describes use of paper money in China.
                   1300   Wooden type found in central Asia.



                 1305    Taxis family begins private postal service in Europe.
                 1309    Paper is used in England.
                 1392    Koreans have a type foundry to produce bronze characters.
                 1423    Europeans begin Chinese method of block printing.
                 1450    A few newsletters begin circulating in Europe.
                  1451 Johannes Gutenberg uses a press to print an old German
                  poem.
                 1452    Metal plates are used in printing.
                 1453    Gutenberg prints the 42-line Bible.
                 1464    King of France establishes postal system.
                 1490    Printing of books on paper becomes more common in Europe.
                 1495    A paper mill is established in England.

       1500 – 1599
                 1500    Arithmetic + and - symbols are used in Europe.
                 1510 By now approximately 35,000 books have been printed, some 10
                 million copies.
                 1520    Spectacles balance on the noses of Europe's educated.
                 1533    A postmaster in England.
                 1545    Garamond designs his typeface.
                 1550    Wallpaper brought to Europe from China by traders.
                  1556    The pencil.
                  1560 In Italy, the portable camera obscura allows precise tracing of an
                  image.
                  1560    Legalized, regulated private postal systems grow in Europe.

       1600 – 1699
                 1609    First regularly published newspaper appears in Germany.
                 1627    France introduces registered mail.
                 1631    A French newspaper carries classified ads.
                 1639    In Boston, someone is appointed to deal with foreign mail.
                 1639    First printing press in the American colonies.
                 1640    Kirchner, a German Jesuit, builds a magic lantern.
                 1650    Leipzig has a daily newspaper.
                 1653    Parisians can put their postage-paid letters in mail boxes.
                 1659    Londoners get the penny post.
                 1661    Postal service within the colony of Virginia.
                 1673    Mail is delivered on a route between New York and Boston.



                   1689   Newspapers are printed, at first as unfolded "broadsides."
                   1696   By now England has 100 paper mills.
                   1698   Public library opens in Charleston, S.C.

        1700 - 1799
                   1704   A newspaper in Boston prints advertising.
                   1710   German engraver Le Blon develops three-color printing.
                   1714   Henry Mill receives patent in England for a typewriter.
                   1719   Reaumur proposes using wood to make paper.
                   1725   Scottish printer develops stereotyping system.
                   1727   Schulze begins science of photochemistry.
                   1732   In Philadelphia, Ben Franklin starts a circulating library.
                   1755   Regular mail ship runs between England and the colonies.
                   1770   The eraser.
                   1774   Swedish chemist invents a future paper whitener.
                   1775 Continental Congress authorizes Post Office; Ben Franklin first
                   Postmaster General.
                   1780   Steel pen points begin to replace quill feathers.
                   1784   French book is made without rags, from vegetation.
                   1785   Stagecoaches carry the mail between towns in U.S.
                   1790   In England the hydraulic press is invented.
                   1792   Mechanical semaphore signaler built in France.
                   1792   In Britain, postal money orders.
                   1792   Postal Act gives mail regularity throughout U.S.
                   1794   First letter carriers appear on American city streets.
                   1794   Panorama, forerunner of movie theaters, opens.
                   1794   Signaling system connects Paris and Lille.
                   1798   Senefelder in Germany invents lithography.
                   1799   Robert in France invents a paper-making machine.

        1800 - 1899
                   1800   Paper can be made from vegetable fibers instead of rags.
                   1800   Letter takes 20 days to reach Savannah from Portland, Maine.
                   1801   Semaphore system built along the coast of France.
                   1801   Joseph-Marie Jacquard invents a loom using punch cards.
                   1803   Fourdrinier continuous web paper-making machine.
                   1804   In Germany, lithography is invented.
                   1806   Carbon paper.


                   1807    Camera lucida improves image tracing.
                   1808      Turri of Italy builds a typewriter for a blind contessa.
                    1817 Jöns Berzelius discovers selenium, an element shown in later
                    years to have photovoltaic effects. The material was a by-product of
                    chemical processes carried out in a Swedish factory. At first he thought
                    the material was tellurium “earth”, but later found it to be a new element
                    and named it selenium, from the Greek word “selene”, meaning “moon”.
                   1831 Michael Faraday in Britain and Joseph Henry in the United
                   States experiment with electromagnetism, providing the basis for
                   research into electrical communication.
                   1844    Samuel Morse publicly demonstrates the telegraph for the first
                   time.
                    1862 Italian physicist Abbé Giovanni Caselli is the first to send fixed
                    images over a long distance, using a system he calls the "pantelegraph".
                   1873 Two English telegraph engineers, May and Smith, experiment
                   with selenium and light, giving inventors a way of transforming images
                   into electrical signals.
                   1880 George Carey builds a rudimentary system using dozens of tiny
                   light-sensitive selenium cells.
                   1884 In Germany, Paul Nipkow patents the first mechanical television
                   scanning system, consisting of a disc with a spiral of holes. As the disc
                   spins, the eye blurs all the points together to re-create the full picture.
                   1895 Italian physicist Guglielmo Marconi develops radio telegraphy
                   and transmits Morse code by wireless for the first time.
                   1897 Karl Ferdinand Braun, a German physicist, invents the first
                   cathode-ray tube, the basis of all modern television cameras and
                   receivers.



       1900 – 1909

            1900
                   Kodak Brownie makes photography cheaper and simpler.
                   Pupin's loading coil reduces telephone voice distortion.

            1901
                   Sale of phonograph disc made of hard resinous shellac
                   First electric typewriter, the Blickensderfer.
                   Marconi sends a radio signal across the Atlantic.

            1902
                   Germany's Zeiss invents the four-element Tessar camera lens.
                   Etched zinc engravings start to replace hand-cut wood blocks.



                    U.S. Navy installs radio telephones aboard ships.
                    Photoelectric scanning can send and receive a picture.
                    Trans-Pacific telephone cable connects Canada and Australia.

             1903
                    Technical improvements in radio, telegraph, phonograph, movies and
                    printing.
                    London Daily Mirror illustrates only with photographs.
                    A telephone answering machine is invented.
                    Fleming invents the diode to improve radio communication.
                    Offset lithography becomes a commercial reality.
                    A photograph is transmitted by wire in Germany.
                    Hine photographs America's underclass.
                    The Great Train Robbery creates demand for fiction movies.
                    The comic book.
                    The double-sided phonograph disc.

             1905
                    In Pittsburgh the first nickelodeon opens.
                    Photography, printing, and post combine in the year's craze, picture
                    postcards.
                    In France, Pathe colors black and white films by machine.
                    In New Zealand, the postage meter is introduced.
                    The Yellow Pages.
                    The juke box; 24 choices.

             1906
                    A program of voice and music is broadcast in the U.S.
                    Lee de Forest invents the three-element vacuum tube.
                    Dunwoody and Pickard build a crystal-and-cat's-whisker radio.
                    An animated cartoon film is produced.
                    Fessenden plays violin for startled ship wireless operators.
                    An experimental sound-on-film motion picture.
                    Strowger invents automatic dial telephone switching.

             1907
                    Bell and Howell develop a film projection system.
                    Lumiere brothers invent still color photography process.



                   DeForest begins regular radio music broadcasts.
                   In Russia, Boris Rosing develops theory of television and transmits
                   black-and-white silhouettes of simple shapes, using a mechanical mirror-
                   drum apparatus as a camera and a cathode-ray tube as a receiver.

            1908
                   Campbell-Swinton, a Scottish electrical engineer, publishes proposals
                   about an all-electronic television system that uses a cathode-ray tube for
                   both receiver and camera.
                   In U.S., Smith introduces true color motion pictures.

            1909
                   Radio distress signal saves 1,700 lives after ships collide.
                   First broadcast talk; the subject: women's suffrage.



       1910-1919

            1910
                   Sweden's Elkstrom invents "flying spot" camera light beam.

            1911
                   Efforts are made to bring sound to motion pictures.
                   Rotogravure aids magazine production of photos.
                   "Postal savings system" inaugurated.

            1912
                   U.S. passes law to control radio stations.
                   Motorized movie cameras replace hand cranks.
                   Feedback and heterodyne systems usher in modern radio.
                   First mail carried by airplane.

            1913
                   The portable phonograph is manufactured.
                   Type composing machines roll out of the factory.

            1914
                   A better triode vacuum tube improves radio reception.
                   Radio message is sent to an airplane.
                   In Germany, the 35mm still camera, a Leica.
                   In the U.S., Goddard begins rocket experiments.



                    First transcontinental telephone call.

             1915
                    Wireless radio service connects U.S. and Japan.
                    Radio-telephone carries speech across the Atlantic.
                    Birth of a Nation sets new movie standards.
                    The electric loudspeaker.

             1916
                    David Sarnoff envisions radio as "a household utility."
                    Cameras get optical rangefinders.
                    Radios get tuners.

             1917
                    Photocomposition begins.
                    Frank Conrad builds a radio station, later KDKA.
                    Condenser microphone aids broadcasting, recording.

             1918
                    First regular airmail service: Washington, D.C. to New York.

             1919
                    The Radio Corporation of America (RCA) is formed.
                    People can now dial telephone numbers themselves.
                    Shortwave radio is invented.
                    Flip-flop circuit invented; will help computers to count.

        1920-1929

             1920
                    The first broadcasting stations are opened.
                    First cross-country airmail flight in the U.S.
                    Sound recording is done electrically.
                    Post Office accepts the postage meter.
                    KDKA in Pittsburgh broadcasts first scheduled programs.

             1921
                    Quartz crystals keep radio signals from wandering.
                    The word "robot" enters the language.
                    Western Union begins wirephoto service.


            1922
                   A commercial is broadcast, $100 for ten minutes.
                   Technicolor introduces two-color process for movies.
                   Germany's UFA produces a film with an optical sound track.
                   First 3-D movie requires spectacles with one red and one green lens.
                   Singers desert phonograph horn mouths for acoustic studios.
                   Nanook of the North, the first documentary.

            1923
                   Vladimir Zworykin patents the "Iconoscope", an electronic camera tube.
                   By the end of the year he has also produced a picture display tube, the
                   "Kinescope".
                   People on one ship can talk to people on another.
                   Ribbon microphone becomes the studio standard.
                   A picture, broken into dots, is sent by wire.
                   16 mm nonflammable film makes its debut.
                   Kodak introduces home movie equipment.
                   Neon advertising signs.
                   The A.C. Nielsen Company is founded. Nielsen's market research is
                   soon being used by companies deciding where to advertise on radio.

            1924
                   John Logie Baird is the first to transmit a moving silhouette image, using
                   a mechanical system based on Paul Nipkow's model.
                   Low tech achievement: notebooks get spiral bindings.
                   The Eveready Hour is the first sponsored radio program.
                   At KDKA, Conrad sets up a short-wave radio transmitter.
                   Daily coast-to-coast air mail service.
                   Two and a half million radio sets in the U.S.

            1925
                   John Logie Baird obtains the first actual television picture.
                   Vladimir Zworykin takes out the first patent for colour television.
                   The Leica 35 mm camera sets a new standard.
                   Commercial picture facsimile radio service across the U.S.
                   All-electric phonograph is built.
                   A moving image, the blades of a model windmill, is telecast.
                   From France, a wide-screen film.




             1926
                    John Logie Baird gives the first successful public demonstration of
                    mechanical television at his laboratory in London.
                    The National Broadcasting Company (NBC) is formed by Westinghouse,
                    General Electric and RCA.
                    Commercial picture facsimile radio service across the Atlantic.
                    Some radios get automatic volume control, a mixed blessing.
                    The Book-of-the-Month Club.
                    In U.S., first 16mm movie is shot.
                    Goddard launches liquid-fuel rocket.
                    Permanent radio network, NBC, is formed.
                    Bell Telephone Labs transmit film by television.

             1927
                    The British Broadcasting Corporation is founded.
                    Columbia Phonographic Broadcasting System, later CBS, is formed.
                    Pictures of Herbert Hoover, U.S. Secretary of Commerce, are
                    transmitted 200 miles from Washington D.C. to New York, in the world's
                    first televised speech and first long-distance television transmission.
                    NBC begins two radio networks.
                    Farnsworth assembles a complete electronic TV system.
                    Jolson's "The Jazz Singer" is the first popular "talkie."
                    Movietone offers newsreels in sound.
                    U.S. Radio Act declares public ownership of the airwaves.
                    Technicolor.
                    Negative feedback makes hi-fi possible.

             1928
                    Station W2XBS, RCA's first television station, is established in New York
                    City, creating television's first star, Felix the Cat, the original model of
                    which is featured in Watching TV.
                    Later in the year, the world's first television drama, The Queen's
                    Messenger, is broadcast using mechanical scanning.
                    John Logie Baird transmits images of London to New York via
                    shortwave.
                    The teletype machine makes its debut.
                    Television sets are put in three homes, programming begins.
                    Baird invents a video disc to record television.
                    In an experiment, television crosses the Atlantic.




                   In Schenectady, N.Y., the first scheduled television broadcasts.
                   Steamboat Willie introduces Mickey Mouse.
                   A motion picture is shown in color.
                   Times Square gets moving headlines in electric lights.
                   IBM adopts the 80-column punched card.

            1929
                   In London, John Logie Baird opens the world's first television studio, but
                   is still able to produce only crude and jerky images. However, because
                   Baird's television pictures carry so little visual information, it is possible
                   to broadcast them from ordinary medium-wave radio transmitters.
                   Experiments begin on electronic color television.
                   Telegraph ticker sends 500 characters per minute.
                   Ship passengers can phone relatives ashore.
                   Brokers watch stock prices on an automated electric board.
                   Something else new: the car radio.
                   In Germany, magnetic sound recording on plastic tape.
                   Air mail flown from Miami to South America.
                   Bell Lab transmits stills in color by mechanical scanning.
                   Zworykin demonstrates cathode-ray tube "kinescope" receiver, 60 scan
                   lines.

       1930-1939

            1930
                   The first commercial is televised by Charles Jenkins, who is fined by the
                   U.S. Federal Radio Commission.
                   The BBC begins regular television transmissions.
                   Photo flashbulbs replace dangerous flash powder.
                   "Golden Age" of radio begins in U.S.
                   Lowell Thomas begins first regular network newscast.
                   TVs based on British mechanical system roll off factory line.
                   Bush's differential analyzer introduces the computer.
                   AT&T tries the picture telephone.

            1931
                   Owned jointly by CKAC and La Presse, Canada's first television station,
                   VE9EC, starts broadcasting in Montreal. Ted Rogers, Sr. receives a
                   licence to broadcast experimental television from his Toronto radio
                   station. Also this year, RCA begins experimental electronic
                   transmissions from the Empire State Building.



                    Commercial teletype service.
                    Electronic TV broadcasts in Los Angeles and Moscow.
                    Exposure meters go on sale to photographers.
                    NBC experimentally doubles transmission to 120-line screen.

             1932
                    Parliament creates the Canadian Radio Broadcasting Commission,
                    superseded by the CBC in 1936.
                    Disney adopts a three-color Technicolor process for cartoons.
                    Kodak introduces 8 mm film for home movies.
                    The "Times" of London uses its new Times Roman typeface.
                    Stereophonic sound in a motion picture, "Napoleon."
                    Zoom lens is invented, but a practical model is 21 years off.
                    The light meter.
                    NBC and CBS allow prices to be mentioned in commercials.

             1933
                    Western Television Limited's mechanical television system is toured and
                    demonstrated at Eaton's stores in Toronto, Montreal and Winnipeg.
                    Armstrong invents FM, but its real future is 20 years off.
                    Multiple-flash sports photography.
                    Singing telegrams.
                    Phonograph records go stereo.

             1934
                    Drive-in movie theater opens in New Jersey.
                    Associated Press starts wirephoto service.
                    In Germany, a mobile television truck roams the streets.
                    In Scotland, teletypesetting sets type by phone line.
                    Three-color Technicolor used in live action film.
                    Communications Act of 1934 creates FCC.
                    Half of the homes in the U.S. have radios.
                    Mutual Radio Network begins operations.

             1935
                    William Hoyt Peck of Peck Television of Canada uses a transmitter in
                    Montreal during five weeks of experimental mechanical broadcasts.
                    Germany opens the world's first three-day-a-week filmed television
                    service. France begins broadcasting its first regular transmissions from
                    the top of the Eiffel Tower.


                   German single lens reflex roll film camera synchronized for flash bulbs.
                   Also in Germany, audio tape recorders go on sale.
                   IBM's electric typewriter comes off the assembly line.
                   The Penguin paperback book sells for the price of 10 cigarettes.
                   All-electronic VHF television comes out of the lab.
                   Eastman-Kodak develops Kodachrome color film.
                   Nielsen's Audimeter tracks radio audiences.

            1936
                   There are about 2,000 television sets in use around the world. The BBC
                   starts the world's first public high-definition/electronic television service
                   in London.
                   Berlin Olympics are televised closed circuit.
                   Bell Labs invents a voice recognition machine.
                   Kodachrome film sharpens color photography.
                   Co-axial cable connects New York to Philadelphia.
                   Alan Turing's "On Computable Numbers" describes a general purpose
                   computer.

            1937
                   Stibitz of Bell Labs invents the electrical digital calculator.
                   Pulse Code Modulation points the way to digital transmission.
                   NBC sends mobile TV truck onto New York streets.
                   A recording, the Hindenburg crash, is broadcast coast to coast.
                   Carlson invents the photocopier.
                   Snow White is the first feature-length cartoon.

            1938
                   Allen B. DuMont forms the DuMont television network to compete with
                   RCA. Also this year, DuMont manufactures the first all-electronic
                   television set for sale to the North American public. One of these early
                   DuMont television sets is featured in Watching TV.
                   Strobe lighting.
                   Baird demonstrates live TV in color.
                   Broadcasts can be taped and edited.
                   Two brothers named Biro invent the ballpoint pen in Argentina.
                   CBS "World News Roundup" ushers in modern newscasting.
                   DuMont markets electronic television receiver for the home.
                   Radio drama "War of the Worlds" causes national panic.



             1939
                    Because of the outbreak of war, the BBC abruptly stops broadcasting in
                    the middle of a Mickey Mouse cartoon on September 1, resuming at that
                    same point when peace returns in 1945. The first major display of
                    electronic television in Canada takes place at the Canadian National
                    Exhibition in Toronto. Baseball is televised for the first time.
                    Mechanical scanning system abandoned.
                    New York World's Fair shows television to public.
                    Regular TV broadcasts begin in USA.
                    Air mail service across the Atlantic.
                    Many firsts: sports coverage, variety show, feature film, etc.

        1940-1949

             1940
                    Dr. Peter Goldmark of CBS introduces a 343-line colour television
                    system for daily transmission, using a disc of three filters (red, green and
                    blue), rotated in front of the camera tube.
                    Fantasia introduces stereo sound to American public.

             1941
                    North America's current 525-line/30-pictures-a-second standard, known
                    as the NTSC (National Television System Committee) standard, is
                    adopted.
                    Stereo is installed in a Moscow movie theater.
                    FCC sets U.S. TV standards.
                    CBS and NBC start commercial transmission; WW II intervenes.
                    Goldmark at CBS experiments with electronic color TV.
                    Microwave transmission.
                    Zuse's Z3 is the first computer controlled by software.
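The 525-line/30-frame figures fix the line frequency of the monochrome system, which Part 5 returns to. A quick arithmetic check (a sketch; line rate = lines per frame × frames per second):

```python
# NTSC monochrome standard (1941): 525 lines scanned 30 times per second.
lines_per_frame = 525
frames_per_second = 30

line_rate_hz = lines_per_frame * frames_per_second
print(line_rate_hz)  # 15750 Hz line frequency
```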

             1942
                    Atanasoff, Berry build the first electronic digital computer.
                    Kodacolor process produces the color print.

             1943
                    Repeaters on phone lines quiet long distance call noise.

             1944
                    Harvard's Mark I, an early large-scale digital computer, put in service.
                    IBM offers a typewriter with proportional spacing.
                    NBC presents first U.S. network newscast, a curiosity.


            1945
                   The BBC resumes regular television transmission, at the same time of
                   day and from the exact point in the programme where it stopped in 1939.
                   Clarke envisions geo-synchronous communication satellites.
                   It is estimated that 14,000 products are made from paper.

            1946
                   NBC and CBS demonstrate rival colour systems. The world's first
                   television broadcast via coaxial cable is transmitted from New York to
                   Washington D.C.
                   Jukeboxes go into mass production.
                   Pennsylvania's ENIAC heralds the modern electronic computer.
                   Automobile radio telephones connect to telephone network.
                   French engineers build a phototypesetting machine.

            1947
                   A permanent network linking four eastern U.S. stations is established by
                   NBC. On June 3, Canadian General Electric engineers in Windsor
                   receive the first official electronic television broadcast in Canada,
                   transmitted from the new U.S. station WWDT in Detroit. This year also
                   sees the development of the transistor, on which solid-state electronics
                   are based.
                   Hungarian engineer in England invents holography.
                   The transistor is invented, will replace vacuum tubes.
                   The zoom lens covers baseball's world series for TV.

            1948
                   Television manufacturing begins in Canada. The television audience
                   increases by 4,000 percent this year, due to a jump in the number of
                   cities with television stations and to the fact that one million homes in the
                   U.S. now have television sets. The U.S. Federal Communications
                   Commission puts a freeze on new television channel allocations until the
                   problem of station-to-station interference is resolved.
                   The LP record arrives on a vinyl disc.
                   Shannon and Weaver of Bell Labs propound information theory.
                   Land's Polaroid camera prints pictures in a minute.
                   Hollywood switches to nonflammable film.
                   Public clamor for television begins; FCC freezes new licenses.
                   Airplane re-broadcasts TV signal across nine states.





             1949
                    The first Emmy Awards are presented, and the Canadian government
                    establishes an interim policy for television, announcing loans for CBC
                    television development.
                    An RCA research team in the U.S. develops the Shadow Mask picture
                    tube, permitting a fully electronic colour display.
                    Network TV in U.S.
                    RCA offers the 45 rpm record.
                    Community Antenna Television, forerunner to cable.
                    Whirlwind at MIT is the first real time computer.
                    Magnetic core computer memory is invented.

        1950-1959

             1950
                    Cable TV begins in the U.S., and warnings begin to be issued on the
                    impact of violent programming on children.
                    European broadcasters fix a common picture standard of 625 lines. (By
                    the 1970s, virtually all nations in the world used 625-line service, except
                    for the U.S., Japan, and some others which adopted the 525-line U.S.
                    standard.)
                    Over 100 television stations are in operation in the U.S.
                    Regular USA color television transmission.
                    Vidicon camera tube improves television picture.
                    Changeable typewriter typefaces in use.
                    A.C. Nielsen's Audimeters track viewer watching.
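The 625-line/25-frame European standard implies a line frequency very close to the U.S. system's. A quick check (a sketch, using line rate = lines per frame × frames per second):

```python
# European 625-line standard (1950): 625 lines at 25 frames per second.
lines_per_frame = 625
frames_per_second = 25

line_rate_hz = lines_per_frame * frames_per_second
print(line_rate_hz)  # 15625 Hz
```

The result, 15,625 Hz, is within 1% of the 15,750 Hz line rate of the 525-line/30-frame U.S. system.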

             1951
                    The first colour television transmissions begin in the U.S. this year.
                    Unfortunately, for technical reasons, the several million existing black-
                    and-white receivers in America cannot pick up the colour programmes,
                    even in black-and-white, and colour sets go blank during television's
                    many hours of black-and-white broadcasting. The experiment is a failure
                    and colour transmissions are stopped.
                    The U.S. sees its first coast-to-coast transmission in a broadcast of the
                    Japanese Peace Conference in San Francisco.
                    One and a half million TV sets in U.S., a tenfold jump in one year.
                    Cinerama will briefly dazzle with a wide, curved screen and three
                    projectors.
                    Computers are sold commercially.
                    Still cameras get built-in flash units.
                    Coaxial cable reaches coast to coast.



            1952
                   Cable TV systems begin in Canada. On September 6, CBC Television
                   broadcasts from its Montreal station; on September 8, CBC broadcasts
                   from the Toronto station.
                   The first political ads appear on U.S. television networks, when
                   Democrats buy a half-hour slot for Adlai Stevenson. Stevenson is
                   bombarded with hate mail for interfering with a broadcast of I Love Lucy.
                   Eisenhower, Stevenson's political opponent, buys only 20-second
                   commercial spots, and wins the election.
                   3-D movies offer thrills to the audience.
                   Bing Crosby's company, Crosby Enterprises, tests video recording.
                   Wide-screen Cinerama appears; other systems soon follow.
                   Sony offers a miniature transistor radio.
                   EDVAC takes computer technology a giant leap forward.
                   Univac projects the winner of the presidential election on CBS.
                   Telephone area codes.
                   Zenith proposes pay-TV system using punched cards.

            1953
                   A microwave network connects CBC television stations in Montreal,
                   Ottawa and Toronto.
                   The first private television stations begin operation in Sudbury and
                   London.
                   Queen Elizabeth's coronation is televised.
                   CBC beats U.S. competitors to the punch by flying footage across the
                   Atlantic.
                   In the USA TV Guide is launched.
                   NTSC colour standard adopted and the USA begins colour transmission
                   again, this time successfully.
                   Japanese television goes on the air for the first time.
                   CATV system uses microwave to bring in distant signals.

            1954
                   Magazines now routinely offer the homemaker tips on arranging living-
                   room furniture for optimal television-viewing pleasure.
                   Radio sets in the world now outnumber newspapers printed daily.
                   Regular colour TV broadcasts established.
                   Sporting events are broadcast live in colour.




                    Transistor radios are sold.

             1955
                    Tests begin to communicate via fiber optics.
                    Music is recorded on tape in stereo.

             1956
                    Ampex Corporation demonstrates videotape recording, initially used only
                    by television stations.
                    Henri de France develops the SECAM (sequential colour with memory)
                    procedure. It is adopted in France, and the first SECAM colour
                    transmission between Paris and London takes place in 1960.
                    Several Louisiana congressmen promote a bill to ban all television
                    programmes that portray blacks and whites together in a sympathetic
                    light.
                    Bell tests the picture phone.
                    First transatlantic telephone calls by cable.

             1957
                    The Soviet Union launches the world's first Earth satellite, Sputnik.
                    Soviet Union's Sputnik sends signals from space.
                    FORTRAN becomes the first high-level language.
                    A surgical operation is televised.
                    First book to be entirely phototypeset is offset printed.

             1958
                    The CBC's microwave network is extended from Victoria, B.C. to Halifax
                    and Sydney, Nova Scotia, to become the longest television network in
                    the world.
                    Pope Pius XII declares Saint Clare of Assisi the patron saint of
                    television. Her placement on the television set is said to guarantee good
                    reception.
                    Videotape delivers colour pictures.
                    Stereo recording is introduced.
                    Data moves over regular phone circuits.
                    Broadcast bounced off rocket, pre-satellite communication.
                    The laser is introduced.
                    Cable carries FM radio stations.





            1959
                   CBC Radio-Canada Montreal producers go on strike.
                   Bonanza debuts, starring Canadian actor Lorne Greene.
                   Local announcements, weather data and local ads go on cable.
                   The microchip is invented.
                   Xerox manufactures a plain paper copier.
                   Bell Labs experiments with artificial intelligence.
                   French SECAM and German PAL systems introduced.

       1960-1969

            1960
                   The Nixon-Kennedy debates are televised, marking the first network use
                   of the split screen. Kennedy performs better on television than Nixon,
                   and it is believed that television helps Kennedy win the election.
                   Sony develops the first all-transistor television receiver, making
                   televisions lighter and more portable.
                   Ninety percent of American homes now own television sets, and
                   America becomes the world's first "television society". There are now
                   about 100 million television sets in operation worldwide.
                   Echo I, a U.S. balloon in orbit, reflects radio signals to Earth.
                   In Rhode Island, an electronic, automated post office.
                   A movie gets Smell-O-Vision, but the public just sniffs.
                   Zenith tests subscription TV; unsuccessful.

            1961
                   The Canadian Television Network (CTV), a privately owned network,
                   begins operations.
                   The beginning of the Dodd hearings in the U.S., which examined the
                   television industry's "rampant and opportunistic use of violence".
                   Boxing match test shows potential of pay-TV.
                   FCC approves FM stereo broadcasting; spurs FM development.
                   Bell Labs tests communication by light waves.
                   IBM introduces the "golf ball" typewriter.
                   Letraset makes headlines simple.
                   The time-sharing computer is developed.

            1962
                   The Telstar television satellite is launched by the U.S., and starts
                   relaying transatlantic television shortly after its launch. The first
                   programme shows scenes of Paris.



                    A survey indicates that 90 percent of American households have
                    television sets; 13 percent have more than one.
                    Cable companies import distant signals.
                    FCC requires UHF tuners on television sets.
                    The minicomputer arrives.
                    Comsat created to launch, operate global system.

             1963

                    From Holland comes the audio cassette.
                    Zip codes introduced.
                    CBS and NBC TV newscasts expand to 30 minutes in color.
                    PDP-8 becomes the first popular minicomputer.
                    Polaroid camera instant photography adds color.
                    Communications satellite is placed in geo-synchronous orbit.
                    On November 22, regular television programming is suspended
                    following news of the Kennedy assassination.
                    On November 24, live on television, Jack Ruby murders Lee Harvey
                    Oswald, Kennedy's suspected assassin. Kennedy's funeral is televised
                    the following day. 96 per cent of all American television sets are on for
                    an average 31 hours out of 72 during this period — watching, many say,
                    simply to share in the crisis.

             1964
                    The Beatles appear for the first time on The Ed Sullivan Show.
                    Procter and Gamble, the largest American advertiser, refuses to
                    advertise on any show that gives "offense, either directly or by inference,
                    to any organized minority group, lodge or other organizations,
                    institutions, residents of any State or section of the country or a
                    commercial organization."
                    Olympic Games in Tokyo telecast live globally by satellite.
                    Touch Tone telephones and Picturephone service.
                    From Japan, the videotape recorder for home use.
                    Russian scientists bounce a signal off Jupiter.
                    Intelsat, international satellite organization, is formed.

             1965
                    The Vietnam War becomes the first war to be televised, coinciding with
                    CBS's first colour transmissions and the first Asia-America satellite link.
                    Protesters against the war adopt the television-age slogan, The whole
                    world is watching.




                   Sony introduces the CV-2000, a small home video recorder.
                   Electronic phone exchange gives customers extra services.
                   Satellites begin domestic TV distribution in Soviet Union.
                   Computer time-sharing becomes popular.
                   Color news film.
                   Communications satellite Early Bird (Intelsat I) orbits above the Atlantic.
                   Kodak offers Super 8 film for home movies.
                   Cartridge audio tapes go on sale for a few years.
                   Most television broadcasts in the USA are in colour.
                   FCC rules bring structure to cable television.
                   Solid-state equipment spreads through the cable industry.

            1966
                   Colour television signals are transmitted by Canadian stations for the
                   first time.
                   Linotron can produce 1,000 characters per second.
                   Fiber optic cable multiplies communication channels.
                   Xerox sells the Telecopier, a fax machine.

            1967
                   Sony introduces the first lightweight, portable and cheap video recorder,
                   known as the "portapak". The portapak is almost as easy to operate as a
                   tape-recorder and leads to an explosion in "do-it-yourself" television,
                   revolutionizing the medium.
                   Also this year, the FCC orders that cigarette ads on television, on radio
                   and in print, carry warnings about the health dangers of smoking.
                   Dolby introduces a system that eliminates audio hiss.
                   Computers get the light pen.
                   Pre-recorded movies on videotape sold for home TV sets.
                   Cordless telephones get some calls.
                   Approx. 200 million telephones in the world, half in U.S.

            1968
                   Sony develops the Trinitron tube, revolutionizing the picture quality of
                   colour television.
                   World television ownership nears 200 million, with 78 million sets in the
                   U.S. alone. The U.S. television industry now has annual revenues of
                   about $2 billion and derives heavy support from tobacco advertisers.
                   FCC approves non-Bell equipment attached to phone system.
                   The RAM microchip reaches the market.


21                                                    Sony Broadcast & Professional Europe
Part 2 – The history of television

             1969
                    On July 20, 1969, the first television transmission from the moon is
                    viewed by 600 million television viewers around the world.
                    Sesame Street debuts on American Public Television, and begins to
                    revolutionize adult attitudes about what children are capable of learning.
                    Astronauts send live photographs from the moon.

        1970-1979

             1970
                    Postal Reform Bill makes U.S. Postal Service a government corporation.
                    In Germany, a videodisc is demonstrated.
                    U.S. Post Office and Western Union offer Mailgrams.
                    The computer floppy disc is an instant success.

             1971
                    Canada's Anik I, the first domestic geo-synchronous communications
                    satellite, is launched, capable of relaying 12 television programmes
                    simultaneously.
                    India has a single television station in New Delhi, able to reach only 20
                    miles outside the city.
                    South Africa has no television at all.
                    Intel builds the microprocessor, "a computer on a chip."
                    Wang 1200 is the first word processor.

             1972
                    The Munich Olympics are broadcast live, drawing an estimated 450
                    million viewers worldwide. When Israeli athletes are kidnapped by
                    Palestinian terrorists during the games, coverage of the games cuts
                    back and forth between shots of the terrorists and footage of Olympic
                    events.
                    The American-conceived Intelsat system is launched this year,
                    becoming a network and controlling body for the world's communications
                    satellite system.
                    HBO starts pay-TV service for cable.
                    Sony introduces 3/4 inch "U-Matic" cassette VCR.
                    New FCC rules lead to community access channels.
                    Polaroid camera can focus by itself.
                    Digital television comes out of the lab.
                    The BBC offers "Ceefax," a teletext information service.
                    "Open Skies": Any U.S. firm can have communication satellites.
                    Landsat I, "eye-in-the-sky" satellite, is launched.


                   "Pong" starts the video game craze.

            1973
                   Ninety-six countries now have regular television service.
                   Watergate unfolds on the air in the U.S. and ends the following year with
                   Nixon's resignation.
                   U.S. producers sell nearly $200 million dollars worth of programmes
                   overseas, more than the rest of the world combined.
                   The microcomputer is born in France.
                   IBM's Selectric typewriter is now "self-correcting."
                   The term Electronic News Gathering, or ENG is introduced.
                   "Teacher-in-the-Sky" satellite begins educational mission.

            1975
                   A study indicates that the average American child during this decade will
                   have spent 10,800 hours in school by the time he or she is 18, but will
                   have seen an average 20,000 hours of television. Studies also estimate
                   that, by the time he/she is 75, the average American male will have
                   spent nine entire years of his life watching television; the average British
                   male will have spent eight years watching.
                   The microcomputer, in kit form, reaches the U.S. home market.
                   Sony's Betamax and JVC's VHS battle for public acceptance.
                   "Thrilla in Manila"; substantial original cable programming.

            1976
                   The Olympics, broadcast from Montreal, draw an estimated 1 billion
                   viewers worldwide.
                    Apple I computer introduced.
                   Ted Turner delivers programming nationwide by satellite.
                   Still cameras are controlled by microprocessors.

            1977
                   South Africans see television for the first time on May 10, as test
                   transmissions begin from the state-backed South Africa Broadcast Co.
                   The Pretoria government has yielded to public pressure after years of
                   banning television as being morally corrupting. Half the broadcasts are
                   in English, half in Afrikaans.
                   Columbus, Ohio, residents try 2-way cable experiment, QUBE.

            1978
                   Ninety-eight percent of American households have television sets, up
                   from nine percent in 1950. Seventy-eight percent have colour
                   televisions, up from 3.1 percent in 1964.


                    From Konica, the point-and-shoot camera.
                    PBS goes to satellite for delivery, abandoning telephone lines.
                    Electronic typewriters go on sale.

             1979
                    There are now 300 million television sets in operation worldwide.
                    Flat-screen pocket televisions, with liquid crystal display screens, are
                    patented by the Japanese firm Matsushita. The pocket television is no
                    bigger than a paperback book.
                    Speech recognition machine has a vocabulary of 1,000 words.
                    From Holland comes the digital videodisc read by laser.
                    In Japan, first cellular phone network.
                    Computerized laser printing is a boon to Chinese printers.

        1980-1989

             1980
                    During the 1980s, in the U.S. and Germany, laws and policies are
                    enacted to preserve a person's right to television in the event of financial
                    setback. Later in the year, the U.S. Cable News Network (CNN) goes on
                    the air in the U.S.
                    India launches its national television network.
                    Sony Walkman tape player starts a fad.
                    In France, a holographic film shows a gull flying.
                    Phototypesetting can be done by laser.
                    Intelsat V relays 12,000 phone calls, 2 color TV channels.
                    Public international electronic fax service, Intelpost, begins.
                    Atlanta gets first fiber optics system.
                    CNN 24-hour news channel started.
                    Addressable converters pinpoint individual homes.

             1981
                    450,000 transistors fit on a silicon chip 1/4-inch square.
                    Hologram technology improves, now in video games.
                    The IBM PC.
                    The laptop computer is introduced.
                    The first mouse pointing device.

             1982
                    From Japan, a camera with electronic picture storage, no film.



                   USA Today type set in regional plants by satellite command.
                   Kodak camera uses film on a disc cassette.

            1983
                   Cellular phone network starts in U.S.
                   Lasers and plastics improve newspaper production.
                   Computer chip holds 288,000 bits of memory.
                   Time names the computer as "Man of the Year."
                   ZIP + 4, expanded 9-digit ZIP code is introduced.
                   AT&T forced to break up; 7 Baby Bells are born.
                   American videotext service starts; fails in three years.

            1984
                   Trucks used for SNG transmission.
                   Experimental machine can translate Japanese into English.
                   Portable compact disc player arrives.
                   National Geographic puts a hologram on its cover.
                   A television set can be worn on the wrist.
                   Japanese introduce high quality facsimile.
                   Camera and tape deck combine in the camcorder.
                   Apple Macintosh, IBM PC AT.
                   The 32-bit microprocessor.
                   The one megabyte memory chip.
                   Conus relays news feeds for stations on Ku-Band satellites.

            1985
                   Digital image processing for editing stills bit by bit.
                   CD-ROM can put 270,000 pages of text on a CD record.
                   Cellular telephones go into cars.
                   Synthetic text-to-speech computer pronounces 20,000 words.
                   Picture, broken into dots, can be transmitted and recreated.
                   USA TV networks begin satellite distribution to affiliates.
                   At Expo, a Sony TV screen measures 40x25 meters.
                   Sony builds a radio the size of a credit card.
                   In Japan, 3-D television; no spectacles needed.
                   Pay-per-view channels open for business.





             1986
                    HBO scrambles its signals.
                    Cable shopping networks.

             1987
                    Half of all U.S. homes with TV are on cable.
                    American government deregulates cable industry.

             1988
                    Government brochure mailed to 107 million addresses.

             1989
                    Tiananmen Square demonstrates power of media to inform the world.
                    Pacific Link fiber optic cable opens, can carry 40,000 phone calls.



        1990-2000

             1990
                    Flyaway SNG aids foreign reportage.
                    IBM sells Selectric, a sign of the typewriter's passing.
                    Most 2-inch videotape machines are also gone.
                    Videodisc returns in a new laser form.

             1991
                    During the Gulf War, CNN coverage of the conflict is so extensive and
                    wide-ranging that it is commonly remarked, only half in jest, that Saddam
                    Hussein is watching CNN for his military intelligence, instead of relying
                    on his own information-gathering methods.
                    Beauty and the Beast, a cartoon, Oscar nominee as best picture.
                    Denver viewers can order movies at home from list of more than 1,000
                    titles.
                    Moviegoers astonished by computer morphing in Terminator 2.
                    Baby Bells get government permission to offer information services.
                    Collapse of Soviet anti-Gorbachev plot aided by global system called the
                    Internet.
                    More than 4 billion cassette tape rentals in U.S. alone.
                    3 out of 4 U.S. homes own VCRs; fastest selling domestic appliance in
                    history.





            1992
                   Cable TV revenues reach $22 billion.
                   At least 50 U.S. cities have competing cable services.
                   After President Bush speaks, 25 million viewers try to phone in their
                   opinions.

            1993
                   A TV Guide poll finds that one in four Americans would not give up
                   television even for a million dollars.
                   Dinosaurs roam the earth in Jurassic Park.
                   Unfounded rumors fly that cellphones cause brain cancer.
                   Demand begins for "V-chip" to block out violent television programs.
                   1 in 3 Americans does some work at home instead of driving to work.

            1994
                   After 25 years, U.S. government privatizes Internet management.
                   Rolling Stones concert goes to 200 workstations worldwide on Internet
                   "MBone."
                   To reduce Western influence, a dozen nations ban or restrict satellite
                   dishes.
                   Prodigy bulletin board fields 12,000 messages in one after L.A. quake.

            1995
                   CD-ROM disk can carry a full-length feature film. (CD-Video)
                   Sony demonstrates flat TV set.
                   DBS feeds are offered nationwide.
                   Denmark announces plan to put much of the nation on-line within 5
                   years.
                   Major U.S. dailies create national on-line newspaper network.
                   Lamar Alexander chooses the Internet to announce presidential
                   candidacy.
                   There are over a billion television sets in operation around the world.

            2002
                   Bibliotheca Alexandrina is due to open on April 23. This is intended as
                   the modern equivalent to the ancient Alexandria Library which burnt
                   down about 1600 years ago with great loss of information and human
                   understanding.










Part 3                                 Image perception & colour
The human eye
       Evolutionary advantage
                  The human eye is a marvel of evolution. Mapping the
                  evolutionary history of the eye is difficult, but it almost certainly
                  started with some ancient creature that possessed a group of especially
                  light-sensitive cells on the surface of its skin.
                  Being able to sense a possible attack, the presence of food or a
                  potential mate must have conferred an enormous advantage, so the eye
                  probably evolved quickly from one generation of creature to the next.
                  It is perhaps easy to see how the light-sensitive cells became better,
                  and how the ability to see colour, and then a wide spectrum of colours,
                  must have given creatures a clear advantage over those that could not.
                  Exactly how the lens evolved is less clear. However it started, the lens
                  obviously gave the creatures that possessed it the ability to see with
                  greater clarity. It is also not clear why certain evolutionary paths
                  favoured the multi-lens compound eye, and why others favoured the
                  single-lens design.
                 Evolution has not been entirely favourable, especially to humans. The
                 human eye is not perfect. It has a few drawbacks, most of which we
                 have adapted to. Some of these shortcomings actually make it easier to
                 design television, as we will see later.

       What is the eye
                  Most of us have two working eyes. Sight is more important to humans
                  than any of our other senses; losing the use of one or both eyes is
                  among the most disabling conditions a person can have.
                 The eye grows from rudimentary skin cells before we are born. Neural
                 connections are made directly to the brain early on in development, and
                 what results is one of the most complex and wonderful structures in the
                 human body.

       The eye’s structure
                 As far as broadcast video is concerned most of the complexity of the
                 human eye is irrelevant. However there are a few features and facts
                 about the eye that are interesting.
                  The human eye approximates to a sphere. In fact, for somebody with
                  perfect sight the back of the eye is very close to a perfect sphere.
                  The eye is filled with a jelly-like fluid called the vitreous humor. This
                  fluid keeps the eyeball in shape, and the fact that it is clear means that
                  light can pass through it from front to back.





                   The front of the eye is covered with a clear protective film called the
                   conjunctiva. Behind this is another protective film called the cornea.
                   Just behind this is the iris, a muscular ring that allows the amount of light
                   entering the eye to be regulated. In bright light the iris closes. The iris is
                   tinted. There appears to be no reason why this is so, but this is what
                   gives the eye its ‘colour’.
                   Between the cornea and the iris is a watery fluid called the aqueous
                   humor. This keeps the front of the eye in shape.
                    Behind the iris is the lens. A marvel of evolution, this organic structure
                    focuses light onto the back of the eyeball. The amazing thing about this
                    lens is that its shape can be altered to change the focal length. The
                    ciliary muscle, a small muscle surrounding the lens, squashes it and
                    allows the eye to focus on closer objects. When the muscle relaxes the
                    eye focuses at infinity.




Figure 1                                                                         The human eye

                   (Lens optics is discussed in a later chapter.)
                    As mentioned, the back of the eye is almost spherical. It is covered by a
                    large structure called the retina.

             The retina
                   The retina is a structure that senses light and colour, and sends this
                   information to the brain. It is between 200 and 250 microns thick and
                   comprises various layers.
                   The outermost layer is a pigment layer. This acts as the outer wall to the
                   retina and as a light stop.
                    Inside this are the receptor cells. There are two types of receptor cell:
                    one type is rod-shaped, while the other is fatter and cone-like. For this
                    reason they are commonly referred to as rods and cones. Light hitting
                    these cells starts a protein electro-chemical reaction in a material called
                    rhodopsin. This reaction quickly passes along the length of the cell’s
                    axon. The end of the axon is connected to the axon of a nerve cell,
                    called a bipolar cell, via a structure called a synapse. A synapse is not
                    actually a connection, but a small gap across which a protein electro-
                    chemical transfer takes place.




Figure 2                                                                     The human retina


                  Once the transfer has taken place another protein electro-chemical
                  reaction travels the length of the bipolar cell's axon to its body, and then
                  out along another axon to another synapse.
                  This second synapse connects to another nerve cell called the ganglion
                  cell. The signal passes down the ganglion’s axon using the same
                  reaction mechanism. The ganglion cells’ axons pass across the inner
                  surface of the eyeball and out through the nerve bundle, out of the eye.
                 The bundle passes back into the head and directly to the brain.
                 Light therefore has to pass through the whole thickness of the retina
                 before hitting the rods and cones.





              Rods
                     Rod receptor cells have a broad sensitivity range. They are most
                     sensitive to green, which lies near the centre of the visible part of the
                     electro-magnetic spectrum.




Figure 3                                                                    Rod and cone cells


                     Rod cells measure the brightness of the image, or, put another way, the
                     black-and-white part of the image.

              Cones
                     Cone receptor cells have a narrow sensitivity range. There are three
                     types of cone cell: the first is sensitive to light of about 440 nm
                     wavelength (blue), the second to about 530 nm (green), and the third to
                     about 560 nm (red).
                    Cone cells are therefore responsible for seeing colour. Every colour is a
                    mix of blue, green and red.

              Receptor density across the retina
                    There are about 120,000,000 rod cells in the retina and about 7,000,000
                    cone cells.
                    About 64% of the cone cells are sensitive to red light, about 32% to
                    green light and just 2% to blue light.
                     Most of the retina is the same, with an even concentration of rod and
                     cone cells. However there are two areas of the retina where this is not
                     the case: the fovea and the blind spot.



            The fovea
                 The lens focuses the centre of the image to a point on the retina called
                 the fovea. This area of the retina has a very dense concentration of
                 receptor cells. Furthermore, all these cells are cones. There are no rod
                 cells in the fovea. The fovea allows the eye to study the centre of an
                 image or scene in great colour detail.

            The blind spot
                  Because all the ganglion cell axons are on the inside of the retina they
                  need to pass out of the eyeball at some point. It stands to reason
                  therefore that wherever this point is there can be no receptor cells at all.
                  This area is therefore known as the blind spot.

       Interesting facts about the eye

            The eye is far from perfect
                 Although the eye is a marvel of biological engineering it has a number of
                 design flaws. The cornea, lens and vitreous humor are not absolutely
                 clear. They all reduce the amount of light hitting the retina and colour it
                 slightly.

            The eye’s image is bent out of shape and upside down
                 The image falling on the retina is reasonably well proportioned near to
                 the fovea. However the nearer you get to the outer edge the more
                 compressed and distorted the image becomes.
                 The lens also focuses the image upside down and back to front on the
                 retina.

            The brain corrects for imperfections
                 The brain corrects the image to remove colour casting from the cornea,
                 lens and vitreous humor. It also corrects edge distortion giving us the
                 impression of a flat correctly proportioned image.

            Having two eyes allows us to measure distance
                  When focusing on a close object, not only do the lenses squash to
                  focus, but the eyeballs also turn towards each other. The brain can use
                  this to measure how far away an object is.
                  You can see this happening by getting a friend to hold their finger up at
                  arm’s length, and focus on it. Then ask them to keep focusing on the
                  finger while slowly moving it closer to their face.

             The eye gets bored easily
                  The eye is very good at seeing change. If you stare at something long
                  enough it will disappear; the brain eventually cancels the image out
                  altogether. Thus the eye works best if it continually moves, scanning
                  across edges and shapes, continually updating what the brain receives.




             Images can ‘burn in’ to the retina
                   Linked to the last interesting fact, if you stare at something long enough
                   it will appear to disappear but the image is ‘burnt in’. If you then look at
                   something else the original image will appear in negative for a while.

             The eye remembers
                   The protein electro-chemical reactions in the eye’s cells that sense light
                   and pass the signals back to the brain take a certain time to react and
                   stop.
                   A flash is therefore ‘stretched’ so that the eye effectively sees it for
                   longer than it actually occurs. This effect is known as persistence of
                   vision. Film and television rely heavily on persistence of vision to turn
                   what is actually many still images flashing one after another, into what
                   appears to be a constantly changing image.
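                    As a rough illustration, the standard picture rates of film and television
                    can be turned into per-image display times; the point is simply that each
                    still is replaced after only tens of milliseconds. A minimal sketch:

```python
# Illustrative sketch: how long each still image is displayed at the
# standard picture rates. Each image is replaced after only tens of
# milliseconds, within the eye's persistence of vision.
rates_per_second = {
    "cinema film": 24,            # frames per second
    "625-line television": 50,    # interlaced fields per second
    "525-line television": 60,    # interlaced fields per second (nominal)
}

for system, rate in rates_per_second.items():
    print(f"{system}: {1000 / rate:.1f} ms per image")
```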

             The eye is good at seeing patterns
                    The eye can pick out patterns very well. This is a problem for television
                    and digital imagery because lines and pixels tend to stand out.
                    For instance, a digital photograph will appear worse than a conventional
                    photograph of exactly the same resolution, because the digital
                    photograph's pixels form a regular pattern while the conventional
                    photograph's grains are random.

             The eye is very sensitive to green
                    A third of the cone cells are sensitive to green, and the rod cells,
                    although intended for seeing the overall brightness of an image, are
                    most sensitive to green.
                    This makes the eye sensitive to green and more sensitive to changes in
                    the green part of the spectrum.
                    This has an important impact on the design of colour television.
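                    One concrete example of that impact is the weighting used when
                    television derives a single brightness (luminance) signal from Red,
                    Green and Blue. The coefficients below are the standard ones used in
                    the 525/625-line systems (later formalised in Rec. 601); the function
                    name is just for illustration:

```python
# Luminance weighting sketch: green gets by far the largest
# coefficient because the eye is most sensitive to green; blue,
# which the eye sees least well, gets the smallest.
KR, KG, KB = 0.299, 0.587, 0.114   # standard weights, sum to 1.0

def luminance(r, g, b):
    """Perceived brightness of an RGB triple (components 0..1)."""
    return KR * r + KG * g + KB * b

print(luminance(1.0, 1.0, 1.0))   # white: the weights sum to 1.0
print(luminance(0.0, 1.0, 0.0))   # pure green contributes 0.587
print(luminance(0.0, 0.0, 1.0))   # pure blue contributes only 0.114
```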

             The fovea is not good for dark vision
                    Rod receptor cells are more sensitive than cone receptor cells. Thus in
                    dark conditions things appear to turn black and white.
                    In low light it is best not to look at something directly, but just to the
                    side of or above it. This will put the object on a part of the retina where
                    there are plenty of rod cells, and you will be able to see it. (Incidentally,
                    it may not be a good idea to look just below an object in dark conditions,
                    as you may put it into the blind spot, where you can’t see it at all.)







The concept of primary colours
                   Any colour can be described as a combination of 3 primary colours.
                   Children are often taught that the three primary colours are Red, Yellow
                   and Blue. This is a perfectly reasonable assumption when learning
                   painting and art. Mixing these colours allows children to make almost
                   any colour they want.




Figure 4                                                            Children’s primary colours


           Subtractive colour mixing
                   This concept is called subtractive colour mixing, because the overall
                   colour gets darker the more paint you add to the mix.




Figure 5                                                           Subtractive primary colours

                    In reality Red, Yellow and Blue are not the correct primary colours for
                    subtractive colour mixing. The reason is that mixing Red, Yellow and
                    Blue does not give Black; it makes Brown. True subtractive primaries
                    should remove all colour and brightness when mixed together in equal
                    proportions, i.e. give Black.



                   The true subtractive primaries are Magenta, Yellow and Cyan. While
                   these three colours may appear close to Red, Yellow and Blue, they are
                   sufficiently different to give Black when mixed together in equal
                   proportions.

           Additive colour mixing
                   The opposite of subtractive colour mixing is additive colour mixing.
                   Additive primary colours are relevant to light. If three additive primary
                   coloured lights are mixed in equal proportions the result is White light.
                   The three additive primary colours are Red, Blue and Green.




Figure 6                                                                Additive primary colours




Secondary and tertiary colours
                   Each set of primary colours has a set of secondary colours. If you mix
                   any two of the primary colours in equal proportions you will get a
                   secondary colour.
                   In fact the three subtractive primary colours are the secondary colours of
                   the additive primary colours, and vice versa.
                   A tertiary colour is found by mixing equal proportions of all three primary
                   colours. There are only two tertiary colours, White and Black.
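                   The mixing rules above can be sketched in a few lines of Python (an
                   illustrative sketch, not broadcast code). Adding any two additive
                   primaries at full strength gives one of the subtractive primaries, and
                   adding all three gives White:

```python
# Additive colour mixing with RGB triples (0-255 per channel).
# Light sources add channel by channel, clipping at full scale.

def mix_additive(*colours):
    """Mix light sources by summing each channel, clipped to 255."""
    return tuple(min(255, sum(c[i] for c in colours)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix_additive(RED, GREEN))        # (255, 255, 0)   -> Yellow
print(mix_additive(RED, BLUE))         # (255, 0, 255)   -> Magenta
print(mix_additive(GREEN, BLUE))       # (0, 255, 255)   -> Cyan
print(mix_additive(RED, GREEN, BLUE))  # (255, 255, 255) -> White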




Sony Training Services                                                                       36
Broadcast Fundamentals



Hue saturation and luminosity
                           Colour can be described as a 3 dimensional shape. At the top is white.
                           Half way down is a circle of all the colours at their full intensity. You can
                           see all six primaries, both additive and subtractive around the edge of
                           the circle. At the bottom is Black.
                                     W h ite                                           W h ite




                      B lu e                       Red              Y e llo w                          C yan
                                    M a g e n ta                                       G re e n




                                     B la c k                                          B la c k

Figure 7                                                                                      Colour 3D shape (front & back)

                           This is a 3 dimensional space, therefore it is possible to pick any point at
                           have any colour you want. The line running down the centre runs from
                           White to Black through Grey.

                                  G re e n                                                G re e n



                                                     Y e llo w   Y e llo w
           C yan                                                                                                   C yan




                                 W h ite                                                  B la c k




             B lu e                                                 Red
                                                   Red                                                         B lu e


                                M a g e n ta                                            M a g e n ta

Figure 8                                                                                    Colour 3D shape (top & bottom)




37                                                                              Sony Broadcast & Professional Europe
Part 3 - Colour

                    If you look at the shape from the top you will get a circle with all the
                    colours around the edge and White in the middle. Look at the shape
                    from the bottom and you will see Black in the middle.




 Figure 9                                                             Hue saturation & luminosity


            Hue
                    Hue is the colour. You can change the hue by rotating around the centre
                    of the circle.

            Saturation
                    Saturation can be called colour intensity. It is a measure of how far from
                    the centre you are. Zero saturation is White, Grey or Black. Full
                    saturation is somewhere on the edge of the circle.

            Luminosity
                    Luminosity is how far up or down the shape you are. If you take any
                    colour and force its luminosity up it will tend towards White, and visa-
                    versa down to Black.


 The CIE space
                    The common method for describing colour is the CIE colour space. This
                    2 dimensional representation is used for additive colour fields to define
                    the ability of a video system to capture and display colour. As you can



 Sony Training Services                                                                        38
Figure 10                                                           Hue, saturation & luminosity
Broadcast Fundamentals

                 see the NTSC and PSL gamuts are well within the total range of natural
                 colours.
                 Each corner of the gamut triangles for NTSC and PAL specify the
                 primary colours. They are different for each standard.
                 Television cameras and displays have a long way to go before they are
                 able to capture and display every colour available in nature.




Figure 11                                                               CIE colour space




39                                               Sony Broadcast & Professional Europe
Part 4 – The basic television signal


Part 4                                    The basic television signal
The problem of getting a picture from A to B
                    A picture is a 2 dimensional object. It has height and width. A moving
                    picture adds a third dimension, time, to the other two.
                    If we are to send a moving image from one place to another we need to
                    change the image content into a serial signal.

            Film frames
                    Film conveys a moving image as a series of frames. These are like 2
                    dimensional chunks of data appearing at once, one after another, so
                    rapidly that it appears to be smooth.

            The raster scan
                    A raster scan scans an image and turns it into a serial stream of data.
                    By combining film’s method of conveying frames with the raster scan
                    method we could convey a moving image as a serial signal.

                The basic raster frame
                    The normal raster scan, and the method used by all broadcast television
                    standards, scans each line from left to right, and the each successive
                    line from top to bottom. This is called a frame.
                    The definition of the signal itself is simple. The brighter the image is at
                    that point on the line the higher the signal’s voltage.




Figure 12                                                                         The raster scan




Sony Training Services                                                                        40
Broadcast Fundamentals

            Lines and frame rate
                 We need to decide on a frame rate and the number of lines. We want
                 the highest quality possible so it would be better to have as many lines
                 as possible, and as many frames per second as possible. We would also
                 want to ensure that each line had the highest quality (bandwidth)
                 possible.
                 However the overall bandwidth of the signal is strictly limited by
                 broadcast standards authorities, so we have to find a reasonable
                 compromise between the number of frames per second, the number of
                 lines, and the quality of each line.
                 An increase in either the number of lines, the number of frames per
                 second, or the bandwidth of each line, will increase the signal’s overall
                 bandwidth.

       The blanking intervals

            Horizontal blanking
                 Each raster line is normally referred to as the active line. This is where
                 the line is traced out on the image. There is a short interval between one
                 active line and the next. The scanning system uses this time to fly back
                 to the beginning of the next line. The signal is ‘cut’ for this period of time
                 to prevent the flyback appearing on the television set. In the raster
                 scanning system this interval is referred to as the horizontal flyback.
                 The interval is also called the line blanking interval or horizontal
                 blanking interval.
                 Thus every video line consists of the active line period and the horizontal
                 blanking interval which is used as a flyback period.

            Vertical blanking
                 There is a longer interval between one entire scan and the next. During
                 this time the scanning system moves back from the bottom right corner
                 to the top left corner. Just as with the horizontal flyback interval the
                 signal is ‘cut’, to prevent it appearing on the television screen. This is
                 referred to as vertical flyback.
                 This is normally also called the vertical blanking interval.


Interlaced raster scanning
                 If a frame is raster scanned and the frame rate is the same as that of film
                 i.e. 24 frames per second, there is a severe amount of picture flicker.
                 This is because every point in the image will have faded before the
                 scanning mechanism can go back around to refresh it.




41                                                   Sony Broadcast & Professional Europe
Part 4 – The basic television signal




                                                                                        H o r iz o n ta l fly b a c k
                                                                                         (o n ly o n e s h o w n )

      E v e n lin e s
        (fie ld 1 )                                                                               O d d lin e s
                                                                                                   (fie ld 2 )




                                                                                        V e r tic a l fly b a c k
                                                                                        fr o m fie ld 2 to 1




                                                        V e r tic a l fly b a c k
                                                        fr o m fie ld 1 to 2
Figure 13                                                                           The interlaced raster scan


                        Televisions could be designed to reduce flicker by increasing the
                        persistence of the screen. However this would mean any rapid
                        movement on the screen would be seen as blurring and streaking.
                        You could increase the frame rate but this would increase bandwidth.
                        The solution is to interlace the raster scan. Interlaced scans scan the
                        odd numbered lines first, from top to bottom. Then the raster scan starts
                        from the top again and scans the even lines from top to bottom.
                        This method of scanning reduces flicker by effectively writing an image
                        at twice the frame rate.
                        Each of the scans is called a field, and two interlaced field make up a
                        frame.


Half lines
                        Modern video standards also take into account that each line in the
                        raster scan is not exactly horizontal. In fact the raster scan is
                        progressing slowly from the top of the image to the bottom at a constant
                        rate. The left side of each line is actually slightly lower than the right
                        side.
                        Therefore video standards have an odd number of lines per frame. The
                        first field of each frame begins with a whole line and ends with a half
                        line. The second field begins with a half line and ends with a whole line.
                        This system gives a more rectangular raster scanned image.




Sony Training Services                                                                                                  42
43
                                                                                                                                                                                H o r iz o n t a l     A c tiv e
                                                                                                                                                                                 b la n k in g          v id e o




                                       Figure 14
                                                                                                                                                 H o r iz o n t a l
                                                                                                                                                 syncs
                                                                                                                                                                                                                                                                                                                                         Broadcast Fundamentals




                                                                              V e r t ic a l f ly b a c k                         H a lf lin e                                                                           V e r t ic a l b la n k in g
                                                                              ( V e r t ic a l b la n k in g )
                                                                                                                                         A c t iv e v id e o lin e


                                                                                                                                                                                                                                                                                                             A c tiv e v id e o




                                                                                                                                                                H o r iz o n t a l s y n c s                                                                                                          H o r iz o n t a l b la n k in g




                                                                                    H o r iz o n ta l fly b a c k
                                                                                    ( H o r iz o n t a l b la n k in g )

                                                                                                                                                                                                                                                                              V e r t ic a l b la n k in g
                                                                                                                                                                                                                          H o r iz o n ta l b la n k in g
                                                                                                                                                                               E q u a lis in g p u ls e s
                                                                                                             L in e s y n c
                                                                                                                                                                                                 B r o a d p u ls e s
                                                                                                                        V e r t ic a l                                                                                         E q u a lis in g p u ls e s   L in e s y n c
                                                                                                                       b la n k in g
                                                                                                                           lin e


                                                                                                                                                                                          V e r t ic a l b la n k in g




                                       Basic horizontal and vertical detail



Sony Broadcast & Professional Europe
Part 4 – The basic television signal


Synchronisation
        The basic principle
                   Synchronisation is the principle of making sure two pieces of equipment,
                   that both have some kind of regular clock or rhythm to run at the same
                   rate. The two pieces of equipment are said to be ‘locked’ together.
                   Synchronisation is often done with some form of synchronisation signal,
                   generally simply called a sync signal.

        How does television sync?
                   All television equipment contains some form of clock or oscillator. This
                   will have a natural frequency which is close to the correct frequency.
                   Somewhere in the television transmission station will be a master sync
                   pulse generator containing a precision master oscillator. Its frequency is
                   correctly set to within 1 cycle in several million.
                   All the equipment in the transmission station is locked to this master
                   sync pulse generator. This is easy because the equipment’s own clock is
                   running as about the same rate. The sync signal ‘pulls’ the equipment’s
                   own oscillator to exactly the correct frequency.
                   The transmission station will send out a television signal that contains
                   sync pulses. All equipment from the transmission station to the television
                   at home contains similar oscillators which are ‘pulled’ to exactly the
                   correct frequency by the sync pulses.

        Line, or horizontal, sync pulses
                   Line sync pulses are parts of the video signal that define the beginning
                   of each video line. They occur at a certain time during the horizontal
                   blanking interval.
                   Line sync pulses are short intervals of time where the video signal drops
                   below the voltage specified for black (the blanking level).
                   Line sync pulses have a particular shape, because they are bandwidth
                   limited. The beginning and end of the pulses are sloped. The beginning
                   of the video line is specified as the mid-point of the slope at the
                   beginning of the sync pulse.
                   These pulses are placed some time during the horizontal interval. Their
                   position relative to the beginning of the active line is set and known, so
                   once the position of the pulse is found the beginning of the active line is
                   known.

        Vertical sync pulses
                   The vertical blanking interval is more complex, and is relatively longer,
                   than the horizontal blanking interval. The time interval is the same as
                   many video lines. It contains a complex series of pulses that define the
                   beginning of each field and each frame.




Sony Training Services                                                                      44
Broadcast Fundamentals

            Blanked vertical lines
                 The vertical interval starts and finishes with a few blanked video lines.
                 These are simply video lines with their respective horizontal sync pulses,
                 but with the active line period blanked as well.

            Equalising pulses
                 The vertical interval contains a number of equalising pulses near to the
                 end of one field and the start of the next field. Equalising pulses are
                 shorter than line sync pulses and occur every half line.
                 The reason for this is so that there is a same pattern of equalising pulses
                 for every field, even though the transition between the first field and the
                 second is half way through one line.

            Broad pulses
                 Broad pulses are placed in between the two groups of equalisation
                 pulses. These are very wide pulses, in fact, so wide that only a small
                 portion of time is spent not in a broad pulse.

            Definition of the start of the field
                 The definition of the start of each field is the beginning of the first broad
                 pulse.


 The oscilloscope
                   An oscilloscope is an instrument that allows engineers to view video
                   signals, not as a picture, but as a constantly changing signal. It
                   shows the kind of signals show on page 43 as a bright trace across
                   the display.
                   All oscilloscopes have more than one input, so that various signals
                   can be compared to one another. They also often allow for complex
                   triggering so that complex or intermittent signals can be caught and
                   studied.
                   Most oscilloscopes use tubes, similar to monochrome televisions.
                   The most modern ones use digital flat screen colour technology.
                   Engineers use the oscilloscope to check the levels and timings of
                   video signals.




45                                                   Sony Broadcast & Professional Europe
Part 5 – The monochrome NTSC signal


Part 5                           The monochrome NTSC signal
The 405 line system
                   The first important monochrome video signal was the 405 line
                   monochrome system adopted by many countries around the world.
                   Although an important video standard in its time, the 405 line standard is
                   now obsolete. Furthermore it is different from any of the modern video
                   standards.
                   Therefore we will look at the 525 line monochrome standard as the first
                   important and relevant standard.


The 525 line monochrome system
                   The 525 line monochrome standard was proposed by the American
                   NTSC (National Television Standards Committee) and quickly became
                   popular.
                   This standard formed a strong basis for the existing 525 line colour
                   system used by many countries around the world, and so it seems
                   sensible to study it first.
                   The 525 line monochrome system has 525 lines per video frame, with
                   262.5 lines per field.


Frame rate and structure
                   The chosen frame for NTSC was 30 frames per second or 60 fields per
                   second. It is commonly thought that this was so that NTSC televisions
                   could be locked to the mains power. This is only half true. Mains power
                   alternating frequency is not accurate enough to provide a reliable
                   synchronisation signal for television receivers. Television equipment
                   does not use mains as a locking signal. However if television equipment
                   is not somehow linked to mains, the resulting beating and aliasing
                   frequencies can cause undesirable effects on the screen. Making the
                   frame rate the same as the mains power frequency at least makes these
                   undesirable effects stand still.
                   Field 1 starts on line 1. There are 6 equalisation pulses, then 6 broad
                   pulses, then 6 more equalisation pulses. Normal horizontal syncs starts
                   on line 10.
                   The first active video line is line 22, and the last is line 262. Half of line
                   263 is active.
                   Field 1 starts half way through line 263 with 6 equalisation pulses, 6
                   broad pulses and 6 more equalisation pulses. Normal horizontal syncs
                   start at the beginning of line 273.
                   The first active video line is line 285 and the last is line 525.




Sony Training Services                                                                          46
Broadcast Fundamentals

                    Field start displacement
                                   The trigger point for the start of a field is normally the first broad pulse.
                                   This is the point television receivers use to start the next field. However
                                   the ‘official’ start point of each field is the beginning of the first broad
                                   pulse.
                                   This gives a discrepancy between the technical start of each field and
                                   the line numbers. The first broad pulse is at the start of line 4, and half
                                   way through line 266.


Line rate and structure
                                   The line rate is simply the frame rate multiplied by 525, or 15.75kHz.
                                   Disregarding the vertical interval, all NTSC lines have the same basic
                                   structure.


Bandwidth considerations
                                   The video signal can have energy that can stretch to 10Mhz and
                                   beyond. However, because of the highly repetitive nature of video, with
                                   each video line similar to the one before and after, and each video field
                                   and frame similar to the one before and after, most of the energy is
                                   centred around harmonics of line, field and frame rate. This makes the
                                   bandwidth look like a series of spikes, with very little between them.
                        V id e o s ig n a l b a n d w id th
   A m p litu d e




                                                                                      H a r m o n ic s to 1 0 M H z o r h ig h e r

            DC                                                     F re q u e n c y



                                 1 lin e              1 fra m e
                                15750H z                30H z




Figure 15                                                                                             Video signal bandwidth

                                   The television signal is modulated onto a radio frequency carrier before
                                   being sent to the transmitter mast and out to the home. The harmonics
                                   may possibly spread out either side of the carrier to 10MHz or more,


47                                                                        Sony Broadcast & Professional Europe
Part 5 – The monochrome NTSC signal

                                                   giving a possible total bandwidth of over 20MHz. These spreads either
                                                   side of the carrier are called the upper and lower sidebands.
                                                   The regulatory authorities assigned a 6MHz bandwidth to each television
                                                   channel. The designers of the original television standard therefore had
                                                   to devise a scheme for restricting the video signal down to a 6MHz limit.
                                           L o w e r s id e b a n d                                                                                       U p p e r s id e b a n d
A m p litu d e




                                                                                                       V H F c a r r ie r                                      F re q u e n c y
                 DC                                                                                     fre q u e n c y



                           L o w e r s id e b a n d
                           filte r e d le a v in g                                                                          U p p e r s id e b a n d
                           v e s tig a l s id e b a n d                                                                     filt e r e d to 4 .2 M H z



                                          V H F c a r r ie r
                                          fre q u e n c y                                                                                                                                    A u d io c a r r ie r




      - 1 .2 5                        0                                                                                                                                           4 .2                         4 .5   4 .7 5
                                                                                     6 M H z t o ta l c h a n n e l b a n d w id th




          T e le v is io n c h a n n e ls
          o n th e r a d io s p e c tr u m




                       A u d io            V id e o                                                     A u d io            V id e o                                                     A u d io           V id e
                       c a r r ie r        c a r r ie r                                                 c a r r ie r        c a r r ie r                                                 c a r r ie r       c a rri

                                                          C h a n n e l b a n d w id th                                                    C h a n n e l b a n d w id th
                                                                    (6 M H z )                                                                       (6 M H z )

Figure 16                                                                                                                                                                  Video channel bandwidth

                                                   Filters are used to cut off as much of the lower sideband as possible. It
                                                   is not possible to cut everything off, so the filter restrains the lower
                                                   sidebands to just 1.25MHz. What is left is commonly called the vestigal
                                                   sideband.





                 The upper sideband is filtered and restricted to about 4.2MHz. Filters
                 cannot create a sharp clean cut-off at 4.2MHz, but rather a smooth roll-
                 off that disappears to zero just below 4.5MHz.
                 A simple audio carrier is placed at 4.5MHz, clear of the video signal. Its
                 sidebands do not extend very far and there is nothing left at 4.75MHz.
                 Thus the total bandwidth including the video and audio signals is
                 constrained to 6MHz.

       Quality considerations
                 The low frequency detail of the video signal is centred around the carrier
                 frequency and the low order sidebands. Fine detail is centred around the
                 high frequency sidebands above 4MHz. It is worth remembering that
                 random noise will also tend to be centred around the high frequency
                 sidebands.
                 Most home television receivers cannot show much detail above 4MHz. It
                 is therefore pointless trying to transmit this level of detail to the home.

       Radio spectrum and television channels
                  The regulatory authorities specified a series of analogue television
                  channels 6MHz apart. Each video carrier is 1.25MHz from the bottom of
                  the channel, and each audio carrier is 5.75MHz from the bottom of the
                  channel (4.5MHz above the video carrier).
                 Television companies have the responsibility to ensure that each
                 channel they transmit has carriers at exactly the correct allocated
                 frequency, and that the bandwidth is properly filtered to constrain it to
                 within the 6MHz limit.
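This channel layout is simple enough to sketch in code. The helper name below is ours; the channel edges used in the example are the real US VHF channel 2 allocation (54–60MHz).

```python
def ntsc_channel(bottom_mhz):
    """Carrier positions within a 6MHz NTSC channel, per the layout above."""
    return {
        "video_carrier": bottom_mhz + 1.25,   # 1.25MHz above the channel bottom
        "audio_carrier": bottom_mhz + 5.75,   # 4.5MHz above the video carrier
        "top": bottom_mhz + 6.0,              # 6MHz total channel bandwidth
    }

ch2 = ntsc_channel(54.0)   # US VHF channel 2 occupies 54-60MHz
```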




49                                                  Sony Broadcast & Professional Europe


Part 6                                            Colour and television
Using additive primary colours
                   The additive primary colour principle has particular relevance to
                   television because it uses light to detect the image in the camera and to
                   display it in the television set at the other end.
                   The colour television camera splits the image into three separate
                   images, one for the Red part of the image, one for Green and one for
                   Blue.
                   The colour television set has three sets of phosphor dots, one type that
                   shine Red, one type that shine Green and the last type that shine Blue.
                   The television set also has three electron guns, each one targeting one
                   set of phosphor dots.

        Original plans
                   The original idea was to use Red, Green and Blue throughout the whole
                   transmission system, from camera to television set.
                   Both RCA and CBS (amongst others) developed systems that
                   sequentially send Red, Green and Blue parts of the original image over
                   a normal monochrome transmission system.
                   RCA chose to send ‘dots’ of colour, Red Green Blue in a rotating
                   sequence. CBS chose to filter successive video fields through a rotating
                   Red Green Blue filter wheel.
                   Neither system worked too well. What was needed was a system that
                   added some colour element to the existing monochrome signal, allowing
                   those with monochrome television sets to watch television with the same
                   quality as before, but allowing those with new colour televisions to see
                   that same basic quality of image in colour.


Ensuring compatibility
                   The original ideas for a colour television system were not popular
                   because they were not compatible with the existing monochrome
                   standard.

        A compatible monochrome signal
                   The colour video camera produces three separate Red, Green and Blue
                   images. In theory simply mixing these three in equal proportions should
                   give a perfect White.
                   The human eye is more sensitive to Green than to Red or Blue.
                   Therefore any miscalculation in generating the Green primary colour
                   in the television set would be more obvious than for Red or Blue.
                   The proportions of Red, Green and Blue were therefore adjusted to
                   match human eye characteristics and standard luminosity curves, and to
                   make up for the non-linear light/voltage characteristics of a




                 standard video camera and voltage/light characteristics of the standard
                 television set. The equation for the luminance signal (Y) is therefore :-
                          Y = 0.299R + 0.587G + 0.114B
                 This provided a properly weighted luminance signal that could be used
                 to generate a standard monochrome signal, compatible with existing
                 monochrome television sets.

       Maintaining compatible channel bandwidth
                 The regulatory authorities are constantly being pressured to provide
                 space on the limited radio spectrum for all kinds of radio services,
                 including commercial radio and television stations, airline radio
                 communications, ambulance, police and other emergency services,
                 radio control model enthusiasts, citizen band radio and HAM radio.
                 They were therefore not prepared to allocate more of the precious radio
                 bandwidth to television companies wanting to switch from monochrome
                 to colour.
                 Designers had therefore to somehow fit the colour television signal into
                 the existing 6MHz allocated to them for monochrome television.


Adding colour
       Colour difference signals
                   There are three theoretical colour difference signals, each one being the
                   difference between a primary colour and the Y signal. The colour
                   difference signals are therefore (R-Y), (G-Y) and (B-Y).
                   It is possible to generate any hue, saturation or luminance using any
                 three of the four signals, Y, (R-Y), (G-Y) and (B-Y). It is also possible to
                 generate the Y signal or any of the three primary colours from any three
                 of these four signals.
                 The Y signal was an essential requirement of any compatible colour
                 system. As already mentioned this is the same as a standard
                 monochrome signal.
                 A decision had to be made as to which two of the three available colour
                 difference signals would be used.
                   The Green signal is a much higher proportion of Y than either Blue or
                   Red. Therefore any miscalculation in Green will not be as obvious as it
                   would be for either Red or Blue.
                 It was therefore decided to use the Red and Blue colour difference
                 signals (R-Y) and (B-Y).

       Generating the (R-Y) and (B-Y) colour difference signals
                 Generating the colour difference signals is a simple piece of
                 mathematics. The relationship between the Y signal and the three
                 primary colour signals has already been established. Thus the (R-Y)
                 colour difference signal is simply :-
                                R-Y     = R - (0.299R + 0.587G + 0.114B)



                                          = R-0.299R – 0.587G – 0.114B
                                          = 0.701R – 0.587G – 0.114B
                    The (B-Y) signal is derived in the same way.
                                  B-Y     = B - (0.299R + 0.587G + 0.114B)
                                          = –0.299R – 0.587G + B – 0.114B
                                          = –0.299R – 0.587G + 0.886B

        Component colour video signals
                   Component colour video signals can either be in the original R, G, B
                   form, or more commonly, are defined as the Y, (R-Y) and (B-Y) signals.
                   Their relationship to the original primary colour signals is as previously
                   mentioned, i.e. :-
                                  Y = 0.299R + 0.587G + 0.114B
                                  (R-Y) = 0.701R – 0.587G – 0.114B
                                  (B-Y) = –0.299R – 0.587G + 0.886B
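These relationships reduce to a few lines of code. The sketch below is ours (the function name is illustrative); it simply applies the Y equation and subtracts.

```python
def ntsc_components(r, g, b):
    """Y, (R-Y) and (B-Y) component signals from the R, G, B primaries."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y

# Full-level Red alone: Y = 0.299, (R-Y) = 0.701, (B-Y) = -0.299,
# matching the expanded coefficients above.
y, r_y, b_y = ntsc_components(1.0, 0.0, 0.0)
```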






         Using component video
                   The advantage of component video is that the three signals are kept
                   apart. This ensures that the quality is kept as high as possible.
                    Analogue component video equipment has completely separate
                    electronics for the three signals. Connections between different
                    pieces of equipment are always made using three separate cables and
                    connectors, or using one cable with three cable cores.
                   Having three separate sets of electronics, and three separate
                   connections is obviously more expensive, compared to the needs of
                   monochrome television, therefore analogue component video is
                   something normally retained for broadcast and professional use.

              Component connections
                   Analogue component is normally connected between different
                   pieces of equipment using three 75 ohm coaxial cables and three
                   BNC connectors at each end. The cables should be about the same
                   length although this only becomes a problem where the difference
                   between the cables is greater than several metres.
                    BNC cables are often cut to the same length, tied together as a
                    triple, and made up with colour coded BNC connectors at both ends.
                   Conventionally red, green and blue colour coded connectors are
                    used. If R,G,B component video is being used the connectors are
                    used as is. If a sync signal is provided it is conventionally carried
                    on the green signal, hence the phrase “Sync on Green”.
                   With Y,(R-Y),(B-Y) component video the red colour coded
                   connectors are conventionally used for the (R-Y) connection, the
                   blue connectors for the (B-Y) connection, and the green ones for
                   the Y connection.








Figure 17                                                          Basic component video signal




Combining R-Y & B-Y
                    Colour television requires that there be just one colour signal. This must
                    therefore be a combination of the R-Y and B-Y contributions. The
                    designers of the first popular colour television standard decided to
                    combine the two colour difference signals onto a special carrier called a
                     subcarrier, because it is carried within the main RF carrier used to
                     transmit the video signal.

            Quadrature amplitude modulation
                    The designers devised an ingenious way of modulating the two colour
                    signals onto one carrier by using quadrature amplitude modulation.
                     Amplitude modulation is simple to achieve and to understand. The
                     amplitude of the carrier simply follows the level of the signal being
                     modulated. What results is a steady frequency sine wave with varying
                     amplitude.



                  [Figure: a subcarrier sine wave and a quadrature carrier delayed by 90
                  degrees, each shown alongside its rotating vector representation.]
Figure 18                                                                   Quadrature vector representation

                          This sine wave can be thought of as a rotating vector. The vector is
                          rotating in an anticlockwise direction and its length defines the amplitude
                          of the signal.
                          It becomes easy to see how two signals can be modulated onto one
                          subcarrier when you consider them as vectors. One of the signals can
                          be modulated on a carrier that is delayed by 1/4 cycle (90 degrees).
                          When the two modulated signals are combined they will not interfere
                          with each other because they are 90 degrees apart.
                          The subcarrier is sent with the video signal. This makes decoding the
                          two colour difference signals easy. You simply look at the amplitude of
                          the signal in phase with the subcarrier, and the amplitude of the signal
                          90 degrees out of phase.
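The modulate/demodulate cycle can be sketched numerically. This is a minimal illustration, not a broadcast-accurate model: the two colour levels are held constant over the window, the sample rate is arbitrary, and all names are ours.

```python
import math

CYCLES = 100    # whole subcarrier cycles in the analysis window
SPC = 64        # samples per subcarrier cycle
N = CYCLES * SPC

def qam_encode(u, v):
    """Modulate levels u and v onto two carriers 90 degrees apart."""
    return [u * math.cos(2 * math.pi * n / SPC) +
            v * math.sin(2 * math.pi * n / SPC) for n in range(N)]

def qam_decode(signal):
    """Recover each level by multiplying by the matching carrier phase
    and averaging over whole cycles (product detection)."""
    u = 2 * sum(s * math.cos(2 * math.pi * n / SPC)
                for n, s in enumerate(signal)) / N
    v = 2 * sum(s * math.sin(2 * math.pi * n / SPC)
                for n, s in enumerate(signal)) / N
    return u, v

u, v = qam_decode(qam_encode(0.3, -0.5))   # recovers 0.3 and -0.5 (to float precision)
```

Because the two carriers are orthogonal over whole cycles, each product detector rejects the other axis completely.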


Video signal spectra
            The monochrome signal
                          As explained earlier, the monochrome video signal is highly repetitive,
                          and the signal’s spectrum is not a smooth spread of energy from DC to
                          high frequency. There are very definite energy peaks at harmonics of
                          line rate, field rate and frame rate. There is very little energy between
                          these peaks.
                          When the monochrome signal is modulated onto the radio carrier,
                          sidebands would normally extend above and below the carrier with
                          energy peaks at harmonics of line, field and frame rate. However the
                          monochrome signal is filtered, and the lower sidebands are cut. The
                          upper sidebands are filtered to about 4.2MHz.





            The colour signal
                         The basic colour signal has the same bandwidth as the monochrome
                         signal. When modulated onto the subcarrier it has the same basic
                         bandwidth extending either side of the subcarrier frequency, and its
                         energy is centred around harmonics of line rate, field rate and frame
                         rate.
                         If this signal is added to the monochrome signal it would swamp it.
                         The upper sidebands would also extend way beyond the total 6MHz
                         channel bandwidth allowed by the regulatory authorities. The colour
                         signal is therefore attenuated so that its total bandwidth is very much
                         smaller. This prevents it from interfering with the monochrome signal,
                         and constrains it to within the total channel bandwidth.


Combining monochrome and colour
                  [Figure: spectrum of the combined signal. The Y signal has energy peaks
                  at harmonics of line rate (15,734.25Hz apart), each with fine structure at
                  frame rate (30Hz). The colour subcarrier and its sidebands sit near the
                  top of the band, just below the audio carrier. A magnified view shows the
                  colour sidebands interleaved between the Y signal harmonics.]
Figure 19                                                                                         Mixing colour and monochrome together




                          The subcarrier is not only a useful way of combining the two colour
                          difference signals into one, using quadrature modulation; it is also a
                          neat way of combining the colour signal with the monochrome signal.
                          Both the monochrome and colour signals have spectra with energy
                          centred around the harmonics of line rate, with gaps between each
                          harmonic. If the frequency of the subcarrier is carefully chosen the
                          colour harmonics can be made to sit exactly between the monochrome
                          harmonics.
                          The subcarrier is also chosen to be as high as possible, but ensuring
                          that the whole colour signal is within the 4.5MHz overall bandwidth of the
                          complete video signal.
                          Placing the subcarrier as high as possible also ensures that, if there is
                          any interference between the monochrome and colour parts of the signal
                          it will only affect the high frequency fine detail of the picture.
                          The final video signal, with Y, (R-Y) and (B-Y) mixed together is called
                          composite video.
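The interleaving trick can be checked with a line of arithmetic: choosing the subcarrier as an odd multiple of half the line rate places it, and every one of its line-rate sidebands, exactly midway between harmonics of line rate. The sketch below uses the 15,750Hz monochrome line rate purely for illustration.

```python
LINE_RATE = 15_750.0              # line rate used for illustration (Hz)
SUBCARRIER = 227.5 * LINE_RATE    # an odd multiple of half line rate

# The subcarrier's offset from the nearest lower line-rate harmonic
# is exactly half a line rate, i.e. midway between harmonics.
offset = SUBCARRIER % LINE_RATE
```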




 Using composite video
                               Composite video is a very popular method of connecting analogue
                               video between pieces of video equipment and only requires a single
                               cable.
                               However, although analogue component video requires three
                               cables to connect pieces of video equipment together it has higher
                               quality compared to analogue composite video.
             P in a s s ig n m e n t
                      P lu g                       S ocket

                                        C o re
                                                                       C o n n e c tio n           F u n c tio n   I/ P o r
                                                                                                                    O /P
                                                                            C o re              S ig n a l            I& 0
                                                                           S h ie ld            G ro u n d              -
                                       S h ie ld




                P a n e l c o n n e c to r                                        C a b le c o n n e c to r




                    ‘T ’ p ie c e                                                      Te rm in a to r
                                                    P lu g


                 P lu g

                                                             S ocket





Part 7                                                      Colour NTSC television
Similarity to monochrome
                                 The colour NTSC television signal is based on the monochrome NTSC
                                 signal. It has exactly the same number of lines per frame and per field.
                                 The active video region is the same, as is the structure of the vertical
                                 blanking region, the vertical sync pulses, the horizontal blanking region
                                 and the horizontal sync pulses.


Choice of subcarrier frequency
                                 The subcarrier frequency was originally chosen to sit midway between
                                 the 227th and the 228th harmonics of line rate. This would make it
                                 3,583,125Hz.
                  [Figure: colour NTSC spectrum, with energy peaks at harmonics of line
                  rate (15,734.25Hz) and frame rate (30Hz). The wide I signal sidebands
                  and narrower Q signal sidebands are centred on the subcarrier, towards
                  the top of the Y signal band, below the audio carrier.]
Figure 20                                                                                            Colour NTSC bandwidth




                 However this frequency produces interference with the audio carrier
                 signal in the final television signal. The frame rate was therefore altered
                 slightly, from 30 frames per second to 29.97 frames per second. The
                 subcarrier was moved to 3,579,545Hz.


                 In hindsight this was probably not a good idea. It may have been better
                 to have moved the audio carrier slightly instead. The fact that NTSC is
                 not now an integer number of frames per second causes many problems
                 with standardisation and editing. (See page 207)
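The arithmetic behind the two frequencies can be sketched directly. The second calculation uses the standard NTSC relation that the 4.5MHz sound/vision carrier spacing is 286 times the colour line rate.

```python
MONO_LINE_RATE = 15_750.0                  # monochrome NTSC line rate (Hz)
fsc_original = 227.5 * MONO_LINE_RATE      # 227.5th harmonic: 3,583,125 Hz

# The fix: tie the line rate to the 4.5 MHz audio carrier spacing
# (4.5 MHz = 286 x line rate), which lowers the frame rate slightly.
COLOUR_LINE_RATE = 4_500_000 / 286         # ~15,734.27 Hz
fsc_colour = 227.5 * COLOUR_LINE_RATE      # ~3,579,545 Hz
frame_rate = COLOUR_LINE_RATE / 525        # ~29.97 frames per second
```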


Adding colour
                 As mentioned, the colour difference signals must be filtered before they
                 are modulated, to restrict their bandwidth compared to the monochrome
                 signal and prevent them from interfering with it too much.
                 What is more, with a subcarrier frequency of about 3.58MHz, it would
                 appear that the colour bandwidth capacity above this frequency is only
                 about 0.5MHz before it starts to interfere with the audio signal carrier
                 itself.
                 The solution is not to modulate the R-Y and B-Y signals directly but two
                 other signals called I and Q.
                 It was found that the human eye was more sensitive to colour around the
                 orange/cyan axis, compared to white, than to colours 90 degrees from
                 this around the magenta/green axis, compared to white.
                 Thus two signals were generated, one called the I (in-phase) signal
                 which was modulated with the subcarrier on the orange vector, and the
                 other called the Q (quadrature) signal which was modulated with a
                 subcarrier phase shifted by 90 degrees.

       The I signal
                 The I signal is found by the equation :-
                                I       = 0.877(R-Y) cos 33deg – 0.493(B-Y) sin 33deg
                                        = 0.74(R-Y) – 0.27(B-Y)
                 In terms of the original R, G and B signal, I can be described as :-
                                I       = 0.60R – 0.28G – 0.32B
                 The I signal is asymmetrically filtered to a bandwidth of +0.5MHz and
                 –1.5MHz. This allows relatively high definition for the I signal.

       The Q signal
                 The Q signal is found by the equation :-
                                Q       = 0.877(R-Y)sin 33deg + 0.493(B-Y)cos 33deg
                                        = 0.48(R-Y) + 0.41(B-Y)
                 In terms of the original R, G and B signal, Q can be described as :-
                                Q       = 0.21R – 0.52G + 0.31B



                       The Q signal is symmetrically filtered to a bandwidth of 0.5MHz. This
                       allows relatively low definition for the Q signal.
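Putting the two sets of equations together gives a short routine (a sketch; the function name is ours):

```python
def ntsc_iq(r, g, b):
    """I and Q signals from R, G, B, via Y and the colour differences."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.74 * (r - y) - 0.27 * (b - y)
    q = 0.48 * (r - y) + 0.41 * (b - y)
    return i, q

# Full-level Red alone gives I ~ 0.60 and Q ~ 0.21, matching the
# R, G, B forms of the equations above.
i, q = ntsc_iq(1.0, 0.0, 0.0)
```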

            Burst
                       Between 8 and 10 cycles of subcarrier are sent during the horizontal
                       blanking interval of every line, except during part of the vertical blanking
                       region. The television receiver uses this to lock its own internal oscillator
                       to the correct frequency and phase so that it can determine colour
                       correctly.
                       For NTSC colour video, burst is defined as having a phase of 180
                       degrees.

            The colour NTSC vector display
                       [Figure: the vector display. (B-Y) lies along the 0 degree axis and
                       (R-Y) along 90 degrees; the I component axis is at 33 degrees and the
                       Q component axis at 303 degrees. Burst sits at 180 degrees with an
                       amplitude of 40 IRE. The fully saturated colours fall at:
                       Red 103 degrees / 88 IRE, Magenta 61 degrees / 82 IRE,
                       Blue 347 degrees / 62 IRE, Cyan 283 degrees / 88 IRE,
                       Green 241 degrees / 82 IRE, Yellow 167 degrees / 62 IRE.]
Figure 21                                                                                           Colour NTSC vector space

                       A good way of showing the colour elements of an NTSC signal is to
                       show the signal on a vector display. This is also sometimes called a
                       polar display.
                       The vector display shows the amplitude and phase of the chroma
                       (colour) signal. The display is circular. The centre represents zero
                       amplitude. An increase in amplitude is represented by a move towards



                 the outer edge of the display. The position around the display represents
                 the phase of the signal with respect to the sub-carrier.
                 Thus no colour will be seen as a bright dot in the centre of the display.
                 Fully saturated colour will be seen as a bright dot near the edge of the
                 display. The position of the dot will indicate the colour, or hue.
                 Burst appears as a bright dot to the left of the centre of the display (at
                 180 degrees).
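The mapping from the colour difference axes to a point on the vector display is plain trigonometry (a sketch; the function name is ours):

```python
import math

def vector_display(b_y, r_y):
    """Amplitude and phase (degrees) of the chroma vector.
    (B-Y) lies along the 0 degree axis, (R-Y) along 90 degrees."""
    amplitude = math.hypot(b_y, r_y)
    phase = math.degrees(math.atan2(r_y, b_y)) % 360
    return amplitude, phase

# A vector pointing along -(B-Y) sits at 180 degrees, like burst.
amp, phase = vector_display(-1.0, 0.0)
```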



 The vectorscope
                   A vectorscope is a special kind of oscilloscope for measuring the
                   colour content of a composite video signal. It shows a vector display
                   of the signal and has a graticule with markings for the vertical and
                   horizontal axes, concentric circles for colour saturation and target
                   boxes for all the important colours and for burst.
                   Engineers use a vectorscope to check that composite video has the
                   correct colour phase and saturation and that there is no distortion
                   on the colour signal.




The gamut
                 Gamut is the limit that the colour content of a composite signal is
                 allowed to reach on a vector display.
                 The NTSC gamut is not circular but shaped more like a rugby ball, i.e. it
                 is tall and thin.
                 No part of the signal may extend beyond the gamut, as this would
                 produce illegal colours.




 The gamut detector
                   A Gamut detector is an instrument that is connected between any
                   piece of video equipment and the composite monitor it is playing
                   into. It checks that the video signal is within gamut at every point,
                   and shows any area of the picture that has illegal colours as an
                   enclosed warning area on the monitor.




Vertical interval structure
                   The vertical interval for colour NTSC is similar to that of monochrome NTSC.
                   The equalisation and broad pulses have the same basic construction.

        4 field structure
                   Colour NTSC differs from monochrome NTSC in that it contains a
                   subcarrier signal and subcarrier burst elements at the beginning of each
                   video line, except during the vertical interval.
                   The phase relationship between subcarrier and horizontal syncs does
                   not repeat every frame, but every 2 frames (4 fields). This relationship is
                   called the sc/h (subcarrier to horizontal sync) relationship. The
                   sequence of fields and frames is called the colour frame sequence.
                   This relationship becomes important when editing. If the final video
                   sequence is to maintain a steady sc/h relationship after it has been
                   edited, edits must be made to the correct field in the 4 field sequence.
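                   The editing rule can be illustrated with a short sketch. The field
                   numbering from zero and the function names here are invented for
                   the example:

```python
# Sketch: NTSC colour framing repeats every 4 fields (2 frames).
# Fields are numbered from 0 here, an arbitrary choice for illustration.

def colour_field(field_index):
    """Position of a field within the 4-field NTSC colour frame sequence."""
    return field_index % 4

def legal_edit(out_field, in_field):
    """An edit preserves a steady SC/H relationship only if both sides
    sit at the same position in the 4-field sequence."""
    return colour_field(out_field) == colour_field(in_field)

assert legal_edit(4, 8)      # both at position 0: sequence continues
assert not legal_edit(4, 6)  # 2 fields apart: colour framing breaks
```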




                                       Figure 22
                 Colour NTSC vertical interval (showing the 4 field sequence). Each field
                 boundary falls within the vertical blanking interval, which contains 6
                 equalisation pulses, 6 broad pulses and 6 further equalisation pulses;
                 fields 2 and 4 begin and end on half lines.

Part 8                                                           PAL television
What is PAL?
                   PAL stands for Phase Alternating Line. It describes the way the colour
                   video information is encoded and presented.

        The disadvantages of NTSC
                   The NTSC colour television system is based on the original 525 line per
                   frame monochrome signal. Colour was added to the monochrome signal
                   without increasing the overall channel bandwidth by modulating the
                   colour signal onto a subcarrier, and hiding it within the upper harmonics
                   of the monochrome signal. A burst of subcarrier just before every line
                   ensured that the television receiver could lock to the subcarrier phase
                   and decode the colour properly.
                   After NTSC was introduced and went into common use it was found that
                   the transmitters suffered from non-linearity problems, i.e. the phase of
                   the chroma shifted as the level of the chroma signal changed. The
                   phase was locked to burst, so was correct at burst level. Any colour at
                   levels significantly different from burst came out wrong. Hence NTSC
                   television receivers have a Hue control to alter the colours and try to
                   make them look better, and NTSC gained the dubious title “never the
                   same colour”.
                   Hue controls never worked completely. You could correct one colour and
                   others would go wrong.
                   At the same time, many felt it a pity that the change from monochrome
                   NTSC to colour had not also been used to increase the number of lines
                   per frame, and so improve the vertical resolution of the image.
                   Two solutions were introduced in later years to overcome the colour
                   problem, and increase the number of lines per frame. These were PAL
                   and SECAM. Both took a slightly different approach to the problem of
                   transmitter non-linearity, but both increased the vertical resolution of
                   NTSC in the same way.

        The PAL solution
                   The PAL system was introduced later than the NTSC system and was
                   able to correct the main disadvantages of the NTSC system. It aimed to
                   eradicate the colour phase shifting problem by alternating the phase of
                   the (R-Y) part of the colour signal on each successive video line. The
                   phase switches from positive to negative and back to positive with each
                   new line.
                   The PAL receiver was able to use this alternating phase to detect if the
                   overall colour phase had shifted, at any level of chroma signal, and pull it
                   back to where it should be, resulting in true colour on home television
                   screens.
                   PAL also uses 625 lines per frame rather than NTSC's 525, improving
                   the vertical resolution and giving a better picture.




The PAL signal
       The PAL video line
                 There are several forms of PAL video line, depending on the line's
                 position in the overall video frame. Most of these are the active video
                 lines.
                 Each active PAL video line consists of a horizontal sync pulse, burst and
                 the video information for that line. The rest of the line is blanked.
                 The total line duration is 64 µs. Blanking extends for 12 µs and the
                 active line for 52 µs.

            Start of the line
                 The start of the line is defined as the half transition point of the leading
                 edge of the horizontal sync pulse. This is 1.5 µs after the end of the last
                 active video region, and 10.5 µs before the beginning of the next active
                 video region.

       The frame
                 The PAL frame consists of 625 video lines. 576 of these are active,
                 leaving 49 lines for the vertical blanking interval.
                 The PAL frame is divided into 2 fields of 312.5 lines each. Field 1 has
                 287.5 active lines, field 2 has 288.5 active lines.
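                 The line and timing figures above can be cross-checked with simple
                 arithmetic; the short Python sketch below merely verifies that the
                 numbers quoted are self-consistent:

```python
# Checking that the PAL line and frame figures quoted above add up.

LINE_US, BLANKING_US, ACTIVE_US = 64.0, 12.0, 52.0
assert BLANKING_US + ACTIVE_US == LINE_US          # 12 + 52 = 64 us

LINES_PER_FRAME, ACTIVE_LINES = 625, 576
assert LINES_PER_FRAME - ACTIVE_LINES == 49        # vertical blanking lines

FIELD1_ACTIVE, FIELD2_ACTIVE = 287.5, 288.5
assert FIELD1_ACTIVE + FIELD2_ACTIVE == ACTIVE_LINES
assert LINES_PER_FRAME / 2 == 312.5                # lines per field
```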

       Vertical blanking parameters

            Broad and equalisation pulses
                 PAL has 5 equalisation pulses, followed by 5 broad pulses, followed by 5
                 more equalisation pulses. All broad pulses and equalisation pulses have
                 a ½ line duty cycle, i.e. they repeat every ½ line.

            Start of the field and frame
                 The start of field 1 is defined as the half transition point of the leading
                 edge of the first broad pulse, i.e. at the beginning of line 1. Field 2 starts
                 half way through line 313.
                 The start of the frame is the same as the start of field 1.

            Video blanking
                 PAL vertical video blanking extends from line 311 to line 336 between
                 fields 1 and 2, and from line 623.5 to line 23.5 between fields 2 and 1.


The PAL chroma signal
                 The PAL component (R-Y) and (B-Y) signals are attenuated to keep the
                 amplitude of the composite signal within limits, but they are not
                 rematrixed into two signals at a different phase from (R-Y) and (B-Y), as
                 they are in NTSC with the I and Q signals.
                 The (B-Y) signal is attenuated to a signal called U, and the (R-Y) signal
                 to a signal called V, according to the equations :-



                                                                    U = 0.492 (B-Y)
                                                                    V = 0.877 (R-Y)
                              The U signal is modulated onto the subcarrier and the V signal onto a
                              quadrature signal 90 degrees advanced from subcarrier.
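                              As an illustration only, the encoding described above can be
                              sketched in Python. The luma weights 0.299/0.587/0.114 are an
                              assumption (they are not given in this section), and the function
                              name and parameters are invented for the example:

```python
import math

# Sketch: PAL chroma encoding for a single sample.
# Luma weights 0.299/0.587/0.114 are assumed, not taken from this text.

def encode_pal_chroma(r, g, b, t, fsc=4433618.75, v_positive=True):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)                 # attenuated (B-Y)
    v = 0.877 * (r - y)                 # attenuated (R-Y)
    if not v_positive:                  # V is inverted on alternate lines
        v = -v
    w = 2 * math.pi * fsc * t
    # U on the subcarrier, V on the quadrature (90 degree advanced) carrier
    chroma = u * math.sin(w) + v * math.cos(w)
    return y, u, v, chroma

y, u, v, _ = encode_pal_chroma(1.0, 0.0, 0.0, t=0.0)   # pure red
print(round(y, 3), round(u, 3), round(v, 3))
```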


                              Figure 23 – PAL vector space. The U (B-Y) axis lies at 0 degrees
                              and the V axis at 90 degrees. The vector positions are :-

                               Vector       Phase     Amplitude      Vector       Phase     Amplitude
                               +Red         103°      95 IRE         -Red         257°      95 IRE
                               +Cyan        283°      95 IRE         -Cyan         77°      95 IRE
                               +Green       241°      89 IRE         -Green       120°      89 IRE
                               +Magenta      61°      89 IRE         -Magenta     300°      89 IRE
                               +Yellow      167°      67 IRE         -Yellow      193°      67 IRE
                               +Blue        347°      67 IRE         -Blue         13°      67 IRE
                               +Burst        45°      21.5 IRE       -Burst       315°      21.5 IRE


            V switching
                              The V component of the chroma signal is switched from positive to
                              negative, or negative to positive, on every line. Line 1 of field 1 is
                              positive. The receiver uses this to detect any difference in chroma
                              phase at any level of chroma.

            Burst phase and swinging burst
                              Burst is chosen to be at 135 degrees. The phase of burst also
                              switches every line, in step with the V component of the chroma
                              signal. This is called the swinging burst, and the television receiver
                              uses it to determine whether the V component of the chroma signal
                              is negative or positive on each video line.
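                              The benefit of V switching can be demonstrated numerically. The
                              sketch below uses complex numbers to represent chroma vectors
                              and shows that averaging a line with its V-inverted neighbour (as a
                              delay-line PAL receiver effectively does) removes a constant phase
                              error, at the cost of a slight loss of saturation. It is an illustration,
                              not a description of any particular receiver circuit:

```python
import cmath
import math

# Sketch: averaging two successive PAL lines cancels a constant phase error.
# The chroma vector is U + jV; V is inverted on alternate lines.

u, v = 0.3, 0.4
phase_error = math.radians(10)          # distortion applied to both lines

line1 = complex(u,  v) * cmath.exp(1j * phase_error)
line2 = complex(u, -v) * cmath.exp(1j * phase_error)

# The receiver re-inverts V on line 2 (complex conjugate), then averages.
recovered = (line1 + line2.conjugate()) / 2

# Hue (phase) is restored exactly; only a small desaturation remains.
assert abs(cmath.phase(recovered) - cmath.phase(complex(u, v))) < 1e-9
assert abs(abs(recovered) - abs(complex(u, v)) * math.cos(phase_error)) < 1e-9
```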



       PAL vector display
                 The PAL vector display has twice as many box targets as the NTSC
                 display. Half of these are similar to NTSC; the other half are negative
                 versions for those lines on which the V component and burst are
                 negative.
                 Thus, for instance, positive red is at 103 degrees and negative red at
                 257 degrees. Positive cyan is at 283 degrees while negative cyan is at
                 77 degrees.


Choice of subcarrier frequency
                 As mentioned earlier, video is highly repetitive and the bandwidth
                 spectra of the monochrome and colour signals have energy centred
                 around the harmonics of line, field and frame rate. In NTSC, the colour
                 subcarrier is chosen so that the colour harmonics sit in the energy gaps
                 of the Y signal.

            V switching component problem
                 However in PAL the V component switches every video line. This means
                 that the colour signal is similar, not every line, but every other line. The
                 energy of the PAL chroma signal is not based on harmonics of line rate
                 but of half line rate.
                 Placing the subcarrier exactly between two Y harmonics will mean half
                 the colour harmonics sit exactly on the Y harmonics, just what we are
                 trying to avoid!
                 So the original designers of PAL chose to place the subcarrier between
                 line harmonics 283 and 284, offset slightly from the centre between
                 these two Y harmonics. This is called a ¼ line offset and two colour
                 harmonics now sit between each pair of Y harmonics.

            Dot crawl problem
                 It is a happy chance that the NTSC subcarrier phase is exactly opposite
                 between successive lines, and also between successive fields. This
                 helps to cancel out any patterning effect due to the subcarrier itself.
                 In PAL, without the ¼ line offset, V component switching causes the
                 exact opposite to happen. The phase of each successive line, and field,
                 is the same, causing fine vertical stripes in the picture.
                 With the ¼ line offset this turns into a crawling dot pattern across the
                 image. The designers of PAL therefore added a further 25 Hz to the
                 subcarrier to 'spoil' this dot patterning effect and make it much less
                 noticeable. This is called the picture frequency shift because 25 Hz is
                 the same as the frame rate.

            Subcarrier frequency calculation
                 Thus the final calculation for the PAL subcarrier frequency is :-

                                        fsc = ((N – ¼) x L x fv) + fv

                                        where fsc = subcarrier frequency
                                              N   = chosen harmonic (284)
                                              L   = lines per frame (625)
                                              fv  = frames per second (25)

                                        fsc = ((284 – ¼) x 625 x 25) + 25
                                            = (283.75 x 625 x 25) + 25
                                            = 4,433,593.75 + 25
                                            = 4,433,618.75 Hz
                                            = 4.43361875 MHz
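                 The calculation above can be verified with exact arithmetic:

```python
from fractions import Fraction

# Checking the PAL subcarrier calculation with exact arithmetic.
N, L, fv = 284, 625, 25                       # harmonic, lines/frame, frames/s

fsc = (Fraction(N) - Fraction(1, 4)) * L * fv + fv
assert fsc == Fraction(443361875, 100)        # 4,433,618.75 Hz
print(float(fsc) / 1e6, "MHz")                # 4.43361875 MHz
```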



Bruch blanking
                   During the development of PAL it was found that the swinging burst
                   caused problems with some reference generators. If the same pattern of
                   vertical blanking were used in PAL as in NTSC, the first and last burst of
                   each field could have either a positive or negative V component.
                   Bruch blanking is a method of blanking the burst during the vertical
                   interval so that the first and last burst of every field always has a positive
                   V component.
                   The Bruch blanking pattern extends over 4 fields, so fields 5 to 8 repeat
                   fields 1 to 4. The table below shows the first and last lines to have burst
                   for all 8 fields of the PAL sequence.
                    Field               First line with burst          Last line with burst
                    1                             6                             310
                    2                           320                             622
                    3                             7                             309
                    4                           319                             621
                    5                             6                             310
                    6                           320                             622
                     7                             7                             309
                    8                           319                             621
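                   Assuming the 4 field repetition described above (fields 5 to 8
                   repeating fields 1 to 4), the table can be expressed as a small lookup.
                   The function name here is invented for the example:

```python
# Sketch: burst presence per the Bruch blanking table above.
# Fields are numbered 1..8; the pattern repeats with period 4.

FIRST_LAST = {1: (6, 310), 2: (320, 622), 3: (7, 309), 4: (319, 621)}

def burst_limits(field):
    """First and last line carrying burst for a field in the 8-field sequence."""
    return FIRST_LAST[(field - 1) % 4 + 1]

assert burst_limits(5) == (6, 310)    # field 5 repeats field 1
assert burst_limits(8) == (319, 621)  # field 8 repeats field 4
```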




                                       Figure 24
                 Component PAL video line timings. Y extends 0.7 V above blanking
                 (video) and 0.3 V below (sync); R-Y and B-Y extend ±0.35 V about
                 blanking. Front porch 1.55 µs, line sync 4.7 µs, back porch 5.8 µs;
                 line blanking 12.05 µs, active line 52 µs, total line 64 µs.

                                       Figure 25
                 Colour PAL vertical interval (showing the 8 field sequence). Each
                 vertical blanking interval contains 5 equalisation pulses, 5 broad pulses
                 and 5 further equalisation pulses. Odd fields start at the beginning of
                 line 1 and even fields half way through line 313, with successive fields
                 beginning and ending on alternating half lines.




       Different types of PAL
                 There are in fact 8 different types of PAL. The differences are small:
                 line durations and subcarrier frequencies vary between the PAL types.
                 The ITU has given a letter designation to each of the PAL types. These
                 are ‘B’, ‘D’, ‘G’, ‘H’, ‘I’, ‘M’, ‘N’ and ‘Combination N’. Great Britain uses
                 PAL I.


The disadvantages of PAL
                 PAL is more complex than NTSC (which is more complex than
                 monochrome television).
                 Monochrome television has a 2 field repeating relationship, i.e. 2 fields
                 make one complete frame. NTSC television has a subcarrier to
                 horizontal sync (SC/H) relationship that repeats every 4 fields.
                 The SC/H relationship of PAL however is even more complicated and
                 results in an 8 field relationship. This has an impact on editing systems
                 and special effects. Good editing with PAL signals can only be done at
                 the correct 8 field editing point. Decoding is also more complex. It is
                 more difficult to separate the colour and monochrome components.






Part 9                                                  SECAM television
                   SECAM is another approach to solving the inherent problems of NTSC.
                   However SECAM is not popular in the studio. Although clever, SECAM
                   is very much more complex than PAL.
                   SECAM is not covered in great detail here because it is not used very
                   much within the studio.
                   SECAM is similar to PAL, with 625 lines per frame, 312.5 lines per
                   field, and 50 fields per second. However that is where the similarity
                   ends.
                   SECAM transmits Y on every video line and the colour difference
                   signals on alternate video lines, i.e. R-Y, then B-Y, then R-Y, and so
                   on. Thus all SECAM receivers have a line memory so that the colour
                   difference signal from one line can also be used in the decoding of the
                   next line.
                   SECAM also uses a form of low frequency pre-emphasis on the colour
                   difference signals.
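                   The line-memory idea can be illustrated with a short sketch. The
                   function name and the sample values are invented for the example:

```python
# Sketch: SECAM sends one colour-difference signal per line, alternating
# R-Y and B-Y; the receiver's line memory supplies the missing one.

def decode_secam(lines):
    """lines: list of ('R-Y' or 'B-Y', value) per video line.
    Returns an (r_y, b_y) pair per line, reusing the stored value
    from the previous line for the signal not transmitted."""
    out, stored = [], {'R-Y': 0.0, 'B-Y': 0.0}
    for which, value in lines:
        stored[which] = value
        out.append((stored['R-Y'], stored['B-Y']))
    return out

lines = [('R-Y', 0.2), ('B-Y', 0.5), ('R-Y', 0.3)]
print(decode_secam(lines))  # [(0.2, 0.0), (0.2, 0.5), (0.3, 0.5)]
```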






Part 10                                                 The video camera
Types of video camera
       The camera
                 Although all the cameras mentioned here may be described as video
                 cameras, the true video camera is a unit that simply converts a moving
                 image into an electrical signal.
                 All surveillance and CCTV cameras tend to fit this description. Some
                 professional and broadcast cameras fit it too, although the more
                 professional ones tend to be dockable (see below).

       The video camcorder
                 The video camcorder differs from the camera in that it can record the
                 image it is looking at onto some storage medium it is holding within itself.
                 The most common camcorders today record to tape. An increasing
                 number of camcorders are using disk or solid-state technology instead.
                 Much of the design for the initial part of a camcorder is exactly the same
                 as it is for a camera. This is called the camcorder front end.
                 It is when we reach the signal processing part of the camcorder that
                 things start to look a little different.

       The dockable camera
                 The obvious next step is to split the camera part of the camcorder from
                 the recorder part. This has been done for a number of broadcast and
                 professional camcorder designs, and the technology is called 'dockable'.
                 The term ‘dockable’ is also used for some broadcast system cameras.
                 These cameras have no tape recorder section; they are purely a
                 camera. However the front end can be split from the back end, and form
                 part of a much larger system. Dockable system cameras are described
                 in the next section.
                 Dockable units allow you to ‘pick and mix’ the back and front halves of a
                 camera depending on the requirements of the shoot, technical reasons,
                 or on financial constraints.


System cameras
                 System cameras are used in television studios and outside broadcast
                 trucks. Their whole philosophy is that the camera forms part of a
                 complete environment that is operated and controlled by a team of
                 people, rather than just one person.
                 The beginning of the system is the lens. As with many professional and
                 broadcast cameras this will be removable, and will be chosen to match
                 the application for which it is being used.
                 The system camera itself is of the dockable variety. The front end has
                 the optical block and the circuitry for processing the signals from the

73                                                  Sony Broadcast & Professional Europe
Part 10 – The video camera

                   optical sensors. The back end handles the conversion into a standard
                   video signal. This may be a standard analogue composite or component
                   connection, a digital connection, maybe compressed, or something a
                   little more professional like a triax connector. Indeed triax connectors
                   have a true system approach as they allow for very high quality output
                   from the camera, as well as allowing for power and control signals to the
                   camera, all in one cable.
                   Connections from the camera are sent into camera control units. These
                   units allow the camera to be controlled remotely as well as allowing for
                   adjustments like colour correction.
                   This whole approach is to allow the cameraman to simply frame the
                   shot. Someone else back in the control room will see feeds from all the
                   cameras and will be able to ensure that there is a good balance between
                   them. Yet another person can take care of colour, ensuring that whites
                   look white and skin tone looks correct, for instance.


Parts of a video camera
        The lens
                   Every camera will use a lens to focus the image. In some cameras the
                   lens is fixed, i.e. it is not removable. In other cameras the lens can be
                   removed and replaced with another with different characteristics and/or
                   quality.
                 All removable lenses used to be screw-fitting. The screw fitting is not as
                 popular now as it used to be and is generally only used on cameras
                 where it is unusual to change the lens.
                 The popular method of changing lenses on most modern cameras is the
                 bayonet or breech mount. Rather than screwing the lens into place,
                 bayonet and breech lenses are removed and fitted with a simple twist
                 through about 90 degrees. The action is far quicker, and far more
                 positive, than the screw fitting.

             Lens electrical connections
                 Modern cameras generally require electrical connections between the
                 camera and the lens for up to three controls: focus, zoom and aperture.
                 A given lens may provide one, two, or all three.
                   These electrical connections allow the camera operator to control the
                   lens from the camera grips, rather than reaching forward to the lens
                   itself. This helps the cameraman balance the camera and keep it steady.
                   Alternatively the electrical connections can be fed into the camera
                   electronics to allow for automatic iris control, or focus and zoom from a
                   remote camera control box.
                   In some cases the electrical connection is made through the bayonet
                   mount itself. This is useful because it is a good positive action, and does
                   not involve any cables.
                   In other cases a separate connection may have to be made after the
                   lens is fitted.

Sony Training Services                                                                       74

       The sensor
                 Light from the lens passes into the camera itself and into a sensor.
                 There are various designs of sensor, but they all change the image into
                 an electrical signal.

            Colour camera considerations
                 Colour cameras need to split the image into the three primary colours.
                 This can be done using a specially designed colour sensor, or by first
                 splitting the image into three separate images, one for each primary
                 colour, in a special piece of optics called a dichroic splitter block. Each
                 output from the block is sent to a separate sensor, each of which is
                 really just a normal monochrome sensor.
                 Dichroic blocks and multiple sensors add size, weight and cost to the
                 camera design, but produce a better image. Therefore cheaper colour
                 cameras, and small colour cameras, use colour sensors. Professional
                 and broadcast cameras use dichroic blocks and three, or maybe four,
                 normal sensors.
                 However there are signs of a radical change in colour camera design,
                 allowing for high quality colour cameras with no dichroic block and only
                 one sensor. This is explained in the Parts on image sensors.

       Signal processing
                 Signals from the image sensors are passed into the camera’s
                 electronics, which buffer, amplify and convert the signals into a form
                 that can be used outside the camera. This could be a composite or
                 component signal, digital or analogue, baseband or compressed. The
                 available outputs will depend on the camera’s application, cost, and
                 sometimes size.
                 The camera’s electronics also allow the signals to be modified by the
                 operator. Many cameras have controls for brightness, and maybe some
                 sort of colour control. Professional and broadcast cameras often have
                 complex controls for colour balance, white and black level adjustments,
                 and adjustments like latitude and knee controls.

            Camcorder signal processing
                 Camcorders generally have similar signal processing to cameras, with
                 the same controls and the same outputs. However camcorder signal
                 processing also turns the signal from the sensors into some kind of
                 signal that can be recorded onto the internal medium.
                 In the case of a tape this would be a serial signal with some form of
                 channel coding. Channel coding is where the signal is modified in some
                 way to allow it to be recorded on tape effectively and without loss. Digital
                 camcorders often also use some form of error correction.
                 Disk storage also requires its own type of channel coding and error
                 correction.
                 Solid-state storage does not require any channel coding and may not
                 require any error correction.




Video camera specifications
             Resolution
                   Resolution is a measure of the resolving power of the camera.
                 CCTV cameras, colour or monochrome, are generally of the single-sensor
                 type. In a colour camera the sensor’s pixels are divided between the
                 three primary colours. Thus, for the same sensor density, there is a
                 difference in resolution between monochrome and colour cameras:
                 monochrome cameras will tend to have a higher resolution than colour
                 cameras.
                   Still cameras often use the number of pixels in the sensor as a measure
                   of resolution. However it is not a common method of defining resolution
                   in video cameras. Sensor resolution will give a basic figure for the
                   sensor itself. In many cases only a proportion of the pixels are actually
                   used in the picture. If specifications mention ‘active pixels’ or ‘effective
                   pixels’ rather than simply ‘pixels’, this will give greater assurance that all
                   these pixels are part of the picture.
                   The camera’s circuitry will also affect the sensor’s resolution. Badly built
                   circuitry will have a poor bandwidth that will reduce the resolution
                   provided by the sensor by the time the signal reaches the output. Having
                   a good sensor and bad circuitry is a waste. CCTV camera resolution
                   figures should always be related to the final output signal.
                 Resolution figures are sometimes given as vertical resolution. This is the
                 number of active lines in the picture. All PAL-based CCTV cameras are
                 built around the PAL television system, with 625 lines per frame, of
                 which 576 are active. All PAL-based CCTV cameras should therefore be
                 able to achieve a vertical resolution of 576 lines.
                   Resolution figures are normally given as horizontal resolution. This is a
                   measure of the number of individual pixels per line the camera is able to
                   resolve, and is measured in vertical lines. Horizontal resolution can
                   never be higher than the sensor’s horizontal resolution, and is often
                   lower, due to bandwidth limitations of the circuitry.
                 Horizontal resolution and bandwidth are related by the equation :-

                                  Bandwidth = 1 / Period

                 Each horizontal line lasts about 50 µs (strictly 52 µs). The pixels, or
                 vertical lines, are divided up into this 50 µs. The period is one clock
                 cycle, which produces two vertical lines, one black, one white.
                 Therefore :-

                                  Period = (50 × 10⁻⁶) / (Lines / 2)





                                         = (1 × 10⁻⁴) / Lines

                 The bandwidth can then be found by combining these two
                 equations :-

                                  Bandwidth = 1 / ((1 × 10⁻⁴) / Lines)

                                            = Lines × 10000

                 These equations boil down to a very simple rule. If the number of lines or
                 pixels is measured in hundreds, and the bandwidth in MHz, the two are
                 equal, i.e. 400 vertical lines = 4MHz bandwidth, 600 lines = 6MHz
                 bandwidth.
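This rule of thumb can be checked with a short sketch (Python; the function name is illustrative, and the 50 µs approximation from the derivation above is used):

```python
def horizontal_bandwidth_hz(lines):
    """Bandwidth needed to resolve a given number of vertical lines.

    One clock cycle produces two vertical lines (one black, one white),
    so the period is the active line time divided by (lines / 2).
    Uses the 50 us approximation from the derivation above.
    """
    active_line_s = 50e-6
    period_s = active_line_s / (lines / 2)
    return 1.0 / period_s

print(horizontal_bandwidth_hz(400))  # 400 lines -> 4 MHz
print(horizontal_bandwidth_hz(600))  # 600 lines -> 6 MHz
```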
                 Bandwidth, probably more than any other parameter, is the most
                 difficult figure to achieve. Bandwidth costs money and separates the
                 good cameras from the bad ones. For square pixels the horizontal
                 resolution would need to be 768 vertical lines, or pixels, which gives
                 almost 8 MHz bandwidth! No CCTV camera can achieve this. Cameras
                 achieving 600 vertical lines are considered good quality.

            Sensitivity
                 Sensitivity is a measurement of how much signal the camera produces
                 for a certain amount of light.
                 Sensitivity can be measured as the minimum amount of light that will
                 give a recognisable picture, sometimes called ‘minimum illumination’.
                 Figures below 10 lux should be possible for standard CCTV cameras.
                 However, although this method provides an easy guide for CCTV
                 planners and installers, it is a highly subjective measurement: what is a
                 recognisable picture to one person may be unrecognisable to another.
                 Professional and broadcast cameras use a different, more quantifiable
                 method of measuring sensitivity. The camera is pointed at a known
                 light source, often a 2000 lux source with a colour temperature of
                 3200 K. The iris is then closed until the output is exactly 700 mV.
                 Thus a reasonably sensitive camera may be f11 at 2000 lux, whereas a
                 less sensitive camera may be f8 at 2000 lux.
                 CCTV camera specifications are often not so consistent. Different lux
                 levels are specified. In the case of low-light and night cameras normal
                 colour temperatures are meaningless, because the camera is not
                 designed to be lit with standard 3200 K light! These cameras often




                    specify the minimum illumination sensitivity, and should quote figures
                    very much less than 1 lux.
                    Dome camera manufacturers specify sensitivity with the dome removed,
                    because the figure is better than with it fitted. Some give figures with the
                    dome fitted as well. Since the camera would normally be used with the
                    dome fitted, this factor needs to be remembered: dome cameras need to
                    be more sensitive than other cameras if they are to overcome the losses
                    through the dome itself.

             Signal to noise ratio (SNR)
                    A camera’s SNR is found by comparing the amount of video signal to the
                    amount of noise, in decibels, with the equation :-
                                     SNR = 20 log₁₀ (video / noise) dB
                    As a guide, an SNR of about 20dB is poor and is probably not viewable.
                    30dB will give a barely distinguishable image. 50dB is acceptable and
                    60dB good.
                    As a ratio of video signal to noise, 20dB is 10:1, and 60dB is 1000:1.
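These dB figures can be converted to and from plain voltage ratios with a couple of one-line helpers (a sketch; the function names are illustrative):

```python
import math

def snr_db(video, noise):
    # SNR in decibels: 20 log10(video / noise)
    return 20 * math.log10(video / noise)

def ratio_from_db(db):
    # Inverse: the video-to-noise ratio for a given SNR in dB
    return 10 ** (db / 20)

print(ratio_from_db(20))   # 20 dB is a ratio of 10:1
print(ratio_from_db(60))   # 60 dB is a ratio of 1000:1
```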



             Gain
                    CCTV cameras with automatic gain control (AGC) add another
                    complication to the specifications. Manufacturers will quote sensitivity
                    figures with AGC switched on. However they will generally quote SNR
                    figures with the AGC switched off. The reasons for this are obvious. It
                    makes the figures look better!

             Output formats
                    CCTV cameras use many different video output formats, from the simple
                    analogue composite output fitted to most cameras, through the analogue
                    Y-C output format, digital formats of one kind or another, and direct
                    computer network outputs used by some of the latest cameras.
                    Specifications always show the SNR, sensitivity, etc. from the best
                    output. The most common output connection people use is the analogue
                    composite output. Most cameras have it fitted and it is a simple
                    connection. However it is also the worst quality output.






                                                                                    Lenses
                     A lens is a transparent curved object capable of bending light. Most
                     lenses are made from glass, although any clear material will make a
                     lens. Different materials have different optical and physical
                     characteristics, some of which are better than those of glass.
                     A lens relies on a basic property that any transparent material has
                     with respect to light: its ability to bend it.


Refraction
                     If a light ray passes from one transparent material to another it is bent
                     according to the relative refractive indices of the two materials.

            Snell’s law
                     Snell’s law defines the behaviour of the light ray. It states :-
                              n1 sin i = n2 sin r
                     Where n1 is the refractive index of one material and n2 is the refractive
                     index of the other; i is the angle of incidence (approach angle) and r is
                     the angle of refraction (leaving angle).
                     Every material has a different refractive index.
                     The refractive index of air is very nearly 1. Therefore the refractive
                     index of any transparent material can be found by rearranging the
                     equation above, thus :-

                              n2 = n1 sin i / sin r, and since n1 = 1,

                                     n2 = sin i / sin r
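Snell's law translates directly into code. A minimal sketch (example values assumed for illustration):

```python
import math

def refraction_angle_deg(n1, n2, incidence_deg):
    """Angle of refraction r, in degrees, from Snell's law: n1 sin i = n2 sin r."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# A ray entering glass (n about 1.5) from air (n = 1) at 30 degrees
# is refracted to about 19.5 degrees.
print(refraction_angle_deg(1.0, 1.5, 30))
```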

            The coin in the tank of water




Figure 26                                                                       The coin in the tank

                     Light passing from water to air, or vice versa, is bent because water
                     has a refractive index different to that of air.
                     Imagine a tank of water with a coin sitting at the bottom of it. The rays of
                     light coming from the coin pass upwards through the water and out into

Part 11 – Lenses

                   the air. As they pass from water to air they are bent by an angle that
                   depends on the angle from which you are viewing, measured from the
                   vertical.
                   Thus if you look at the coin from any angle other than directly above,
                   it will appear to be in a different position than it actually is.


The block of glass
                   The next step is to imagine a block or thick sheet of any clear material,
                   like glass.




Figure 27                                                       Refraction through a block of glass



                   Light passing through the glass at an angle is bent as it passes from air
                   to glass and out again from glass to air. The angle the light ray bends as
                   it passes from air to glass is exactly the same but opposite to the angle
                   as it passes from glass to air.
                   The ray of light on the incoming side of the glass is parallel but displaced
                   to the outgoing side. This displacement can be found by the following
                   equation :-
                                     d = t sin i (1 − 1/n)

                   Where d is the displacement, t is the thickness of the glass, i is the
                   incident angle and n is the refractive index of the glass. (This simple
                   form is a good approximation for small angles of incidence.)
                   Notice that the refraction angle r does not appear in the equation:
                   because the entrance and exit rays are parallel, it is not needed.
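The displacement formula can be sketched as a one-liner (example values assumed; the comment notes that this is the simple small-angle form):

```python
import math

def slab_displacement(thickness, incidence_deg, n):
    # d = t sin i (1 - 1/n): the simple form from the text
    # (a good approximation for small angles of incidence)
    return thickness * math.sin(math.radians(incidence_deg)) * (1 - 1 / n)

# A 10 mm thick slab of glass (n = 1.5) at 10 degrees incidence
# displaces the ray sideways by about 0.58 mm.
print(slab_displacement(10, 10, 1.5))
```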


The prism
                   A prism is a little like the block of glass we have just looked at, except
                   that the two sides of the glass are not parallel with one another.
                   The most common prism is a block of glass with a triangular section.
                   The sides of the triangular section can be at any angle to one another



                     although most prisms have something approaching an equilateral
                     triangular section.

            Light bending properties of a prism
                     Light entering one side of the prism is bent and leaves the prism in a
                     different direction. The angle of bend is called the deviation and can be
                     found (for a thin prism) by the equation :-
                                    D = A (n-1)
                     Where D is the deviation, A is the angle of the prism, and n is the
                     refractive index of the glass (or whatever material the prism is made of).
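The deviation formula is equally direct (a sketch; D = A(n − 1) is the thin-prism, small-angle form):

```python
def prism_deviation_deg(apex_deg, n):
    # D = A (n - 1): deviation of a thin prism with apex angle A
    return apex_deg * (n - 1)

# A thin 10-degree glass prism (n = 1.5) deviates light by about 5 degrees.
print(prism_deviation_deg(10, 1.5))
```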




Figure 28                                                              Refraction through a prism




            Colour splitting properties of a prism
                     White light is made up of many different colours. Each colour has a
                     different wavelength.




Figure 29                                                            Splitting light through a prism



                     When a ray of white light passes from one transparent material to
                     another the different wavelengths are refracted by different angles. This
                     has the effect of splitting the white light into its constituent colours.


The convex lens
                     The convex lens is a little like a series of prisms placed next to each
                     other, all with slightly different angles between their two sides.








Figure 30                                                               The convex lens as prisms



                   In fact, if you increase the number of prisms, making them smaller and
                   smaller, you will eventually have a perfect convex lens.
                   Convex lenses have a focal point: if parallel light enters one side of the
                   lens it is focused to a single point. This is the basis of all lens designs. If
                   the lens did this perfectly, all lens designs would be just one convex
                   element. However, as we will see, the convex lens is not perfect, and
                   certain things have to be done to eliminate its imperfections.




Figure 31                                                                         The convex lens



                   The sides of most convex lenses are made from part of a sphere. While
                   this is easy to produce, and perfectly good enough for most lenses, it
                   can present problems for certain lenses.




Figure 32                                                  The convex lens as part of two spheres





The concave lens
                 The concave lens is effectively the opposite of the convex lens. In the
                 same way it can be seen as an infinite arrangement of prisms and its
                 two sides are based on parts of a sphere.




Figure 33                                                         The concave lens as prisms




Figure 34                                                             The concave lens focus

                 Concave lenses also have a focal point. However the concave lens focal
                 point is a ‘virtual’ focal point, on the approach side of the lens. This
                 focal point has no practical use in the way a convex lens’s does, but is
                 used mathematically to calculate the properties of lens designs that
                 include concave elements.




Figure 35                                                       The concave lens as spheres





Chromatic aberration
                    There are two types of chromatic aberration, axial (sometimes called
                    longitudinal) chromatic aberration and lateral (sometimes called
                    transversal) chromatic aberration.

            Axial chromatic aberration
                    The basic prism showed how it is possible to split white light into its
                    constituent colours. Any refractive surface will bend different coloured
                    light by different degrees. This effect is called dispersion.




Figure 36                                                               Axial chromatic aberration

                    A lens is really an infinite number of small prisms laid out in a particular
                    way. It therefore stands to reason that a lens will split white light into its
                    constituent colours.
                    This effect is called axial, or longitudinal, chromatic aberration and
                    presents problems for lens designers.
                    In a basic convex lens, when parallel rays of white light enter one side,
                    the light is split into its constituent colours, each colour having a
                    different focal point depending on its wavelength. Shorter wavelength
                    colours, towards the blue/violet end of the spectrum, are refracted more
                    and have a shorter focal length.

                Correcting axial chromatic aberration
                    In order to correct the axial chromatic aberration caused by one lens
                    element you need to add another lens element with the opposite error.
                    The required overall effect of most lens designs is to produce a perfect
                    convex lens. However a perfect convex lens does not exist. By
                    cementing a concave lens onto the convex lens you can eliminate the
                    axial chromatic aberration of the basic convex lens.
                    This is why lens designers cement lens elements together.
                    However, simply cementing on a concave lens with exactly the opposite
                    effect also eliminates the focusing effect, and the combination ends up
                    behaving like a flat piece of glass!
                    The trick is to use a convex lens and a concave lens with different
                    refractive indices (and hence different dispersions). Thus, although the
                    chromatic aberration is eliminated, the two lenses together still focus to
                    a point.
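The text does not give a formula for the combined focal length, but the standard thin-lens relation for two elements in contact, 1/f = 1/f1 + 1/f2, illustrates the point: a weaker concave element (negative focal length) still leaves a net converging lens. A hedged sketch with assumed example values:

```python
def doublet_focal_length(f_convex, f_concave):
    # Thin lenses in contact: 1/f = 1/f1 + 1/f2
    # (standard thin-lens relation, not taken from this text).
    # The concave element has a negative focal length.
    return 1 / (1 / f_convex + 1 / f_concave)

# A 100 mm convex element cemented to a -200 mm concave element
# still converges, with a combined focal length of 200 mm.
print(doublet_focal_length(100, -200))
```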






Figure 37                                                                           Lens doublet

                     This design is called an achromatic doublet.




Figure 38                                                            Various lens doublet designs

                     Achromatic doublets come in many different forms, depending on the
                     particular use for which they are intended. The important thing is that
                     the convex element is fatter in the middle than at the edge, and the
                     concave element is thinner in the middle than at the edge.

            Lateral chromatic aberration
                     Lateral chromatic aberration is a less obvious problem than axial
                     chromatic aberration. It arises from the same limitation of lens elements
                     but affects the image laterally, causing colour fringing near the outer
                     edge of the image, where the different colours have been split apart.
                     Lateral chromatic errors affect lenses with very long or very short focal
                     lengths, i.e. long telephoto lenses and fish-eye lenses.

                Correcting for lateral chromatic aberration
                     Lateral chromatic aberration in telephoto lens designs can be reduced
                     by not using refractive elements in the design. Mirror lenses use curved
                     mirrors instead of lenses: no refraction, and therefore no dispersion.
                     The other method is to use a low-dispersion material such as fluorite.
                     However this material is difficult and expensive to work, and is affected
                     by normal air. Fluorite lens elements can therefore only be used as
                     internal elements, where they can be protected by a normal glass
                     element.






Spherical aberration
                    As shown before, the two sides of most lenses are designed as parts of
                    a sphere. This makes manufacture easy and is perfectly good in most
                    cases.
                    However making lens sides as parts of a sphere is not actually correct:
                    rays passing through the edge of the lens are focused to a slightly
                    different point from rays passing through the middle.
                    Most lens elements are small enough for this not to be a problem.
                    However lens designs with large elements, such as some television
                    camera lenses, lenses intended for dim lighting conditions and some
                    wide angle lenses, can suffer from spherical aberration.
                    One answer is to use lens doublets or triplets where the spherical
                    aberration of one element is eliminated by another.

            Aspheric elements
                    Another answer is to use lenses where the sides are not part of a
                    sphere. The perfect lens is slightly flatter at the edge than in the middle,
                    making the refractive power of the lens greater nearer the middle of the
                    lens.


                    These so-called aspherical lenses are difficult to produce, especially to
                    a high quality, making lens designs with good spherical aberration
                    characteristics more expensive.




Figure 39                                                                     The aspherical lens


            Coma
                    Coma is a distortion effect that shows up as fuzziness at the edge of
                    the image. It is closely related to spherical aberration, but shows itself in
                    rays of light passing through the lens at an oblique angle.


Properties of the lens
            The principal element
                    Although lenses are made up from a collection of convex and concave
                    lens elements, doublets and triplets, and even mirror elements, they can
                    all be thought of as a single perfect convex lens element. Lenses use



                  many different elements to correct aberrations, reduce size and allow
                  the lens to be controlled.
                  The theoretical single perfect convex lens element is referred to as the
                  principal element. Its position is called the principal point.

       Focal point
                  The focal point is where light from infinity (i.e. parallel light) is brought to
                  a single point. Convex lenses and concave mirrors have a real focal
                  point. Concave lenses and convex mirrors have a virtual focal point.

       Focal length
                  The focal length can be found from the thin-lens formula:
                  1/f = 1/u + 1/v     where f = focal length
                                                  u = object distance
                                                  v = image distance
                  This can be simplified when the object is at infinity: the equation
                  becomes f = v. So the focal length can simply be defined as the
                  distance from the principal element to the focal point.
                  The focal length has an effect on the field of view of a lens, and its
                  magnification. A lens with a short focal length has a wide field of view,
                  and low magnification, and is called a wide-angle lens. A lens with a long
                  focal length has a small field of view, and high magnification, and is
                  called a telephoto lens.
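                  The thin-lens formula above can be sketched numerically. A minimal
                  Python example (the 50 mm focal length and 2 m object distance are
                  illustrative values, not taken from the text):

```python
# Thin-lens equation: 1/f = 1/u + 1/v (simple real-image case from the text)

def image_distance(f_mm: float, u_mm: float) -> float:
    """Solve 1/f = 1/u + 1/v for the image distance v."""
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

# A 50 mm lens focused on an object 2 m (2000 mm) away:
print(round(image_distance(50.0, 2000.0), 2))  # 51.28 mm, just beyond f

# As the object distance tends to infinity, v tends to f:
print(round(image_distance(50.0, 1e9), 3))     # 50.0 mm
```

                  Note how the image plane sits slightly beyond the focal point for any
                  finite object distance, which is why a lens needs a focusing mechanism.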

       Aperture
                  The aperture of a lens is a way of expressing the amount of light passing
                  through it. Its maximum value is limited by the lens’s pupil, or iris.
                  Aperture is controlled by a multi-bladed iris mechanism. The larger the
                  aperture, the more light is passed through the lens. The aperture is
                  normally expressed as the f-number.
                  This f-number is expressed mathematically as:
                            N = f/d       where f = focal length,
                                            d = diameter of the entrance pupil
                  For an aperture of 2 this would normally be written as f/2.
                  The larger the aperture, the smaller the f-number. When the f-number
                  doubles, the light passing through the lens is reduced by a factor of 4.
                  The markings on a lens are therefore normally spaced in ratios of 1.4
                  (√2), i.e. 1.4, 2, 2.8, 4, 5.6, 8 and so on. Each step or ‘f-stop’
                  represents a halving of the light.
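                  The √2 progression is easy to verify in a few lines of Python (a sketch;
                  the values marked on a lens are conventional roundings of exact powers
                  of √2):

```python
import math

# Each f-stop multiplies the f-number N by sqrt(2);
# the light passed is proportional to 1/N^2.
stops = [math.sqrt(2) ** i for i in range(7)]
print([round(n, 2) for n in stops])
# [1.0, 1.41, 2.0, 2.83, 4.0, 5.66, 8.0]
# (marked on the lens with the conventional roundings 1, 1.4, 2, 2.8, 4, 5.6, 8)

def relative_light(N, N_ref=1.0):
    """Light through the lens relative to f/N_ref (pupil area is prop. to 1/N^2)."""
    return (N_ref / N) ** 2

print(relative_light(2.0))                                      # 0.25: doubling N cuts light by 4x
print(round(relative_light(2 * math.sqrt(2)) / relative_light(2.0), 3))  # 0.5: one stop
```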

       Depth of field
                  The depth of field is the range of object distances for which the image is
                  within a permissible degree of sharpness. Only objects at the focused
                  distance are perfectly in focus; objects closer and further away are
                  slightly out of focus. The depth of field is therefore not an absolute
                  figure, but is derived from the concept of a circle of confusion: the
                  largest blur circle that still looks acceptably sharp.

87                                                     Sony Broadcast & Professional Europe
Part 11 – Lenses

                                (Diagram: incoming light passes through the lens and iris to the focal
                                point, with cones of confusion forming either side of focus.)
Figure 40                                                                     Depth of field

                                Depth of field is dependent upon focal length and the aperture of the
                                lens. A long focal length (telephoto) lens has a small depth of field. The
                                smaller the aperture (bigger the f-number) the larger the depth of field.
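                                The aperture effect can be made concrete with the standard thin-lens
                                depth-of-field formulas. A Python sketch, assuming a 0.03 mm circle of
                                confusion (a common 35 mm-format figure, not stated in the text):

```python
def depth_of_field(f, N, u, c=0.03):
    """Near and far limits of acceptable sharpness (all distances in mm).

    Standard thin-lens depth-of-field formulas; c is the diameter of the
    permissible circle of confusion (0.03 mm is a typical 35 mm-format
    assumption, chosen here for illustration).
    """
    H = f * f / (N * c) + f                       # hyperfocal distance
    near = u * (H - f) / (H + u - 2 * f)
    far = u * (H - f) / (H - u) if u < H else float("inf")
    return near, far

# A 50 mm lens focused at 3 m: stopping down from f/2 to f/8
# (smaller aperture, bigger f-number) widens the depth of field.
for N in (2, 8):
    near, far = depth_of_field(50, N, 3000)
    print(f"f/{N}: {near:.0f}-{far:.0f} mm (DoF {far - near:.0f} mm)")
```

                                With these assumptions the depth of field grows from roughly 0.4 m at
                                f/2 to about 1.8 m at f/8, matching the rule stated above.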


                                (Diagram, two panels: with the iris at a large aperture the acceptable
                                confusion spans only a narrow depth of field around the focal point;
                                with the iris at a small aperture the depth of field is much wider.)
Figure 41                                                   Change in depth of field with aperture



The concave and convex mirrors
                                Concave and convex mirrors are widely used in optics. They provide an
                                alternative to lenses without the disadvantages associated with light
                                refracting through glass (or whatever material the lens is made from).



                 Mirrors have the opposite effect on incoming light to that of lenses.
                 Concave lenses disperse incoming light; concave mirrors focus light to a
                 point. Convex lenses focus light to a point; convex mirrors disperse it.
                 Mirrors are very useful for long lenses. They allow a lens design to be
                 ‘folded’, reducing the overall length of the lens.


Lens types
       Normal lens
                 A normal lens is one which produces an image equivalent to that from
                 the human eye. This is a little subjective, as the image seen by the
                 human eye is greatly distorted; the normal lens aims to produce an
                 image with nearly the same magnification, distortion and perspective as
                 the centre of the human field of view.
                 As a rough guide, the focal length of the normal lens is approximately
                 the same as the image diagonal. For a 35mm camera a normal lens is
                 one with a focal length of 50mm.
                 Normal lenses are also able to attain a lower f-number (a larger
                 maximum aperture), partly because the optics are better at this focal
                 length, but also because the arithmetic for calculating the f-number
                 depends on the focal length and is more favourable for the normal lens.
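                 The rule of thumb is easy to check. A short Python sketch (the 2/3-inch
                 sensor dimensions are an assumed, typical figure, not taken from the
                 text):

```python
import math

def normal_focal_length(width_mm, height_mm):
    """Rule of thumb from the text: normal focal length ~ image diagonal."""
    return math.hypot(width_mm, height_mm)

# 35 mm still frame is 36 x 24 mm:
print(round(normal_focal_length(36, 24), 1))    # 43.3 mm; 50 mm is the convention
# A 2/3-inch video sensor is roughly 8.8 x 6.6 mm (assumed typical figure):
print(round(normal_focal_length(8.8, 6.6), 1))  # 11.0 mm
```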

       The telephoto lens
                 The telephoto lens is a lens with a focal length greater than the normal
                 lens, although the term “telephoto” is normally reserved for lenses with a
                 focal length greater than twice that of the normal lens. For the 35mm
                 format, a telephoto lens is considered to be one with a focal length of
                 more than 100mm.
                 Telephoto lenses can magnify objects from a long distance. They have
                 minimal image distortion, and compress perspective.

       Wide-angle lens
                 A wide-angle lens is one with a focal length smaller than the normal
                 lens. It has a wide field of view permitting a wide vista to be captured.
                 For a 35mm camera a wide-angle lens is one with a focal length of
                 smaller than 50mm although the term “wide angle” is normally reserved
                 for lenses with a focal length smaller than about 30mm.
                 Wide-angle lenses make objects appear unnaturally small; they distort
                 the image and stretch perspective.
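                 The link between focal length and field of view can be illustrated with the
                 standard angle-of-view formula, AOV = 2·arctan(d/2f). A Python sketch
                 using illustrative focal lengths (the choice of 24, 50 and 200 mm is an
                 assumption, not from the text):

```python
import math

def angle_of_view(f_mm, dim_mm=36.0):
    """Angle of view across an image dimension dim for focal length f.
    36 mm is the width of the 35 mm-format frame."""
    return math.degrees(2 * math.atan(dim_mm / (2 * f_mm)))

for f in (24, 50, 200):              # wide-angle, normal, telephoto
    print(f"{f} mm: {angle_of_view(f):.1f} degrees")
# 24 mm: 73.7 degrees
# 50 mm: 39.6 degrees
# 200 mm: 10.3 degrees
```

                 The shorter focal lengths give the wide fields of view described above,
                 while the long focal length gives a narrow, magnified view.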

       The fisheye lens
                 A fisheye lens is an extreme wide-angle lens. As the focal length
                 becomes shorter it becomes increasingly difficult to maintain a
                 geometrically correct (rectilinear) image, i.e. one in which straight lines
                 remain straight. When this design target is abandoned the image
                 becomes curved at the edges, but a very wide view becomes possible.
                 At the extreme it is possible to have a fisheye lens with a totally circular
                 image and an angle of view of more than 180 degrees.

        The zoom lens
                   A zoom lens is a lens with a variable focal length. For a 35mm camera it
                   is common to use a 30-100mm zoom lens, which can therefore change
                   the field of view from wide angle to telephoto. For television camera
                   lenses it is common to have a zoom lens with a focal length range of
                   8mm to 150mm. For specialist applications, such as sports, there are
                   lenses with zoom ratios of more than 40:1.

        Prime lens
                   A prime lens is any fixed focal length lens, i.e. any lens that is not a
                   zoom lens; the name reflects their superior quality. While zoom lenses
                   are very versatile, and their quality has reached remarkable levels in the
                   last decade or so, they are still a compromise. The best prime lenses
                   are always better quality than the best zoom lenses.

        Mirror lenses
                   Mirror lenses use a combination of normal lenses and mirrors. Telephoto
                   lenses are sometimes mirror lenses. Mirrors allow for compact designs
                   for telephoto lenses with very long focal lengths.
                   A characteristic of mirror lenses is that anything out of focus appears as
                   a donut shape, rather than a simple blur.


Extenders and adaptors
                   There are various extenders and adaptors that can be fitted between the
                   lens and the camera. These can be used to allow lenses with one
                   mounting scheme to be fitted to a camera with another mounting
                   scheme. They can also be used to alter the characteristics of the lens.

             Mount adaptors
                   Mount adaptors allow lenses intended for one mount to be fitted to a
                   camera with another mount. They are popular with still cameras, where
                   there are many different mounts. Optics tend to suffer because the lens
                   is pushed away from the camera and the flange-to-film distance is no
                   longer optimal.
                   All camera manufacturers have mechanical and electrical connections
                   between the lens and camera to allow the camera to control the lens.
                   These connections are very specific to the manufacturer. Adaptors
                   cannot guarantee to provide a match for these connections between the
                   lens and camera.
                   Some lens manufacturers offer lenses with no specific mount. These
                   lenses are designed slightly shorter than they should be. You select
                   which mount you want and the appropriate adaptor is fitted, building the
                   lens up to the correct length. Mechanical and electrical connections are
                   much more likely to work with this kind of mount adaptor.



            2x, 3x etc. adaptors
                 This kind of adaptor increases the focal length of the lens. The simplest
                 of these is little more than a tube pushing the lens away from the camera
                 and boosting the focal length as a result. The better ones have lens
                 elements in them to improve the optics. No matter how good the
                 adaptor, it is always a compromise: fitting a 2x adaptor to a 25mm lens
                 will never attain the quality of a true 50mm lens. However, adaptors
                 provide a way of effectively doubling the number of lenses you have,
                 with only a marginal reduction in quality.
                 Mechanical and electrical quality can vary just as with the optical quality.
                 Some adaptors are able to transfer the mechanical and electrical
                 connections between the lens and camera better than others.
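                 Because N = f/d and an extender leaves the entrance pupil unchanged,
                 a 2x adaptor doubles both the focal length and the f-number, costing two
                 stops of light. A small illustrative sketch:

```python
def with_extender(f_mm, N, factor=2.0):
    """A tele-extender multiplies the focal length by `factor`; since
    N = f/d and the entrance pupil d is unchanged, N is multiplied too."""
    return f_mm * factor, N * factor

f2, N2 = with_extender(25, 2.0)   # 25 mm f/2 lens plus a 2x adaptor
print(f2, N2)                     # 50.0 4.0 -> behaves as a 50 mm f/4 lens
print((2.0 / N2) ** 2)            # 0.25: two stops (4x) less light
```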


Filters
                 Filters are sometimes used to correct something in the picture, to protect
                 the camera from damage, or to add some kind of special effect. A filter
                 can be placed in front of the lens or built in behind the lens.

       Built-in filters
                 Filters placed behind the lens are always built in, because they would
                 otherwise push the lens away from the camera and alter its optical
                 characteristics. Two types exist: camera built-in and lens built-in.
                 Lens built-in filters are used when a filter cannot be put in front of the
                 lens. This is particularly true of ultra-wide-angle and fisheye lenses,
                 because the front lens element tends to protrude from the front of the
                 lens. A slot somewhere at the back of the lens allows glass or gelatin
                 filters to be slotted in.
                 Camera built-in filters are common in video cameras. There are often
                 about five of these filters mounted in a wheel. By turning a small knob
                 on the camera, the operator can rotate the wheel to bring different filters
                 between the lens and the sensor. Neutral density filters are used to cut
                 the amount of light in bright conditions. Yellow-tinted filters are used to
                 correct the colour temperature for daylight operation.

       Front filters
                 There are a myriad of different filters that can be fitted in front of the
                 lens. Professional still camera users can screw filters directly to the front
                 of the lens. These screw-in filters are specifically designed for the
                 diameter of the front of the lens.
                 If a screw-in filter cannot be found, an adaptor can be fitted to the front
                 of the lens. Once fitted, this allows a wide range of standard square filter
                 sheets to be placed in front of the lens, removing the need to find a
                 screw-in filter of the correct diameter.
                 This method is popular with movie cameras and video cameras. The
                 adaptor is generally called a matte box.






Part 12                                             Early image sensors
Selenium detectors
                   Selenium was the first photoelectric material to be discovered, in 1873.
                   It was used in the first mechanical experiments in television, such as the
                   Nipkow disk system.
                   Selenium is classed as a photoconductive material because its
                   resistance changes when exposed to light.
                   Photovoltaic materials produce a voltage potential across themselves
                   under the influence of light.


The Iconoscope
                   The Iconoscope was the first image sensor of any commercial
                   importance. It consisted of an evacuated glass enclosure with a tube
                   fixed to it enclosing an electron gun.
                   The main enclosure had a screen made from a sandwich of
                   photosensitive particles, called a mosaic, a thin mica insulation layer and
                   a conductive sheet backing. The plate acted like a capacitor.




Figure 42                                                                      The Iconoscope

                   Light from the lens could enter the enclosure through a window and land
                   on the mosaic, releasing electrons which were attracted away towards
                   the anode. Thus a positive charge image built up on the surface of the
                   mosaic, proportional to the intensity of the light.
                   The electron gun fired electrons in a raster scan at the mosaic. Any
                   positive charge was cancelled by absorption of electrons from the beam.
                   This absorption was detected by the conductive plate and output as a
                   signal at the signal electrode.
                   The rest of the electrons bounced off the mosaic to be picked up by the
                   anode and drawn away to the anode electrode.





The Orthicon tube
                 The Orthicon tube was invented by Iams and Rose at RCA in 1939.
                 The Iconoscope used a high-velocity electron beam, which gave rise to
                 secondary emission of electrons and effectively reduced the tube’s
                 ability to collect all the electrons released purely by photoemission.
                 The Orthicon tube had a much reduced anode voltage. This tended to
                 make the mosaic saturate with electrons when no light was present: any
                 further electrons would not strike the surface. Thus no signal appears
                 when there is no light, giving better black reproduction.
                 Any electrons not striking the surface of the mosaic return back down
                 the same path as the electron beam, to be soaked away in a collector
                 next to the electron gun.
                 The low anode voltage, and the resulting low velocity of the electron
                 beam, meant that the beam was subject to interference from stray
                 electric fields near the mosaic. This resulted in a loss of resolution
                 compared to the Iconoscope.
                 The beam focusing and deflection were better than in the Iconoscope. A
                 long focus coil was used, with either electrostatic or electromagnetic
                 deflection. The beam retained its helical nature, which aided focusing.
                 The beam was also deflected so that it always struck the target at a
                 perpendicular angle.




Figure 43                                                           The image orthicon tube



The Image Orthicon tube
                 The Image Orthicon was an improvement over the Orthicon. In this
                 design light was focused onto a photocathode plate. The released
                 electrons were attracted by an accelerating grid back into the tube,
                 towards a two-sided glass target plate. Thus an image in electrons was
                 formed on the target.
                 A thin mesh was placed in front of the target. The electrons from the
                 photocathode passed straight through the mesh and onto the target;
                 however, any secondary electron emission was soaked up by the mesh.
                 The electron beam was of a similar low-velocity, perpendicular design to
                 the Orthicon’s. It scanned a raster image on the back of the target. Any
                 electrons not soaked up by the target were returned back down the
                 beam to be collected by the anode, next to the electron gun.
                 Thus the return beam was a raster scan of the charge, and hence of the
                 image.


 The Vidicon tube




Figure 44                                                                    The Vidicon tube


                    The Vidicon tube was introduced by RCA in 1950. It used an antimony
                    trisulphide target. This is a photoconductive material: the resistance
                    across the target changes when it is exposed to light, and within certain
                    limits the change of resistance is proportional to the intensity of the light.




 Figure 45                                                                     The Vidicon tube

                    The back of the target is scanned by an electron gun with the same
                    basic design features as the low velocity perpendicular design used in
                    the Orthicon tube.


                 The target is biased to the anode voltage. As the beam strikes the back
                 of the target, the current flow to the front is inversely proportional to the
                 resistance, which is inversely proportional to the light intensity.
                 Thus the anode bias voltage varies as a raster scan of the image.


Variations on the Vidicon design
                 There were various improvements on the basic Vidicon tube design. The
                 Plumbicon was introduced by Philips in 1962. It used lead oxide as the
                 target material.
                 The Saticon was another design, with a target made from arsenic,
                 selenium and tellurium.
                 The Diode Gun Plumbicon used a diode-type electron gun in place of
                 the conventional triode gun.
                 These later designs offered better resolution, greater contrast and better
                 colour balance than the basic Vidicon.






Part 13                                                                                                Dichroic blocks
The purpose of a dichroic block
                            The purpose of a dichroic block is to split an incoming colour image into
                            its three primary colours. Most of the block is coated in black paint to
                            stop light getting in, except for a window to let the incoming colour image
                            in and three windows to let the outgoing primary images out.
                            They are fitted just behind the lens of a colour video camera. A sensor is
                            placed on each outgoing window, one for each primary. Each sensor
                            measures the brightness of its primary and outputs a video signal for it.


Mirrors and filters
                            Various designs have been created over the years, and most are now
                            beginning to look very similar. Two basic design patterns are now in
                            use. The first is generally simply called a prism block or dichroic block.
                            The other is called a cross block or X block.

            Conventional dichroic blocks
                            The conventional prism block consists of at least three prisms, glued
                            together with a transparent epoxy cement. Light enters the incoming
                            window at the front and passes through the first prism. The back surface
                            is angled, and is coated with a red dichroic mirror. The choice of
                            material, and its thickness, define the colour that is reflected;
                            manufacturers can ‘tune’ the dichroic mirror by altering the coating
                            thickness. The red light is reflected once more off the front of the first
                            prism and out through the red outgoing window. A red filter trims the
                            light before it strikes the red sensor.
                            (Diagram: incoming white light from the lens strikes the red dichroic
                            mirror; the reflected red light passes through a red trim filter to the R
                            sensor. The remaining cyan light meets the blue dichroic mirror; the
                            reflected blue light passes through a blue trim filter to the B sensor, and
                            the remaining green light passes through a green trim filter to the G
                            sensor.)
Figure 46                                                                                 Dichroic block




                                     The cyan light passes through the second prism. The back of this prism
                                     is coated with a blue dichroic mirror that reflects blue light, letting
                                     everything else through (green). The blue light passes out through the
                                     blue outgoing window, through a blue trim filter and onto the blue
                                     sensor. Likewise the remaining green light passes out through the green
                                     outgoing window, through a green trim filter and onto the green sensor.
                                     It is worth noting that the sensors are basically the same device. Some
                                     manufacturers may carefully select the sensors that perform best for
                                     each colour; most will not.

            Cross dichroic block
                                     The cross dichroic block consists of four small triangular prisms, glued
                                     together to make a small cube with two intersecting planes. Some faces
                                     of each prism are coated with dichroic mirrors, or trim filters.
                            (Diagram: incoming white light from the lens enters the front of the cube;
                            the blue dichroic mirror on one diagonal reflects blue light out through a
                            blue trim filter to the B sensor, the red dichroic mirror on the other
                            diagonal reflects red light out through a red trim filter to the R sensor,
                            and green light passes straight through a green trim filter to the G
                            sensor.)
Figure 47                                                                The cross dichroic block


                                     Light enters the front of the block. One of the intersecting planes is a
                                     blue dichroic mirror, the other a red dichroic mirror. Blue light reflects off
                                     the blue dichroic mirror and out from the left side through a blue trim
                                     filter. Red light reflects off the red dichroic mirror, passing out of the
                                     right side through a red trim filter. The remaining light is green, and
                                     passes out of the back of the block through a green trim filter.




                   The cross dichroic block has become popular recently because of its
                   compact and simple design. However, it has one major drawback: the
                   intersection between the four prisms causes a small vertical line on the
                   outputs. Although the light is out of focus as it passes the intersection,
                   and careful manufacture can make the join as tight as possible, this
                   artefact is the main reason the cross block is not used in professional
                   and broadcast video cameras.


Optical requirements of a dichroic block
                   Every optical path, from the incoming window to each outgoing window,
                   must be identical in length. This is essential because the lens focuses
                   through the dichroic block and onto the surface of the sensors behind; if
                   one of the optical paths were different, that particular primary colour
                   would be out of focus.
                   The position of the sensors is also critical. All three sensors must be
                   mounted in exactly the same place relative to their own window. If there
                   is any error, that primary colour image will be in a different position from
                   the other two, and recombining the three primary images on the monitor
                   will be practically impossible.


Variation on a theme
                   Most dichroic block designs are now tending to look similar to the two
                   designs mentioned above. Some designs vary slightly.
                   Some designs swap the position of the red and blue dichroic mirrors.
                   Some designs have slight variations in the angles of the prisms and the
                   paths each primary colour will take.
                   Most designs have blue and red dichroic mirrors. Both of these mirrors
                   are relatively easy to make, because each has only one cut-off
                   wavelength. Green dichroic mirrors are more difficult to make because
                   they have two cut-off wavelengths designers have to worry about.
                   Some blocks have a fourth or fifth outgoing window. This may be used for
                   a monochrome viewfinder output for the camera operator, or for some of
                   the camera’s internal functionality, like auto focusing or metering.
                   Some specialist video cameras do not have standard primary colours at
                   the outgoing windows. Security cameras may use infra-red for night
                   vision. Video cameras used in food processing and monitoring also use
                   infra-red to check the quality of food. These cameras may have one
                   window in the dichroic block dedicated to infra-red.


Using dichroic blocks in projectors
                   The increased popularity of low cost video projectors has led to an
                   explosion in the need for cheap, compact dichroic blocks. Quality is not
                   so much of an issue with projectors, and the cross dichroic block is
                   therefore very popular.
                   Dichroic blocks are used the opposite way round to the way they are
                   used in video cameras.
                   Simple filters split light from the lamp into three primary beams. These



                 three beams are passed into the dichroic block through light valves,
                 where the sensors would be in a video camera. The light valves build up
                 an image for each primary by switching the light on or off at each pixel.
                 The dichroic block then combines the three primary images into one
                 colour image that is projected out to the screen.






Part 14                                                             CCD sensors
Advantages of CCD image sensors
                   When looking at the advantages of CCD image sensors, you have to
                   realise what alternatives there are and what was used before these
                   devices became available.
                   Before CCD image sensors became popular, video and television
                   cameras used some form of tube sensor. Plumbicon tubes were very
                   popular for a while.
                   Bearing these devices in mind, let us consider the advantages of CCD
                   image sensors.

        Compact design
                   The first and most obvious advantage of CCD image sensors is that
                   they are considerably smaller than tube sensors. They allow very
                   compact cameras to be made, which can be used in discreet surveillance
                   and remote investigation in dangerous or confined places.

        Light design
                   CCD image sensors are considerably lighter than tube sensors. They
                   can weigh only a few ounces. This allows them to be designed into
                   portable cameras without increasing the overall weight of the camera by
                   any undue amount.

        High shock resistance
                   CCD image sensors have no moving parts, and their lightweight
                   mechanical construction is highly resistant to damage from acceleration
                   and deceleration.

        Low power consumption
                   CCD image sensors use a lot less power than older tube sensors. This
                   makes them suitable for any battery powered device.

        Good linearity
                   Linearity is important in measuring light levels accurately. Linearity
                   means that the output signal is proportional to the number of photons of
                   light entering the device.
                   Film and tube sensors are highly non-linear, partly because of their low
                   dynamic range. They give no output at all if the light level (number of
                   photons) is too low, and saturate if the light level is too high, giving no
                   further output as the light intensity increases further.
                   CCD image sensors have good dynamic range, and good linearity over
                   this range.





            Good dynamic range
                     CCD image sensors saturate in the same way as any light sensor, but
                     the light intensity required to saturate these devices is generally much
                     higher.
                     CCD image sensors have no effective minimum; some specialised
                     devices can measure in near-total darkness.
                     Typical photographic film has a dynamic range of about 100:1. CCD
                     image sensors achieve about 10,000:1.

            High QE (quantum efficiency)
                     QE is the ratio of the number of photons of light detected to the number
                     of photons that enter the device.
                     Photographic film has a QE of about 5% to 20%. CCD image sensors
                     have a QE of between 50% and 90%. This makes them very efficient
                     and thus very useful for dark environment monitoring and studies of
                     deep space.
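                     The round figures above can be put side by side in a short sketch. All the
                     numbers are the illustrative ones quoted in the text, not measurements of
                     any particular device:

```python
# Illustrative comparison of film vs CCD, using the round figures from
# the text (dynamic range ~100:1 vs ~10,000:1, QE 5-20% vs 50-90%).
import math

def dynamic_range_stops(ratio):
    """Dynamic range expressed in photographic stops (doublings)."""
    return math.log2(ratio)

film_dr, ccd_dr = 100, 10_000
print(f"Film : {film_dr}:1    = {dynamic_range_stops(film_dr):.1f} stops")
print(f"CCD  : {ccd_dr}:1 = {dynamic_range_stops(ccd_dr):.1f} stops")

def detected_photons(incident, qe):
    """QE is the fraction of incident photons actually detected."""
    return incident * qe

# For 1000 incident photons:
print(detected_photons(1000, 0.10))  # film at 10% QE -> 100.0
print(detected_photons(1000, 0.70))  # CCD at 70% QE  -> 700.0
```

                     Expressed in stops, the 100-fold dynamic range advantage of the CCD is
                     a little over six extra doublings of usable light level.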

            Low noise
                     In fact CCD image sensors can suffer from thermal noise. However, this
                     noise is predictable and can be controlled or reduced.
                     Cooling the image sensor using conventional cooling fins and a fan can
                     keep thermal noise to very low levels.
                     For specialist scientific imaging, Peltier-effect heat pumps and cooling by
                     liquid nitrogen can reduce thermal noise to virtually zero.


  The basics of a CCD
                     A charge coupled device (CCD) is sometimes referred to as a bucket
                     brigade line. It consists of a series of cells. Each cell can store an
                     electric charge. The charge can then be transferred from one cell to the
                     next.

            The line of buckets
                     A very good way of thinking of a CCD is to imagine a line of buckets. At
                     one end is a set of digital scales where you can measure the amount of
                     water you pour into the first bucket.




Figure 48                                                                Single line of buckets



                     Each time you pour the water from the digital scales into the first bucket,
                     you transfer the water in every other bucket into the next bucket



                       down the line. The water from the last bucket will be poured into the
                       digital scales at the other end, and measured.
                       Of course it would be impossible to move the water in every bucket at
                       exactly the same time. You would probably need a second set of buckets
                       to hold the water while you were transferring it.




  Figure 49                                                               The double line of buckets




              The electronic reality
                       CCDs use a line of metal oxide semiconductor (MOS) elements,
                       constructed on the same chip. Each element contains 2, 3 or 4
                       polysilicon regions sitting on top of a thin layer of silicon dioxide.
                       Polysilicon can be used as a charge holder or a conductor. Although it is
                       not as good a conductor as metals like copper or aluminium, it is
                       easy to fabricate and is transparent, which is useful when CCDs are
                       used in cameras.
                       Silicon dioxide (glass) is a good insulator.
                       These elements are fabricated on a p type doped silicon substrate.
                       At each end of this line is a region of n type doped silicon.
                       Connections are made to all the polysilicon regions and to the two n type
                       doped regions.


  Using the CCD as a delay line
                       CCDs have been very popular as semiconductor delay lines. They
                       were used in many electronic designs before semiconductor memory
                       became cheap and dense enough to be used instead.




Figure 50                                                                     The CCD delay line


                       CCD delay lines are essentially analogue. That is to say the charge they
                       carry is an analogue quantity. If a CCD delay line is to be used in a


                 digital environment there must be a digital to analogue converter fitted to
                 the input and an analogue to digital converter fitted to the output.
                 The transfer of charge is, however, discrete: the CCD has a clock
                 input which is used to transfer the charge from one MOS element to the
                 next in the line.

       How does the CCD delay line work?
                 The input signal is fed into the first polysilicon region. Using field effect
                 principles electrons are pulled from the n type doped region and collect
                 under the insulation layer.
                 The potential on the first region creates a potential ‘well’ that the
                 electrons effectively fall into.
                 Although there is a maximum charge that can be held in this potential
                 well, the amount of charge is proportional to the amount of time and the
                 potential applied to the first region.
                 The potentials on the first and second regions are then switched. This
                 effectively moves the potential well from just underneath the first region
                 to just underneath the second region. The first region becomes a
                 potential barrier, and the charge is attracted to the second region.
                 The potentials on the second and third regions are then switched, and the
                 charge is attracted to the third region.
                 By switching the potential from one region to the next the charge can be
                 transferred from one region to the next, sitting just underneath the
                 insulation layer.
                 This leaves the first region clear, and the next charge packet can be
                 input to the line.
                 When the charge reaches the last region it transfers to the n type doped
                 region at the other end of the line and appears as an output signal.
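                 The whole delay line behaves like a clocked shift register. The sketch
                 below is a toy model (the class name and sample values are invented for
                 illustration): each clock pulse inputs one analogue sample and shifts
                 every stored packet one element along, so a sample emerges after as
                 many clocks as there are elements.

```python
from collections import deque

# A toy model of a CCD delay line. The charge values are analogue;
# only the clocking is discrete.
class CCDDelayLine:
    def __init__(self, n_elements):
        self.cells = deque([0.0] * n_elements, maxlen=n_elements)

    def clock(self, sample):
        """Input one charge packet; output the packet from the far end."""
        out = self.cells[-1]
        self.cells.pop()               # last cell empties into the output
        self.cells.appendleft(sample)  # new packet enters the first cell
        return out

line = CCDDelayLine(4)
signal = [1.0, 0.5, 0.25, 0.0, 0.0, 0.0, 0.0]
delayed = [line.clock(s) for s in signal]
print(delayed)  # [0.0, 0.0, 0.0, 0.0, 1.0, 0.5, 0.25]
```

                 The signal emerges intact, delayed by four clock periods, one per
                 element in the line.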

            2 region elements
                 CCD delay lines with 2 polysilicon regions per MOS element use the
                 second region in each element in the same way you might use the spare
                 buckets in the line of buckets.
                 The charge is transferred to the second region before being transferred
                 to the first region of the next element.
                 The disadvantage of the 2 region element is that the charge could flow
                 the wrong way. 2 region elements employ special gates in the element
                 and use a stepping transfer voltage to ensure the charge flows
                 correctly. This all adds to the complexity and cost of this type of CCD.
                 However, 2 region elements offer higher density than 3 or 4 region
                 elements.








Figure 51                                                  The 2 region element CCD delay line


             3 region elements
                   CCD delay lines with 3 polysilicon regions per MOS element are able to
                   ensure that the charge flows in the correct direction from one element to
                   the next.




Figure 52                                                  The 3 region element CCD delay line


                   The charge is pulled from the left region to the centre region, then from
                   the centre region to the right region.


                     However, clock phasing is more complex than in both the 2 and 4 region
                     designs.

                4 region elements
                     4 region elements have simpler clocking signal arrangements than 3
                     region designs and have better charge transfer capabilities, but it is
                     more difficult to achieve high density devices.




Figure 53                                               The 4 region element CCD delay line







 Using CCDs as image sensors
            The basic principles
                     Metal oxide semiconductors are sensitive to light. If light enters the
                     substrate of a MOS device, under certain conditions it excites electrons
                     in the silicon into the conduction band. Put simply, electrons are shaken
                     loose by light.
                     The elements used in image sensors are similar to those used on CCD
                     delay lines. They can be 2, 3 or 4 region elements.

            Sensing light
                     When used as an image sensor, a positive voltage is applied to the first
                     polysilicon region in each element. This develops a small potential well
                     just under the insulation layer.

                Step A – Exposure
                     As light penetrates the p type substrate of the CCD it shakes electrons
                     loose. The loose electrons in the vicinity of the potential well fall in and
                     are trapped, forming a small collected charge.
                     The stronger the light level falling on that element, or the longer the time
                     allowed, the greater the number of loose electrons, and the greater the
                     stored charge.




Figure 54                                                  The CCD delay line as an image sensor


                Step B, C & D – Transfer
                     When the CCD sensor has been exposed to the image for the required
                     time, the charges stored under each element have to be transferred to the
                     end of the row where they can be sensed and output.
                     In Step B the potential of region 2 is raised. Now the potential well
                     extends over two regions and the charge spreads to fill the space.




                 In Step C the potential of region 1 is lowered and region 3 is raised. The
                 potential well now occupies regions 2 and 3. The charge is pulled across
                 so that it sits under regions 2 and 3.
                 In Step D the potential of region 2 is lowered and the potential of region
                 1 is raised. The potential well now occupies region 3 and region 1 of the
                 next element. The charge is pulled across so that it sits under these two
                 regions.
                 Steps B, C and D are repeated until the whole row has been transferred,
                 element by element, to the output gate at the end of the row.
                 When this has been done and the whole row is empty of charge,
                 exposure can begin again.

       The arrangement of MOS elements
                 CCDs used as image sensors comprise a matrix of MOS elements. The
                 elements are laid out in columns. Each column is similar to a CCD delay
                 line, and there are many columns in the matrix.
                 The number of elements in each column and the number of columns
                 define the overall resolution of the device. Each element corresponds
                 to a single captured point from the image, otherwise called a picture
                 element or pixel.
                 A system of channel stops is used to guard one column
                 from the next. These prevent charge from one column leaking into the
                 next.

       Reading columns
                 Steps B, C and D above explain how each column is read. This would
                 imply that a CCD sensor would need an output gate at the end of each
                 column.
                 In fact CCD sensors have just one output. Therefore another CCD line is
                 placed at the end of the columns, perpendicular to them all. This line is
                 called a read-out register. The charge from the element at the end of
                 each column is transferred to the elements in the read-out register.
                 Column clocking now stops, and read-out register clocking transfers the
                 charges to the sense and output gate.
                 When the read-out register is empty, column clocking can start again
                 and clock the next charge from the columns into the read-out register.
                 This procedure carries on until the last charge in the columns has been
                 clocked into the read-out register and from there to the output.
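                 The column-plus-register procedure can be sketched as follows. This is a
                 toy model of a hypothetical 3 by 4 element sensor, not the read-out
                 timing of any real device:

```python
# A toy model of the column / read-out register arrangement.
def read_sensor(matrix):
    """matrix[row][col]: charges, row 0 nearest the read-out register."""
    rows = len(matrix)
    cols = len(matrix[0])
    output = []
    for r in range(rows):
        # Column clocking: the end element of every column transfers
        # into the read-out register in parallel.
        readout_register = list(matrix[r])
        # Column clocking stops; the register is clocked out pixel by
        # pixel to the single sense and output gate.
        for c in range(cols):
            output.append(readout_register[c])
    return output

charges = [[11, 12, 13, 14],
           [21, 22, 23, 24],
           [31, 32, 33, 34]]
print(read_sensor(charges))
# [11, 12, 13, 14, 21, 22, 23, 24, 31, 32, 33, 34]
```

                 The output order, line by line through a single gate, is exactly a raster
                 scan.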

       Similarity to raster scans.
                 This method of reading one pixel from each column into the read-out
                 register, then transferring them one by one to the output, is similar to a
                 conventional television raster scan.
                 CCD sensors therefore lend themselves very well as conventional
                 television camera sensors.








   Figure 55             Arrangement of MOS elements







 Back lit sensors
                    Many sensors are now back lit. Rather than allowing light to enter the
                    front of the sensor, passing through the region gates and the insulation
                    layer, the whole sensor is turned over and light passes directly into the
                    substrate from below.




Figure 56                                                                      Back lit sensors




            Substrate thickness
                    However, the substrate is conventionally thick. This makes production
                    easier: manufacturers only work on the top surface, so the thickness of
                    the sensor's chip is irrelevant and therefore one less thing they have to
                    worry about.
                    Thick substrates also make for a more robust sensor.
                    The problem with thick substrates is twofold. Firstly, the electrons
                    loosened by the light are a long distance from the potential wells created
                    by the region gates. Secondly, there is a risk that electrons loosened by
                    the light will not fall into the correct potential well.

                Back thinning
                    Back lit sensors now tend to be only about 15 µm thick. This makes sure
                    that the area of substrate where electrons are loosened by incoming
                    light is close to the potential wells. There is a greater chance that all
                    the electrons will be caught, and that the electrons will fall into the
                    correct potential well.
                    The gate side of back thinned CCD optical sensors tends to be mounted
                    on a rigid surface to make the whole device more robust. This surface is
                    often reflective, to make the sensor more efficient by driving any light that
                    leaks out of the back into the substrate.




Problems with CCD image sensors
                   CCD image sensors are not perfect. They can suffer from manufacturing
                   defects and operational anomalies. A few of these are listed here.

        Shorts
                   This is a manufacturing defect. Shorts can occur where the silicon dioxide
                   insulation breaks down or where any layer in the MOS elements has not
                   been built properly.
                   Shorts result in the improper collection of charge, or charge loss. If the
                   collection of charge is damaged, individual pixels may be lost. If there is
                   charge leakage, there may be line smearing, as the charge from each
                   pixel loses some of its value through the short as it is transferred down
                   the line to the output.

        Traps
                   A trap is a manufacturing defect where charge is not able to transfer
                   successfully.

        Thermal noise and dark current
                   As previously mentioned, CCD image sensors have very low noise
                   characteristics if they are kept cool. However, if their temperature rises,
                   thermal noise rises correspondingly.
                   This can give rise to a number of other problems, but overall will affect the
                   quality of the image capture process.
                   Electrons freed by thermal activity are attracted to the potential well
                   under each pixel. Thus charge develops even if there is no light falling
                   on the sensor. This gives rise to the term dark current.

        CTE (charge transfer efficiency)
                   CCD sensors must transfer the charge from one element to the next in
                   the line as efficiently as possible.
                   Imagine a CCD image sensor with 1024 by 1024 pixels. Charge from the
                   far end of the furthest column will be transferred 2048 times (1024 down
                   the column and 1024 along the read-out register) before it reaches the
                   sense and output gate.
                   If the device had a CTE of only 90%, the charge would drop to about
                   10^-94 of its original value (0.9 raised to the power 2048)! This is clearly
                   not a good thing.
                   CCD image sensors generally have CTEs better than 99.999%. With a
                   1024 by 1024 sensor this still means that the charge from the furthest
                   pixel has dropped by about 2% (0.99999 raised to the power 2048 is
                   roughly 0.98). While this is vastly better than the 90% case, it is still a
                   problem in accurate light measurement situations.
                   As sensors increase in resolution so CTE ratings must be kept as close
                   to 100% as possible.
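                   The arithmetic is easy to check: a charge making N transfers survives
                   with fraction CTE to the power N. A quick sketch, using the 1024 by 1024
                   worked example:

```python
# Checking the CTE arithmetic for a 1024 x 1024 sensor: the furthest
# charge makes 1024 transfers down its column plus 1024 along the
# read-out register, 2048 in total.
transfers = 1024 + 1024

poor = 0.90 ** transfers        # 90% CTE
good = 0.99999 ** transfers     # 99.999% CTE

print(f"90%     CTE: {poor:.1e} of the charge survives")   # ~1.9e-94
print(f"99.999% CTE: {good:.4f} of the charge survives")   # ~0.9797
print(f"loss at 99.999% CTE: {(1 - good) * 100:.1f}%")     # ~2.0%
```

                   Even five-nines CTE loses about 2% of the furthest pixel's charge,
                   which is why larger sensors need CTE ever closer to 100%.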





       Chroma filtering and bad QE from front lit devices
                 Light passing into the sensor substrate passes through the region gates
                 and insulation layer.

            Using polysilicon regions
                 Making the regions from polysilicon rather than from aluminium allows
                 light to pass through them. All front lit sensors use polysilicon region
                 gates.

            Filtering effects
                 Light passing through the regions, even if they are made from
                 polysilicon, and through the insulation layer, is filtered. The filtering is
                 not spectrally uniform: light at the blue end of the spectrum is attenuated
                 more than at the red end.

            Bad QE
                 Filtering effects not only make the sensor's characteristics non-linear,
                 but they also reduce its QE. This makes front lit devices less effective
                 where accurate light measurement is required of camera sensors.


CCD image sensors with stores
                 The problem with the designs mentioned so far is that, after the sensor
                 has been exposed and all the pixels charged, you have to wait for the
                 charges to be read out. This takes a while.

       FT sensors
                 In the FT (frame transfer) sensor design each column is twice as long.
                 Half of each column is exposed; the other half acts as a temporary
                 store, and is covered by an aluminium mask.
                 After the sensor has been exposed to the image, the charges are
                 transferred quickly into the temporary store. The sensor can then start
                 exposing the next frame while the frame that was just exposed is output
                 through the read-out register.
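                 The benefit of the store is that slow read-out overlaps with the next
                 exposure. A toy timeline (the frame numbering and wording are invented
                 for illustration) makes the ordering clear:

```python
# A toy timeline of FT sensor operation: after each fast transfer into
# the masked store, exposure of the next frame overlaps with the slow
# read-out of the stored frame.
def ft_timeline(n_frames):
    events = []
    for n in range(1, n_frames + 1):
        events.append(f"transfer frame {n} to store (fast)")
        if n < n_frames:
            # These two activities happen concurrently.
            events.append(f"expose frame {n + 1} WHILE reading out frame {n}")
        else:
            events.append(f"read out frame {n}")
    return events

for event in ft_timeline(3):
    print(event)
```

                 Without the store, every exposure would have to wait for the previous
                 frame to be clocked out in full.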








Figure 57                                                                            FT sensor


            IT sensors
                    In the IT (interline transfer) sensor design all the pixel charges are
                    transferred into read-out gates, and from there into separate column
                    CCD lines. These are called vertical read-out registers. These registers
                    are masked.
                    The charges can be transferred into the vertical read-out registers very
                    quickly, leaving the sensor free to start exposure again.
                    The vertical read-out registers can then be transferred into the horizontal
                    read-out register in the normal way.






            Overflow gate technology and shuttering
                    With the introduction of IT sensors came the introduction of an overflow
                    gate. This gate is placed on the opposite side of each sensor gate from
                    the vertical read-out register. It can be used in a number of ways.




Figure 58                                                                          IT sensor





             Using the overflow gate to eliminate flare and burnout
                   When the overflow gate is closed it will not draw any charge away from
                   the sensor gate. After exposure all this charge can be drawn away by
                   the vertical register.
                   However, if light levels get too high the sensor gate will become flooded.
                   Any charge above a certain level cannot be drawn away by the
                   vertical register, and the device will peak, causing ‘burnout’ in the image.
                   Furthermore, the excess charge will leak out of the affected gate and into
                   the surrounding gates, spreading the perceived brightness beyond the
                   actual bright area.
                   Therefore the overflow gate is never actually closed altogether. In fact, in
                   its ‘off’ mode it will still draw charge away from the sensor gate, but only if
                   the amount of charge becomes excessive. This prevents the gate from
                   peaking and stops the flood of charge leaking into any other gates.
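                   The effect can be sketched numerically. Below is a toy one-dimensional
                   model (the full-well capacity and the 50/50 spill into neighbours are
                   invented for illustration): with the overflow drain active the excess
                   charge simply disappears; without it, the bright pixel blooms into its
                   neighbours.

```python
# A toy anti-blooming model. Charge above the full-well capacity is
# either swept into the overflow drain, or (without a drain) spills
# into the neighbouring gates.
FULL_WELL = 100

def expose(intensities, overflow_drain):
    wells = [0] * len(intensities)
    for i, photons in enumerate(intensities):
        wells[i] += photons
        excess = max(0, wells[i] - FULL_WELL)
        wells[i] -= excess
        if not overflow_drain:
            # Blooming: spilled charge leaks into the neighbours.
            if i > 0:
                wells[i - 1] = min(FULL_WELL, wells[i - 1] + excess // 2)
            if i < len(wells) - 1:
                wells[i + 1] += excess // 2
        # With the drain, the excess is swept harmlessly away.
    return wells

scene = [10, 20, 500, 20, 10]   # one very bright point
print(expose(scene, overflow_drain=True))   # [10, 20, 100, 20, 10]
print(expose(scene, overflow_drain=False))  # the bright area spreads
```

                   With the drain, the bright pixel simply clips at the full-well level; without
                   it, the surrounding pixels are dragged up towards saturation.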

             Using the overflow gate for shutter opening and iris control
                   If the overflow gate is opened any charge building up underneath the
                   sensor gate as a result of light exposure will be immediately drawn away
                   into the overflow gate. This effectively switches the device off in the
                   same way as a mechanical shutter would do.
                   This can make the sensor behave a little like a movie camera with a
                   variable shutter wheel. It is also used in applications like CCTV as an
                   electronic iris, assisting the mechanical iris in the lens itself.

             Using the vertical register and overflow gate for shutter closing
                   To simulate the shutter closing the accumulated charge under the
                   sensor gate can be drawn into the vertical register. At the same time the
                   overflow gate is opened, so that any further charge is drawn away from
                   the sensor gate.

        FIT sensors
                   One problem with IT designs is that the vertical read-out registers are
                   very close to the exposed regions of the device. If light levels are very
                   strong it is possible for charge to leak from the exposed regions of the
                   sensor into the vertical register regions.
                   In the FIT (frame interline transfer) sensor design both FT and IT design
                   philosophies are used. The charges built up in the exposed areas are
                   transferred quickly to the vertical read-out registers, and then into an FT
                   type store. This takes them away from the vertical read-out registers so
                   that they cannot be corrupted if light levels are very high.







  HAD technology




Figure 59                                     The HAD sensor





                   Sony introduced a new technology for image sensors in 1984. This
                   technology was a real departure from the conventional designs up to
                   that date.
                   Rather than using the photo-excitation technology of older designs this
                   new design uses an embedded photo diode for each pixel. The photo
                   diode has a heavily doped p type region called a p++ region. p++ doped
                   regions have a high number of accumulated holes. Hence the name
                   Hole Accumulated Diode or simply HAD.
                   The HAD increases the number of electrons that are released as light
                   enters the device. These electrons flow down into the device’s substrate
                   and collect underneath the HAD.
                   HAD sensors also have the advantage that light does not have to pass
                   through polysilicon regions to reach the diode. This makes the sensor
                   more efficient and linear.

        HAD sensor operation
                   HAD sensors comprise an array of HADs. An insulation layer of silicon
                   dioxide is laid on top of the HADs and a thin aluminium photo mask is
                   printed on top of the insulation. The photo mask prevents light from
                   getting into the sensor except where there is a HAD.
                   Light passes into the HAD and excites electron flow down into the n type
                   substrate. At a certain time, when the image is sampled, the voltage on
                   the polysilicon gate, to the right of the HAD, switches, creating a
                   potential well that attracts the accumulated charge away from
                   underneath the HAD.
                   The polysilicon region is part of a chain of polysilicon regions that form a
                   vertical register to transfer the accumulated charges out of the device.
                   Charge is therefore transferred out of the device in the same way as any
                   other IT or FIT device.
                   A region of p++ material is fabricated deeper into the device just to the
                   left of each HAD. This acts as a channel stop, preventing charge from
                   leaking from underneath each HAD into the vertical register to the left.

        Problems with HAD devices
                   The first problem with HAD sensors is the increased manufacturing
                   complexity. However, in commercial terms this increased complexity and
                   its resulting higher cost is more than offset by the increase in
                   performance. Manufacturing techniques have also improved
                   considerably over the years making it easier to produce HAD devices
                   reliably.
                   The second problem with HAD devices is the same problem facing any
                   IT or FIT device. Ideally the whole of the front of the device should be
                   light sensitive so that light hitting anywhere on the surface of the device
                   is picked up and output as a signal. The amount of space taken up by
                   the vertical registers, channel stops, polysilicon regions, etc. detracts
                   from this light sensitive area. This problem is partially overcome in later
                   designs.





  HyperHAD
                   HyperHAD, sometimes called microlenticular technology, improves on
                   the simple HAD device by fitting a small lens in front of each HAD. This
                   channels in light from the area around the actual gate that would
                   otherwise be lost, effectively extending each HAD's light-gathering area
                   beyond the HAD itself.
                   HyperHAD sensors were introduced in 1989 and increased the
                   sensitivity and efficiency of HAD sensors.




Figure 60                                                           The HyperHAD sensor


  SuperHAD sensors
                   SuperHAD was introduced in 1997. It is basically similar to the
                   HyperHAD design, but the actual lenses are larger, and are therefore
                   able to capture more light, making SuperHAD sensors more sensitive
                   than HyperHAD sensors.


  PowerHAD sensors
                   PowerHAD is a marginal improvement on SuperHAD. The microlens
                   structure is similar to that of SuperHAD but the capacitance of the
                   vertical registers is reduced.








Figure 61                                                  The SuperHAD & PowerHAD sensor


 PowerHAD EX (Eagle) sensors
                    Previously simply called New Structure CCD and now sometimes called
                    Eagle sensors, PowerHAD EX sensors have another lens placed
                    between the on-chip microlens and the HAD. The microlenses are also
                    larger. In fact they are so large that they overlap, leaving no area on the
                    device from which light is not somehow concentrated into a HAD. This
                    concentrates light capture still further, increasing the efficiency of the
                    sensor.




Figure 62                                                                    The Eagle sensor





                   The insulation layer between the polysilicon gate and the potential well
                   in the substrate underneath is also thinner. This decreases the gate’s
                   capacitance and increases the ‘depth’ of the well, making it better able to
                   collect the HAD’s accumulated charge.




Figure 63                                                                Lenticular designs




  EX View HAD sensors




Figure 64                                                            EX View HAD response


                   EX View HAD sensors are physically the same as any other HAD based
                   sensor. However the exact doping levels and construction of the HAD
                   make it more sensitive to infra-red light. This makes EX View devices
                   very appropriate for security and low light level cameras.


  Single chip CCD designs
                   Professional and broadcast camera systems normally process the image
                   they are looking at as three primary colours. (See Colour Perception.)
                   This is important for good colour matching, and is essential if the
                   camera’s outputs are to comply with broadcast signal standards. (See
                   Colour in Television.)




                   The split from the original image to three images, one for each primary
                   colour, could be done in the camera’s electronics. However it is better to
                   do the split optically. Therefore all professional and broadcast cameras
                   have a three way dichroic splitter just behind the lens, and three CCD
                   sensors, one on each output from the dichroic splitter. Each sensor is
                   responsible for one of the primary colours. (See Dichroic Block Design.)




 Figure 65                                                                 Single chip HAD design


                   However this is either too expensive or simply not possible for smaller
                   cameras and cameras intended for industrial and domestic use.
                   Small security cameras simply do not have enough space for a dichroic
                   block and three CCD sensors. The cost of the dichroic block and three
                   CCD sensors would make domestic cameras simply too expensive. In
                   any case the increase in quality would almost certainly not be
                   appreciated.
                   Therefore all these types of cameras have one CCD sensor. The split
                   from the original image to its primaries is still required and is still best
                   done optically. Therefore single chip CCD cameras have a filter screen
                   fitted over the sensor.

        CCD filter screens
                   CCD filter screens consist of an array of small coloured squares. The
                   resolution of the CCD sensor and the filter squares is the same. Thus
                   each pixel in the sensor has one filter square.
                The random filter screen
                   The human eye is very good at recognising patterns. Thus it may seem
                   a good idea to design a filter screen with a random design of squares
                   of the three primary colours.







                     However there is a chance that there will be discernible areas of one
                     colour.




   Figure 66                                                                The random screen


                The Bayer filter screen
                     A popular screen design is the Bayer screen. This screen has a greater
                     number of green squares, because of the human eye's relatively high
                     sensitivity to green areas of the colour spectrum.




Figure 67                                                                 The Bayer screen


                     The Bayer screen is very popular in single CCD cameras.
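The 2×2 repeating tile and its 2:1:1 green:red:blue ratio can be sketched in code. This is an illustrative sketch only; the GRBG tile phase used here is an assumption, since the actual alignment of the screen over the sensor is fixed by the manufacturer.

```python
# Build a Bayer colour filter array as a grid of 'R', 'G', 'B' labels by
# tiling a 2x2 cell. Half the squares come out green, a quarter red, a
# quarter blue, matching the eye's higher sensitivity to green.

def bayer_pattern(rows, cols):
    """Return a rows x cols grid using an assumed GRBG-style 2x2 Bayer tile."""
    tile = [['G', 'R'],
            ['B', 'G']]
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

pattern = bayer_pattern(4, 4)
counts = {ch: sum(row.count(ch) for row in pattern) for ch in 'RGB'}
# counts shows the 2:1:1 green:red:blue ratio for any even-sized grid.
```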

                The pseudo random Bayer screen
                     The problem with the Bayer screen is that there is a strong pattern. It
                     may be possible, at certain low resolutions, for the human eye to pick
                     out this pattern.








 Figure 68                                                        The pseudo random Bayer screen


                   Thus by jumbling the Bayer pattern in a particular way it is possible to
                   retain the same ratio of the three primary colours as the basic Bayer
                   pattern, but with no easily definable pattern. This design also makes
                   sure that there are no large areas of one colour by ensuring that there
                   are no squares of the same colour next to each other.
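The two constraints described here (keeping the Bayer 2:1:1 colour ratio while forbidding same-colour neighbours) can be checked in code. The example grid below is a hypothetical layout satisfying both, not an actual Sony screen design.

```python
# Check that no two horizontally or vertically adjacent squares in a
# filter screen share the same colour.

def no_adjacent_duplicates(grid):
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and grid[r][c] == grid[r][c + 1]:
                return False
            if r + 1 < rows and grid[r][c] == grid[r + 1][c]:
                return False
    return True

# Hypothetical screen: 8 green, 4 red, 4 blue squares (the Bayer ratio),
# with no same-colour neighbours.
screen = [['G', 'R', 'G', 'B'],
          ['B', 'G', 'R', 'G'],
          ['G', 'R', 'G', 'B'],
          ['B', 'G', 'R', 'G']]
valid = no_adjacent_duplicates(screen)
```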

        Reading out from a single CCD camera
                   Filter screens need to be very accurately fitted to the sensor. If the filter
                   position is defined and fitted exactly, it is possible to define which pixel is
                   responsible for which primary colour.
                   Reading single CCDs then becomes reasonably straightforward, if a
                   little more complex than with triple CCD designs.




Figure 69                                                                       Pixel interpolation


                   The camera will read each pixel out in sequence in the same way.
                   However the read-out circuitry knows which pixel is responsible for each


                 primary colour, and sequentially demultiplexes all the pixels for each
                 primary to a different part of the electronics for further processing.

            Pixel interpolation
                 With a Bayer filter screen half the pixels are responsible for the green
                 primary, a quarter for red and a quarter for blue. This is illustrated in
                 the pixel interpolation figure, where each pixel shows the brightness of
                 one primary colour at that point in the picture.
                 Some single CCD cameras rebuild a full image of pixels for each primary
                 by interpolating the pixels they have to make up the missing pixels.
                 Putting these three separate images back together gives a much more
                 pleasing result.
                 A newer approach is to follow the interpolation process by a small
                 amount of sharpening to improve the perceived quality of the image. (If
                 you squint at the three images the sharpened one looks better!)
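A minimal sketch of this interpolation for one primary, simply averaging whatever known neighbours a missing pixel has; real cameras use considerably more sophisticated demosaicing, often followed by sharpening as described above.

```python
# Fill in missing pixels for one primary plane by averaging the known
# neighbouring pixels of that primary (a crude bilinear-style estimate).

def interpolate_plane(known, rows, cols):
    """known maps (row, col) -> value for pixels of one primary; fill the rest."""
    full = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if (r, c) in known:
                full[r][c] = float(known[(r, c)])
            else:
                neighbours = [known[(r + dr, c + dc)]
                              for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                              if (r + dr, c + dc) in known]
                full[r][c] = sum(neighbours) / len(neighbours) if neighbours else 0.0
    return full

# Red pixels from one 2x2 Bayer tile: only one of the four positions is known.
red = interpolate_plane({(0, 1): 100}, 2, 2)
```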


Noise reduction
                 Noise is a problem in any image sensor. The first area where noise is
                 introduced is in the pixel itself, where thermal noise is an enduring
                 problem (see page 110). The only way of eliminating this kind of noise is
                 to cool the sensor to absolute zero to eliminate thermal electron
                 movement. This is not practical, and the sensor would fail to work at all
                 anyway! It is really down to good image sensor design to accept that
                 thermal noise will exist and to reduce its effect.




                 [Diagram: charge from the CCD array passes through a sense switch
                 onto a sensing capacitor; an auto-zero switch can drain the capacitor
                 to 0 V, and an output buffer drives the output signal.]




Figure 70                                                                                            Auto zeroing

                 Another area where noise can be introduced is during the charge
                 transfer period, where the charge collected under each pixel is
                 transferred to the output.



                      The last area where noise can be a problem is in the output gate itself,
                      where the charge is placed into a capacitor and measured as a voltage.
                      This capacitor needs to be carefully and quickly drained of any charge
                      from a previous pixel, or from anywhere else, before the pixel charge is
                      put into it.

            Auto zeroing
                      Auto zeroing is the traditional way of cancelling noise in sensing
                      comparators, analogue to digital convertors, and image sensors. It aims
                      to pull the capacitor charge down to zero just before the pixel charge is
                      put in. This is done by building a switch into the circuit just before the
                      capacitor. The clocking circuit closes this switch just before the pixel
                      charge is input.
                      Auto zeroing switches need to be very low impedance if they are to work
                      effectively.
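The sequence can be modelled as a toy sketch: drain the capacitor through the switch, then deposit the pixel charge, so no residue from the previous pixel corrupts the reading. The class and the charge values below are purely illustrative.

```python
# Toy model of auto zeroing: the sensing capacitor may hold residual
# charge from the previous pixel; a low impedance switch drains it to
# zero just before each new pixel charge is deposited.

class SenseCapacitor:
    def __init__(self):
        self.charge = 0.0

    def auto_zero(self):
        self.charge = 0.0      # switch closes, capacitor drained

    def deposit(self, pixel_charge):
        self.charge += pixel_charge
        return self.charge     # sensed output for this pixel

cap = SenseCapacitor()
outputs = []
for pixel in (0.8, 0.2, 0.5):
    cap.auto_zero()            # without this, residue would add to the reading
    outputs.append(cap.deposit(pixel))
```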




                      [Diagram: the CCD array output feeds a pixel sample & hold, clocked
                      for active pixels, and a reference sample & hold, clocked for
                      reference pixels; a differential amplifier subtracts one from the other
                      to give the output signal.]




Figure 71                                                              Correlated double sampling


            Double sampling
                      Double sampling (DS) is a method of eliminating the effects of noise in
                      the sense capacitor. Firstly the sense capacitor is zeroed and set to a
                      predetermined charge that is sensed as a reference voltage. Then the
                      capacitor is zeroed again and the pixel charge placed across the
                      capacitor.
                      Noise will be common to both samples. Therefore any difference
                      between the sensed reference voltage and the voltage this reference
                      should be is noise, and is subtracted from the pixel voltage.





              Correlated double sampling
                                 Correlated double sampling (CDS) eliminates some of the drawbacks of
                                 auto zeroing, and provides a simpler sampling method than DS.
                                 CDS places the operating range of the sensing capacitor away from
                                 zero, where sensing becomes noisy and difficult to do accurately.
                                 CDS operates by placing a predetermined charge on the sensing
                                 capacitor, and sensing this as a voltage, just as with DS. The pixel
                                 charge is then input to the capacitor, without zeroing, and the voltage
                                 sensed again. The difference between the two values is the pixel value
                                 itself.
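The arithmetic of CDS can be sketched as follows; the charge and noise values are purely illustrative, the point being that noise common to both samples cancels in the subtraction:

```python
# Sketch of correlated double sampling: sense the capacitor at a known
# reference level, then again after the pixel charge is added; the
# difference cancels noise common to both samples.

def cds(reference_sample, signal_sample):
    """Both samples contain the same (correlated) noise; subtracting removes it."""
    return signal_sample - reference_sample

noise = 0.07                     # noise present during both samples (illustrative)
reference = 1.0 + noise          # predetermined precharge level, sensed
signal = 1.0 + 0.35 + noise      # precharge plus the pixel charge, sensed
pixel = cds(reference, signal)   # recovers the pixel charge, noise cancelled
```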
                      [Diagram: a CCD array with rows of dummy pixels surrounding the
                      active pixels. The active pixel clock routes active pixels to a pixel
                      sample & hold, and the dummy pixel clock routes masked pixels to a
                      reference sample & hold; the two are subtracted to give the output
                      signal.]

Figure 72                                                                                                                    CDS with dummy pixels





            CDS and dummy pixels
                       One method of CDS involves sampling pixels outside the active region
                       of the CCD array. Many CCD sensors are designed such that there are
                       a few pixels on the edge of the array onto which the image is not
                       focused. These dummy pixels are masked and effectively give out a
                       signal corresponding to black.
                       The whole array is given a small reference precharge voltage. This pulls
                       the whole array away from zero and gives the masked pixels a specific
                       value. The output gate has a sample and hold circuit that samples the
                       masked pixels and holds their average charge as an output voltage.
                       The active pixels are routed to a separate sample and hold and
                       compared to the reference.
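A sketch of the dummy-pixel scheme, assuming digitised values: the average of the masked pixels serves as the black-level reference subtracted from each active pixel. All the numbers below are illustrative.

```python
# Dummy-pixel CDS sketch: masked pixels output the precharge plus noise
# but no light signal; their average is the reference subtracted from
# every active pixel reading.

def black_level_correct(active, dummy):
    reference = sum(dummy) / len(dummy)   # held by the reference sample & hold
    return [a - reference for a in active]

dummy_pixels = [0.21, 0.19, 0.20]         # masked pixels: precharge + noise only
active_pixels = [0.50, 0.90, 0.20]        # active pixels: precharge + noise + light
corrected = black_level_correct(active_pixels, dummy_pixels)
# corrected now approximates the light contribution alone.
```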

                       [Diagram: as Figure 72, but with two reference sample & hold circuits
                       in series after the dummy pixel clock; the output of the second
                       reference sample & hold is subtracted from the pixel sample & hold
                       to give the output signal.]
Figure 73                                                CDS with dummy pixels and triple sample & hold


                 Triple sample and hold
                      A further improvement on the double sample and hold design is to add
                      another sample and hold circuit directly after the reference sample and
                      hold circuit. This seemingly silly idea removes transition errors within the
                      sample and hold switches themselves, as shown in the figure above.
                      Sample and hold circuits 2 and 3 are switched for every active pixel.
                      Any switching transition errors in the active pixel circuit will be
                      eliminated by circuit 3.

            Correlated triple sampling
                      A further method of sampling the CCD array is called correlated triple
                      sampling (CTS). This method is not used very much. It improves the
                      noise cancelling effect of CDS by taking a third sample part way through
                      the pixel reset period. This third sample allows more information to be
                      gained about the noise pattern within the array.

            Fowler sampling
                      A natural conclusion of the progression from DS, CDS and CTS is to
                      take multiple samples. This is particularly useful for long exposures.
                      Multiple array samples are taken at the beginning and end of the
                      integration time. These are averaged to eliminate noise.
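The principle can be sketched numerically: averaging each group of samples reduces the read noise in the estimate of the accumulated signal. The signal level and noise figures below are purely illustrative.

```python
# Fowler sampling sketch: average several samples taken at the start and
# end of a long integration; the difference of the averages estimates the
# accumulated signal with the read noise averaged down.

import random

def fowler(start_samples, end_samples):
    start = sum(start_samples) / len(start_samples)
    end = sum(end_samples) / len(end_samples)
    return end - start

random.seed(1)                  # fixed seed so the sketch is repeatable
true_signal = 5.0               # charge accumulated over the integration
start = [0.0 + random.gauss(0, 0.1) for _ in range(8)]
end = [true_signal + random.gauss(0, 0.1) for _ in range(8)]
estimate = fowler(start, end)   # close to the true accumulated signal
```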





                 Fowler sampling is not appropriate for broadcast camera design as the
                 exposure time is relatively short. This kind of sampling is more
                 appropriate for imaging in low light situations, like space exploration.


The future of CCD sensors
                  CCD sensors, in one form or another, have gained universal dominance
                  within the video camera world. They have completely replaced the older
                  tube sensor designs. (See Early Image Sensors.)
                  However a newer technology has begun to emerge that promises to
                  replace conventional CCD sensor technology: CMOS. Although CMOS
                  technology has been around longer than CCD technology, it was difficult
                  to produce. CMOS is now a very important technology for
                  microprocessor design, digital signal processing chip design and for
                  many other types of chip design.
                  Production processes are now reliable and versatile enough for some
                  manufacturers to use CMOS for image sensors.
                  However the move from conventional CCD to CMOS technology will not
                  be the revolution that the move from tube to CCD was. CMOS
                  represents an evolution rather than a revolution.






Part 15                                        The video tape recorder
A short history
                   There are many different video tape formats. Some have been more
                    successful than others. Some have been technically superior to
                   others. There is often very little correlation between commercial success
                   and quality or technical excellence.

        Beginnings

             AEG Magnetophons, Bing Crosby and the beginnings of Ampex
                   The video tape recorder has been in existence for about 50 years. The
                   only way to record video before video tape recorders was to use film.
                   Film did not lend itself well to television. It required a telecine to convert
                   the film to a television signal.
                   During the second world war John (Jack) Mullin was stationed in
                   England as part of the Signal Corps. He became intrigued by the fact
                   that the Germans were able to transmit propaganda and music in the
                   middle of the night. The music was particularly interesting, as the quality
                    was very good, but the music was orchestral. It seemed unlikely that this
                    quality came from 78rpm record disks, and it seemed even more unlikely
                    that entire orchestras were being employed to play every night.
                   As the Allied forces pushed forward into Germany Jack found the
                   machine that was able to play out with such quality. It was an AEG
                   Magnetophon. He grabbed two and proceeded to ship the mechanical
                   parts back to the States using the war souvenir parcel service. He
                   reassembled a machine rebuilding the electronics and making a few
                   improvements on the original AEG design. He demonstrated the
                   improved Magnetophon at a number of venues.
                   Bing Crosby saw one of the demonstrations and became interested. He
                    desperately wanted to avoid doing live radio shows every day, and knew
                    that the quality of the shellac based recording methods available at the
                    time was so poor that audiences at home could tell the difference
                    between a live show and a recorded one.
                    Bing Crosby invested $50,000 in Ampex to have John Mullin’s machines
                    produced on a more commercial basis. Ampex was a very small
                    company based in Redwood City, California. It had been making electric
                    motors for aircraft during the war and was looking for new projects to
                    get involved in. Bing Crosby agreed the investment on the basis that his
                    company Crosby Enterprises would become the sole marketing channel
                    for Ampex machines.
                    Ampex audio recorders were very successful and allowed radio
                    programmes to be pre-recorded and edited before broadcast, while still
                    maintaining such quality that radio listeners could not tell the difference.
                    In the meantime 3M had made significant advances in the formulation of
                   magnetic tape, and Ampex had set up a division specifically to make
                   tape to supply the rapid increase in demand.



            The first commercial video recorders
                 At the end of the 1940’s television was enjoying a jump in popularity.
                 Television producers had to use modified film technology to record
                 television shows. The quality was poor. A new method of recording
                 video was desperately required, so that producers could edit shows and
                 delay transmission for different time zones.
                  In 1949 John Mullin approached Bing Crosby and proposed that the
                  same plan that was used to design a commercial audio tape machine be
                  used for video. A team within Ampex was put together and the first
                 working machine was demonstrated in 1950.

            Other early machines
                  In England the BBC began research work in 1952 on a video recorder
                  they called VERA (Vision Electronic Recording Apparatus).
                 By the mid 1950’s other companies like RCA had started projects to
                 develop a video tape recorder.

       The 1970’s
                  During the early 1970’s several companies, including Sony, Teac and
                  JVC, introduced a semi-professional format based on a ¾” tape, called
                  U-Matic.
                 The 2” machine was superseded by machines using a 1” tape during the
                 mid 1970’s, with companies like Sony and Bosch entering the fray. This
                 format was truly helical, with the tape now wrapped around the drum
                 which spun almost in line with the tape, rather than at right angles to it.
                 The Bosch machines were ratified as the B format while the SMPTE
                 ratified the 1” tape standard as the more successful C format.

       The 1980’s
                  By the early 1980’s ½” tape based machines began to appear. The
                  most successful of these were the Sony Betacam and Betamax
                 formats. Betacam, the broadcast format, was later improved with the
                 introduction of Betacam SP (Superior Performance).
                 The domestic ½” format, Betamax, eventually lost the commercial battle
                 with the VHS (Video Home System) format, although it was technically
                 superior.
                  In the late 1980’s digital video tape recorders began to appear with
                  the D1 and D2 tape formats. A digital version of Betacam SP was
                  introduced. Called Digital Betacam, this format became a widely
                 accepted standard for high quality digital television recording.

       The 1990’s
                  ½” tape formats based on the original Betacam format were introduced
                  during the second half of the 1990’s. They brought MPEG compression
                  to mainstream broadcast tape recording, along with high quality low bit
                  rate recordings, metadata, and a bridge between streaming technology
                  (tape) and file based technology (computers).


                   The DV format was introduced during the mid 1990’s. Originally
                   designed as a digital replacement for VHS and Hi8, it has been more
                   popular as a domestic camcorder format rather than for recording
                   television programmes at home.
                    In the professional and broadcast arenas manufacturers have squeezed
                   extra quality and performance out of the DV format to produce the
                   DVCam (Sony) and DVCPro (Panasonic) formats suitable for more
                   professional and broadcast use.
                    DV, and its high quality derivatives, are helical scan systems; indeed
                    there is actually little fundamental mechanical difference from the very
                    first true helical scan video recorders. They are just a lot smaller.


The present day
        The domestic arena
                   In the domestic environment VHS is still king, although its days are
                   almost certainly numbered. The general quality of television output (from
                   an image point of view) has been steadily increasing over the last few
                   years and people are starting to realise just how bad VHS is. Even from
                   a convenience point of view VHS is starting to look cumbersome and
                   fragile.
                   DV was designed as a possible replacement for VHS. However,
                   manufacturers have never produced a DV equivalent of the
                   ubiquitous domestic VHS recorder, with its remote control and
                   timer functions. It looks likely that VHS will be superseded by
                   recordable optical disk rather than tape.

        The professional and broadcast arenas

             The transition to hard disk
                   Broadcast television has seen an increased use of hard disk technology.
                   Indeed people have predicted that tape will be replaced by disk for many
                   years, and yet new tape formats keep appearing, and broadcasters are
                   still buying tape based technology.
                   It is true that hard disk technology is being used a lot more than it used
                   to be, and it is slowly replacing areas of the broadcast chain previously
                   occupied by tape technology. Hard disk is now used heavily in post
                   production and editing.
                   However, tape is still cheaper and more robust than hard disk.
                   It remains a popular choice for acquisition and archive
                   storage. All popular camcorders in use today use tape: it is
                   removable, can be treated with a fair amount of disrespect, and
                   is readily available.
                   Archive and long term storage systems use tape, although video
                   and audio material is now generally stored as digital data, and
                   is often compressed to further save space on the tape. Tape
                   robotics machines allow large numbers of tapes to be stored
                   safely, with the advantage of automatic scheduling and database
                   support, so that video and audio archives can be searched.



                 It is unlikely that hard disk will entirely replace tape. It is more likely that
                 optical disk, using blue laser technology, will replace tape.

            The cost/quality balance
                 Cost is now more of an issue than it ever was. The newer tape formats
                 are a careful balance between cost and quality. ‘Cost’ means total cost,
                 not just the price of the equipment, but also maintenance costs and
                 running costs, commonly referred to as ‘total cost of ownership’. ‘Quality’
                 means the image quality, as well as manufacturing quality and quality of
                 after sales service and support.
                 Digital tape technology satisfies this careful balance much better than
                 analogue tape technology. All major television companies now use
                 digital tape technology, and almost all of these companies record new
                 material in digital format. However, with large analogue tape archives,
                 analogue tape players are still in popular use.
                 For very high quality work, D1 is still used. It is expensive but
                 offers the kind of quality not attainable by any other broadcast
                 format. Many companies use Digital Betacam, and a few the M2
                 format. While these formats are slightly compressed they still
                 offer superb image quality at a much more realistic price.
                 Betacam SX, DVCam and DVCPro are popular for news gathering
                 where convenience and price are more important. Indeed domestic DV
                 is being used in many professional areas.

            The stream/file bridge
                 The major thrust in digital tape technology is in bridging the very difficult
                 gap between streams and files.
                 Video and audio are basically streams. They have no beginning and no
                 end, and do not contain any kind of header, label, or other information.
                 Video and audio are also strongly related to time. They are continuous
                 and must be played at the correct speed without breaks.
                 Files, on the other hand, are contained chunks of data. They have
                 a header, information about the contents of the file, and so on.
                 Files are also not related to time: when copying files from one
                 location to another it really does not matter how long it takes,
                 how the data is actually transferred, or whether the beginning of
                 the file gets to its destination before the end.
                 With increasing use of computer technology in broadcast, television
                 companies require a way of bridging this gap between these two basic
                 methods of storing media.
                 Manufacturers are starting to produce tape recorders that can
                 place extra information into the video or audio stream, much as a
                 computer file can. This so-called ‘metadata’ is the focus of a
                 lot of research work.
                 Manufacturers are also starting to introduce tape formats that can output
                 video and audio in packets, or in file structures, so that they can be
                 saved as files on hard disk, and treated as files within the television
                 station.
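The stream-to-file idea can be sketched in a few lines. This is purely illustrative: the function and field names below are hypothetical, not any real broadcast API.

```python
# Illustrative sketch only: wrapping a continuous stream into self-describing
# chunks, each carrying a small header of metadata. All names are hypothetical.

def packetise(stream_bytes, chunk_size, metadata):
    """Split a raw stream into (header, payload) chunks."""
    packets = []
    for seq, start in enumerate(range(0, len(stream_bytes), chunk_size)):
        payload = stream_bytes[start:start + chunk_size]
        header = {"seq": seq, "length": len(payload), **metadata}
        packets.append((header, payload))
    return packets

# A 10-byte "stream" split into 4-byte chunks gives three packets, each of
# which can be sent over a network in any order and reassembled by sequence.
packets = packetise(b"\x00" * 10, 4, {"title": "news item"})
```

Because each packet describes itself, delivery speed and order no longer matter, which is precisely the property files have and raw streams lack.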





                   The MPEG IMX format is a good example of this. Although this format is
                   essentially a stream recording system just like the original Ampex
                   VR-1000 of the mid 1950’s, it is able to output a stream as a series of
                   chunks, or packets of data.
                   The E-VTR takes this one step further and allows sections of
                   video or audio to be marked and played out as a file. Although
                   the material was recorded from a video or audio connection, as
                   a stream, locked to time, it can now be played out to a
                   computer network as a file with associated metadata, at a speed
                   governed by the network, in bursts, faster or slower than real
                   time.







Magnetic recording principles
                     Although magnetic recording heads have become a lot smaller,
                     the materials used better, and manufacturing tolerances a lot
                     tighter, all video tape recorders depend on the same basic
                     principles of recording a signal to magnetic tape.

            Principle of a magnetic field
                     Man has known of the existence of magnets for several
                     thousand years. The Chinese used them to invent the compass
                     about one thousand years ago.
                     However the fact that an electric current develops a magnetic field was
                     not discovered until much later. In 1820 Hans Christian Oersted
                     discovered that a straight wire, carrying an electric current, developed a
                     magnetic field, which circulates around the wire. Andre-Marie Ampere
                     discovered that the magnetic field could be concentrated and magnified
                     by winding the wire into a coil. William Sturgeon later discovered that
                     placing an iron core inside the coil greatly increased the strength of the
                     magnetic field, and bending the coil and iron into a ‘U’ shape further
                     concentrated the field at the two ends of the ‘U’ shape. A little later
                     Joseph Henry insulated the wire, thus enabling larger and
                     tighter coils to be wound.




                     [Figure: magnetic flux circulating around a current-carrying
                     wire, and concentrated by the windings of a coil]

Figure 74                                       Magnetic field around a wire and coil

                     It is perhaps a pity that Oersted, Ampere, and Henry have all
                     had their names immortalised as units of magnetic field
                     strength, current and inductance, while Sturgeon’s name has
                     sunk into relative obscurity.
                     The magnetic field is known as flux, and its strength as the
                     magnetic flux density. Flux finds less resistance through
                     some materials than through others. Many materials are
                     magnetic, meaning that they become magnetised if subjected to
                     a magnetic field. The ability to remain magnetised is called
                     the remanence.




                     [Figure: windings around a ferrite core (toroid), with the
                     flux confined to the core]

Figure 75                                                                  The toroid


            Principle of electromagnetic induction
                     The opposite of the principle of a magnetic field is that of
                     electromagnetic induction. When a magnetic field is applied to a wire it
                     induces a current to flow in the wire. To be more exact, it
                     is only when the magnetic field changes that a current is
                     induced: no matter how strong the field, no current will be
                     induced if it remains constant. Conversely, a small magnetic
                     field can induce a large current if it changes rapidly.
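This dependence on the rate of change can be put into numbers. The figures below (a 100-turn coil and a 1 mWb flux change) are arbitrary example values, not taken from any particular head design.

```python
# Magnitude of the induced EMF from Faraday's law: emf = N * dPhi/dt,
# where N is the number of turns and dPhi/dt the rate of change of flux.

def induced_emf(turns, delta_flux_wb, delta_t_s):
    return turns * delta_flux_wb / delta_t_s

# The same 1 mWb flux change through a 100-turn coil:
slow = induced_emf(100, 1e-3, 1.0)   # spread over 1 s -> 0.1 V
fast = induced_emf(100, 1e-3, 1e-3)  # over 1 ms       -> 100 V
```

The same flux change produces a thousand times the voltage when it happens a thousand times faster.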




Figure 76                                                              The basic magnetic record head





       Using the qualities of magnetic materials in video tape recorders
                 The purpose of a video tape recorder is to use a record head to
                 magnetise the magnetic material on the tape, and for the playback head
                 to detect this magnetisation.
                  Record heads use the same basic principles discovered by
                  Sturgeon. By bending an iron core into a ring, a high flux
                  density can be made to flow around the ring. A small gap is left
                  in the ring; flux jumps this gap, bulging slightly outwards, and
                  can be used to magnetise the surface of the tape.
                 However, although a large gap results in a greater flux bulge which has
                 a greater influence on the tape, it also offers greater resistance to flux
                 and therefore reduces flux density. Thus the design of the record head
                 meets its first compromise.
                 Playback heads use the same design. When the magnetised tape
                 passes across the head gap, it induces a small magnetic flux in the head
                 core. As the flux changes, so a small current is induced in the coils.
                  Thus record and playback head cores need to be made from
                  magnetic material with low flux resistance and low remanence.
                  Conversely, the magnetic material used in tape needs high
                  remanence so that the maximum amount of signal can be recorded.


The essentials of helical scan
       The bandwidth problem
                  Humans can hear audio from about 20Hz to about 20kHz; the
                  bandwidth of audio is therefore about 20kHz. If we consider
                  modulation or sampling, the Nyquist criterion doubles this to
                  about 40kHz.
                 No matter how we look at it, the frequencies involved are well within the
                 capability of magnetic tape and recording head technology, using
                 stationary record and playback heads.
                 Video is very different. Broadcast channels have a total bandwidth of
                 about 6MHz. In component form we would expect to retain as much of
                 the quality as possible, and give the luminance signal as near to 6MHz
                 as we can. Each colour difference signal may have a bandwidth of about
                 3MHz.
                  Add these bandwidths together, take into account the Nyquist
                  criterion, and any recording system will need at least 24MHz of
                  bandwidth!
                 If nothing else, these somewhat crude calculations show us that we
                 cannot record video on magnetic tape in the same way we do with
                 audio. Either the recording system has to be radically different, or the
                 bandwidth must be reduced.
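The crude arithmetic above can be written out explicitly:

```python
# Rough bandwidth budget for recording component video, per the text.
luma_mhz = 6.0                          # luminance bandwidth
chroma_mhz = 3.0                        # each colour difference signal
baseband = luma_mhz + 2 * chroma_mhz    # 12 MHz of component video
recording = 2 * baseband                # Nyquist: at least 24 MHz

# Audio, by comparison:
audio_khz = 20.0
audio_recording = 2 * audio_khz         # about 40 kHz
```

A recording bandwidth some 600 times that of audio is what pushed designers away from static heads.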

            Head to tape speed
                 Key to the problem of bandwidth is the relative head to tape speed. In
                 analogue audio recorders this is achieved by pulling the tape across a
                 static head.



                          Since the bandwidth of a video signal is far higher, one
                          could simply increase the tape speed. This idea was
                          tried in the first video recorders. Ampex were at the
                          forefront of video tape recorder development and
                          demonstrated recorders using high tape speeds at the
                          beginning of the 1950’s.
                          Other groups were also working on video recorder
                          designs, such as the BBC’s VERA (Vision Electronic
                          Recording Apparatus), which ran through tape at 21
                          metres per second; with reels 21” in diameter, just 15
                          minutes could be recorded. In the States RCA built a
                          prototype that ran through tape at 9 metres per second.
                          An improvement, maybe, but it only gave 9 minutes of
                          recording time.
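The tape consumption of these linear-speed machines is easy to check:

```python
# Tape consumed by a static-head (linear) video recorder at the quoted speeds.
def tape_length_km(speed_m_per_s, minutes):
    return speed_m_per_s * minutes * 60 / 1000

vera = tape_length_km(21, 15)   # BBC VERA: ~18.9 km of tape for 15 minutes
rca = tape_length_km(9, 9)      # RCA prototype: ~4.9 km for 9 minutes
```

Nearly nineteen kilometres of tape for a quarter of an hour of video makes the impracticality obvious.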
                         It became clear to all those groups working on video recorder designs
                         that the kind of tape speed required for the kind of bandwidth normally
                         found in video made the machine difficult to control and used up vast
                         amounts of tape.
                         Designers eventually decided that, to achieve the relative head to tape
                         speeds required, the recording head itself could not stay still.

            Early scanning techniques
                         Many of the earliest video recorders used a moving head to increase the
                         relative head to tape speed. From the very first prototypes this was
                         achieved by mounting the heads on a rotating drum.

                Ampex Mark 1 arcuate recorder
                         An early notable attempt to increase the head to tape speed was the
                         Ampex Mark 1 arcuate recorder. Built in 1952, this machine wrote the
                         video information onto tape as arcs using three heads on the drum. It
                          proved unreliable and difficult to regulate, and the
                          geometry wasted more tape than necessary. Arcuate tape
                          machines were not successful.

                  [Figure: the arcuate recorder, showing the tape, guide and
                  rotating head drum, with the recorded tracks written as arcs
                  across the tape]

Figure 77                                                         The arcuate recorder




                                 The Ampex Mark 1 did give rise to the transversal scanning technique
                                 used by their quadruplex machines.

                      The Ampex VR-1000 quad recorder
                                 The original Ampex VR-1000 machine used 4 heads fitted 90 degrees
                                 apart on a spinning drum. (Hence the name “quadruplex” or simply
                                 “quad”.) The drum spins in line with the tape. The tape itself is 2 inches
                                 wide and is curved by a vacuum chamber, to fit around the drum.
                                 Each head records a stripe of video across the tape, called a track. As
                                 soon as one head breaks contact with the tape the next one is ready to
                                 carry on. The tape moves a little more than the width of one of these
                                 tracks before the next head comes along. This keeps tape speed slow.
                                 The drum spins quickly. This makes the head to tape speed high and
                                 allows a high bandwidth signal to be recorded.

                  [Figure: the Quad tape path, showing the drum motor and video
                  heads, the vacuum chamber, the audio/cue erase and
                  record/playback headstacks, the control headstack, the capstan
                  and pinch wheel, the idlers, and the audio, cue, control and
                  video tracks on the tape; drum detail shown with the vacuum
                  chamber removed]

Figure 78                                                           The Quad tape path

                     The drum spins at 14400rpm (USA machines), recording 960
                     tracks per second. 16 tracks make up each video field, so
                     each head records or plays back about 16 lines of video.
                     Quad recorders use up 15 inches of tape per second; with a
                     4800 foot reel of tape it was possible to record just over an
                     hour of video.
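These quad figures are internally consistent, as a quick calculation shows (262.5 is the lines-per-field figure for a 525-line system):

```python
# Cross-checking the quoted quadruplex numbers (525/60 machines).
drum_rpm = 14400
heads = 4
tracks_per_second = drum_rpm / 60 * heads   # 960 tracks per second
tracks_per_field = tracks_per_second / 60   # 16 tracks per 60 Hz field
lines_per_track = 262.5 / tracks_per_field  # ~16 lines of video per track

tape_speed_ips = 15                         # inches per second
reel_feet = 4800
minutes = reel_feet * 12 / tape_speed_ips / 60   # 64 minutes per reel
```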
                    The quad machine is generally not considered a true helical scan
                    machine, because the tracks’ angle is almost perpendicular to the tape’s
                    direction. The scanning method is generally referred to as transversal.
                    A longitudinal control track is also recorded along the edge of the tape
                    so that the machine can lock to it, with the playback heads exactly
                    following the tracks as they were recorded.
                    Audio can also be recorded with a static head and a longitudinal track,
                    and there is also provision for a lesser quality audio track called cue.
                     [Figure: the audio track along one edge of the tape, the
                     near-vertical video tracks separated by guard bands, and the
                     control and cue tracks along the other edge]

Figure 79                                                     The Quad tape footprint


                     Quad remains the longest-lasting video tape format of all
                     time. This is partly because there was no alternative video
                     recording format for many years, and partly because no format
                     since has remained in service for as long before being
                     superseded.
                     However, Quad had its quirks. Tape operators had to ensure
                     regularly that the machine was aligned correctly; the drum
                     and vacuum chamber were particularly sensitive. The forces
                     applied literally stretched the tape, and applying exactly
                     the same level of brutal treatment to the tape every time it
                     was played was not easy.

            True helical scanning
                     Helical scanning uses the same basic principle as transversal
                     scanning. However, instead of the drum spinning vertically,
                     producing tracks on tape that are almost vertical, helical
                     scanning uses a drum that is nearly horizontal.
                    Another difference between transversal and helical scanning is the tape
                    wrap. In transversal scan the vacuum chamber achieves a slight wrap
                    around the drum, stretching and distorting the tape as it does so.
                     However, in helical scan designs the wrap is huge by
                     comparison. In some formats wraps of almost 360 degrees have
                     been used, although a little over 180 degrees is more common.
                     Even though the wrap is large, none of the stresses common in
                     Quad machines are placed on the tape in helical scan
                     machines.
                     [Figure, viewed from above: the tape wrapping around the
                     rotating upper drum between the entrance and exit guides; the
                     active wrap runs from the track start point to the track end
                     point, with one head writing a track and the other out of
                     contact]

Figure 80                                                  Helical scan (from above)

                     During recording and playback the tape moves slowly through
                     the machine and around the drum. The point where the tape
                     meets the drum is called the entrance side; the point where
                     the tape leaves it is called the exit side.

                     [Figure, viewed from the side: the rotating upper drum and
                     static lower drum, the rabbet cut into the lower drum, the
                     entrance and exit guides, and the previously written tracks
                     on the tape]

Figure 81                                               Helical scan (from the side)

                     The drum assembly itself sits in the machine at a slight
                     angle, and in most cases consists of two halves. The top half
                     spins and the bottom half is static. The record and playback
                     heads are fitted to the bottom edge of the top half.
                   The bottom half has a rebate cut into it, called a rabbet. The rabbet is cut
                   at an angle, and in most machines is very close to the top of the lower
                   drum near to the entrance side, and much lower at the exit side. Taking
                   into account the angle of the drum assembly as a whole, the rabbet is
                   effectively horizontal. The bottom edge of the tape rests on the rabbet as
                   it passes around the drum assembly.
                    The angled drum assembly and the way the rabbet is cut mean
                    that the heads describe a helical path across the tape as the
                    drum spins. In some formats the track angle goes upwards, in
                    others downwards, depending on the angle of the rabbet and the
                    rotational direction of the drum. In most modern tape formats
                    the drum spins anti-clockwise. The tracks recorded on tape are
                    about 5 degrees from the line of the tape, and very long
                    compared to those in transversal scanning machines.
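The shallow track angle is what makes helical tracks so long. As an illustration only (the 5-degree angle is from the text; the 10 mm span across the tape is an assumed example figure, not a real format specification):

```python
import math

# Length of a straight track crossing a given span of tape at a shallow angle.
def track_length_mm(span_across_tape_mm, angle_deg):
    return span_across_tape_mm / math.sin(math.radians(angle_deg))

helical = track_length_mm(10, 5)   # ~115 mm of track from 10 mm of tape width
```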


Modern video recorder mechadeck design
                   The mechadeck is the mechanical part of a video recorder, consisting of
                   the tape reels, any servo mechanism, including the pinch wheel and
                   capstan mechanism, tension regulation, the drum and all the record and
                   playback heads, tape cleaners, head cleaners and cassette handling
                   mechanism.
                   Most modern video tape recorders now have similar mechadeck
                   designs. Tape, normally enclosed in a cassette, travels from the left
                   (supply) reel to the right (take-up) reel during normal recording or
                    playback. The route taken by the tape from the supply reel to
                    the take-up reel is called the tape path.
                   The tape path from the supply reel to the capstan/pinch wheel is called
                   the supply side. From the capstan/pinch wheel to the take-up reel is
                   called the take-up side. The supply side of the tape path is by far the
                   most important part. It contains all the record and playback heads. Good
                   supply side tension regulation is important. There is often no take-up
                   side tension regulation at all.
                   Many machines have a tape cleaner placed in the tape path as the tape
                   leaves the supply reel. There will also be a tension regulator in the tape
                   path between the supply reel and the drum, to ensure that the tension
                   around the drum is correct.
                   Many video tape machines have static heads before the drum. A full
                   erase head will be fitted to all recorders to erase everything on the tape
                   before any new recording is made. Some machines also include a
                   control head. This head records special pulses on a longitudinal track
                   either along the top or bottom edge of the tape, and plays these pulses
                   back to help the servo system lock during playback.
                    There is a guide just before the tape wraps around the drum.
                    Called the entrance guide, it has a flange that touches the
                    top edge of the tape, stopping it from riding up as it wraps
                    around the drum. The tape is prevented from dropping by the
                    drum rabbet.



                 There is another top touching guide on the exit side of the drum, called
                 the exit guide. There may also be one or more static heads between the
                 exit side of the drum and the capstan/pinch wheel. These are commonly
                 used for timecode and audio, but may also be used for control.
                 The capstan is a precision servo controlled motor responsible for pulling
                 the tape through the tape path at the correct speed and position. A pinch
                 solenoid will force a soft rubber pinch wheel against the capstan
                 squeezing the tape between the two. This force is strong enough to stop
                 the tape from slipping but not so strong as to damage it.
                 Although the capstan rotates at essentially a constant speed the servo
                 system constantly speeds up and slows down by minute amounts to
                 keep the tape in the correct position relative to the drum, and to ensure
                 that the video heads are moving exactly up the centre of the helical
                 tracks. The control head and track are used in some machines to
                 accomplish this. Others use the RF signal from the helical tracks.
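As a rough illustration, the capstan correction described above behaves like a simple proportional control loop. The following Python sketch is purely illustrative; the gain, units and plant model are hypothetical and do not represent any real machine's servo electronics:

```python
# Purely illustrative sketch of the capstan correction loop.
# Gain and units are hypothetical, not from any real machine.

NOMINAL_SPEED = 1.0   # nominal tape speed, arbitrary units per tick
KP = 0.3              # proportional gain (hypothetical)

def run_servo(initial_error: float, ticks: int = 50) -> float:
    """Trim the capstan speed by minute amounts until the tape phase
    error (tape position relative to the drum) approaches zero."""
    error = initial_error
    for _ in range(ticks):
        correction = -KP * error          # minute speed-up or slow-down
        speed = NOMINAL_SPEED + correction
        error += speed - NOMINAL_SPEED    # error changes by the speed trim
    return error

print(f"residual phase error: {run_servo(0.5):.9f}")
```

The essential point is that the capstan never stops or reverses; it only makes minute trims around its nominal speed until the tape phase error converges towards zero.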
                 The capstan and pinch wheel isolate the supply side of the tape path
                 from the take-up side.
                 Various guides guide the tape back into the cassette and onto the take-
                 up reel. Some machines include a take-up tension regulator to ensure
                 that there is a small amount of slack on the take-up side to allow for
                 sudden changes in direction during normal operation, but not so much
                 as to start throwing tape loops. The drum assembly is at an angle and
                 some machines include an angled guide to ensure that the tape is
                 straight as it re-enters the cassette.

            Guard bands
                 Older helical scan machines recorded analogue composite video. Each
                 track contained both the luminance and the colour video information. As
                 with every video recorder before and since, it was important to ensure
                 that the playback heads followed the recorded tracks exactly.
                 All early machines used a longitudinal control track running along the top
                 or bottom edge of the tape. The pulses recorded on this track helped the
                 machine find the beginning of each helical track.
                 The control track is not an exact method of finding the beginning of each
                 helical track. The helical tracks themselves are very thin, and it is
                 possible for the control head to be in the wrong place. Any error in the
                 position of the control head means the video heads will not move up the
                 centre of the helical tracks.
Figure 82: Guard bands (tape with helical tracks separated by guard bands)



Part 15 – The video tape recorder

                      If the helical tracks are packed together tightly there will be a risk of
                      playing back a portion of video from another helical track. Editing also
                      becomes problematic, as the recorder runs the risk of over-recording
                      material on tape that it should not.
                      A guard band is a space between helical tracks with no recorded signal.
                      Early machines used guard bands to prevent the video heads from
                      picking up the recording from adjacent tracks during playback, if the
                      control head was slightly in the wrong position, and to prevent the
                      machine overwriting the wrong helical track during edits.
                      However, guard bands use up tape. Later machines abandoned guard
                      bands in favour of track azimuth, and thus saved tape.

            Helical tracks with track azimuth
                      Later video machines recorded component video, with separate circuitry,
                      record heads, playback heads, and tracks for luminance and colour.
Figure 83: Track azimuth (record/playback head showing the head gap and coils; head gaps with no azimuth, positive azimuth and negative azimuth; helical tracks on tape)





                 Making a tape machine that is able to record and play back entirely in
                 component increases the quality of recording over older composite
                 machines, and removes the problems associated with editing composite
                 material. However, component video recorders are more expensive than
                 composite ones, as they are effectively two video recorders in one.
                 Designers needed a way for these machines to differentiate the helical
                 tracks responsible for luminance from those for colour. The method
                 adopted was track azimuth.
                 Track azimuth involves tilting the head gaps over at an angle. The
                 luminance head gaps are tilted over positively and the colour head gaps
                 negatively.
                 During recording the luminance tracks are recorded with a positive
                 azimuth and the colour tracks with a negative azimuth.
                 If the machine is badly aligned and a luminance playback head is trying
                 to play back a colour track, the angle of the recording will be incorrect. In
                 fact it will be incorrect by twice the azimuth angle. This will severely
                 reduce the signal.
                 Azimuth angles of about 15 degrees are popular. This gives a total error,
                 if each head is on the wrong track, of about 30 degrees.
                 Azimuth replaces the need for a guard band. The colour tracks are
                 effectively guard bands for the luminance heads and vice versa. Helical
                 tracks can be packed next to each other, saving a lot of tape and
                 increasing the tape’s recording capacity.
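The effect of an azimuth mismatch can be illustrated with the standard azimuth-loss relationship, attenuation = |sin(x)/x| with x = pi * w * tan(theta) / wavelength, where w is the track width and theta the mismatch angle. The track width and recorded wavelength in this sketch are hypothetical example values, not figures for any particular format:

```python
import math

# Illustrative use of the standard azimuth-loss relationship.
# Track width and recorded wavelength are hypothetical examples.

def azimuth_attenuation(track_width_um: float, wavelength_um: float,
                        mismatch_deg: float) -> float:
    x = (math.pi * track_width_um *
         math.tan(math.radians(mismatch_deg)) / wavelength_um)
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# On the correct track there is no mismatch; on the wrong track the
# mismatch is twice the recorded azimuth, e.g. 2 x 15 = 30 degrees.
print(azimuth_attenuation(20.0, 1.0, 0.0))    # full signal on-track
print(azimuth_attenuation(20.0, 1.0, 30.0))   # severely reduced off-track
```

Even with these rough example numbers, a head on the wrong track recovers only a few percent of the signal, which is why azimuth recording can replace guard bands.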

       Video head design
                 The principles used by video tape recorders to record a signal onto
                 magnetic tape have not changed since the very first tape recorders.
                 They still rely on a donut-shaped head made from ferrite, or some similar
                 material, with a slot cut in its front face and a coil wrapped around its
                 back. The dimensions used in modern video recorders may be a lot
                 smaller, but the donut idea can still be found in every video record and
                 playback head.
                 The video heads, and the tracks they record, are thin. Older formats
                 used heads close to 100 µm thick. Modern formats use heads less
                 than 10 µm thick.
                 Video heads are no longer the classical round donut shape. They are
                 square. The surface in contact with the tape is a long rectangle. This
                 reduces tape bounce at the head gap and reduces wear.
                 The coils are wound on the sides of the head. Only a few turns are
                 required on each side for the head to be effective.

            Channeling flux
                 One of the challenges facing head designers is to channel as much flux
                 to the front of the record head gap as possible, where the head is in
                 contact with the tape, and where recording and playback will take place.
                 Likewise the front of the playback head gap needs to be as sensitive as
                 possible to achieve maximum signal off the pre-recorded tape.


143                                                 Sony Broadcast & Professional Europe
Part 15 – The video tape recorder


Figure 84: Video head designs (the ferrite head: head gap, core, etching, track width, gap depth, coils, chip width, back profiling; the MIG head: sputtered Sendust region, Sendust or Softmax regions; the TSS head)

                                        The first modification is to cut away at the back of the head, where the
                                        head gap is. This forces flux lines forwards to the front of the head.
                                        The second modification is to introduce a different material with low
                                        reluctance to the front face of the head, just where the head gap is. Flux
                                        prefers to jump the gap at this material, rather than the ferrite behind.
                                        Several materials are used, often with exotic names to hide their true
                                        composition. Materials like Softmax and Sendust are used. However
                                        these materials tend to be softer than ferrite and therefore tend to wear
                                        away quicker. Only a thin sliver is placed on the head, and only close to
                                        the head gap, rather than across the whole front face.





       Automatic tracking
                 Another important technology that has been a vital part of modern
                 professional and broadcast video tape machines is the automatic
                 tracking playback head.
                 While servo systems using a control track were able to bring the video
                 heads, in particular the playback heads, close to the centre of the helical
                 tracks, there was a certain degree of error due to badly adjusted servo
                 electronics, or an imperfectly adjusted control head.
                 Another problem is even more annoying. The geometry of all helical
                 scan video recorders is only correct at play speed, because that is the
                 speed at which the tape was recorded. If the machine is speeded up
                 slightly, slowed down, or paused altogether, the geometry changes. Now
                 the playback heads will not travel exactly up the centre of the helical
                 tracks, and will wander off track and possibly cut across adjacent helical
                 tracks.
                 This is annoying for editors who regularly want to play back at other than
                 play speed, or pause the video machine altogether and look at one
                 frame or field on its own.
                 Automatic tracking video playback heads eliminate these problems.
                 Introduced by Ampex in 1977 as the Automatic Scan Tracking (AST)
                 system and by Sony in 1984 as the Dynamic Tracking (DT) system, both
                 systems relied on moving the playback heads to keep them in the centre
                 of the helical tracks.

            Automatic tracking playback heads
                 Automatic tracking video playback heads use piezo-electric crystal
                 bimorphs. The bimorph consists of two piezo-electric crystals bonded
                 together. When a voltage is applied across the bimorph one crystal
                 expands while the other contracts. This causes the bimorph to bend.
                 Reversing the voltage reverses the bend. One end of the bimorph is
                 fixed to the drum. The playback head is placed on the other end. In early
                 designs, including the Ampex AST designs, one bimorph was used. Two
                 bimorphs are used in later designs, because this keeps the head itself
                 perpendicular to the tape surface.
                 One disadvantage with this kind of tracking system is that the bimorphs
                 require a high voltage to bend sufficiently. Any machine with automatic
                 tracking heads needs brushes and slip rings to transmit these high
                 voltages to the drum. Furthermore the brushes and slip rings must
                 maintain good contact, and the drum must contain smoothing circuitry.
                 Any intermittence in the supply to the bimorphs could generate
                 electromagnetic radiation that could be disastrous to the delicate record
                 and playback process.

            Automatic tracking in operation
                 A small alternating voltage, at a frequency of about 450kHz, is applied to
                 the bimorphs, causing the heads to wobble continually. The wobble
                 continually takes the head slightly off track, causing a slight drop in the
                 RF signal. The servo system continually checks the level of the RF
                 signal from the


                   heads, keeping the drops in RF as small as possible, by adding a DC
                   voltage to the wobble voltage.
                   Automatic tracking playback heads allow operators to change the
                   playback speed of a helical scan tape machine and still maintain a
                   steady picture. They have become an essential part of professional and
                   broadcast tape machines.
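The wobble-and-correct scheme above can be sketched as a simple hill-climbing loop: dither the head, compare the RF level on each side of the wobble, and nudge the DC offset towards the stronger signal. The RF envelope model, dither size and gain below are all hypothetical:

```python
import math

# Illustrative sketch of dither-based automatic tracking.
# The RF envelope model, dither and gain are hypothetical.

def rf_level(offset_um: float) -> float:
    """Toy RF envelope: strongest when the head is centred on the track."""
    return math.exp(-(offset_um ** 2) / 50.0)

def track(initial_offset_um: float, dither_um: float = 1.0,
          gain: float = 2.0, steps: int = 200) -> float:
    dc = initial_offset_um
    for _ in range(steps):
        up = rf_level(dc + dither_um)     # RF at one extreme of the wobble
        down = rf_level(dc - dither_um)   # RF at the other extreme
        dc += gain * (up - down)          # DC offset moves towards stronger RF
    return dc

print(f"final head offset: {track(8.0):.3f} um")
```

However far off-centre the head starts, the loop walks it back towards the track centre, where the RF drops on both sides of the wobble are equal and the DC correction settles.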

        Tension regulation
                   It is vital that the tape tension around the drum is correct and maintained
                   within a small range. If the tape is too tight the video heads and tape will
                   wear out rapidly. If the tape is too loose the video heads will not be able
                   to maintain good contact with the tape surface and there is a risk that the
                   machine will throw tape loops or stick around the drum.
                   There are two types of tension regulator, the purely mechanical type and
                   the electromechanical type. Most domestic machines, cheaper and
                   smaller professional machines, use purely mechanical tension
                   regulators. They are simple, light and cheap.
                    All high-end professional and broadcast tape machines, especially those
                    intended for studio use, use electromechanical tension regulators.
                    Although they are generally heavier, more complex and more expensive,
                    they offer much finer tension regulation, and a chance for the servo
                    system to monitor the tension regulation process. This in turn allows the
                    machine to have different tension regulation response times for different
                    modes of operation, and fault detection in case the tape sticks or breaks.

             The principles behind good tension regulation
                   All tension regulators operate in the same basic way. During recording
                   or playback the capstan and pinch wheel pull the tape out of the supply
                   reel and around the drum. The take-up reel motor will apply a constant
                   pull on the tape. This pull is very light but it ensures that any tape that
                   has come through the capstan and pinch wheel is drawn into the take-up
                   reel in a tidy fashion.
                    The supply reel motor will be trying to resist tape being drawn out
                    of the supply reel. This is what produces the tension. The higher the
                    resistance, the higher the tension.

             Mechanical tension regulators
                   Mechanical tension regulators have a sensing arm with a roller on the
                   end of it, around which tape moves. The arm is connected to a spring
                   and to a friction belt which is wrapped around the supply reel table. If the
                   tape tension drops the spring will pull the arm further out, tightening the
                   friction belt around the supply reel table, increasing its resistance. The
                   capstan will continue to pull more tape out and the tension will increase.
                   Likewise if the tape tension increases, the arm will be pulled in against
                   the spring loosening the friction belt around the supply reel table, and
                   decreasing its resistance to allow tape out.





                 Mechanical tension regulators cannot handle loose tape by pulling it
                 back into the supply reel. This is because the tension regulator can only
                 stop the supply side reel motor, it cannot make it turn backwards.

            Electromechanical tension regulators
                 Electromechanical tension regulators use a sensing arm with a roller on
                 the end of it, just as mechanical tension regulators do. Likewise the arm
                 is connected to a spring; however, the spring tends to be of better
                 quality, and in some cases there may be more than one spring, to give a
                 more accurate response.
                 The arm will also have a strong magnet attached to it. One or more Hall
                 effect detectors are fixed to a circuit board, either on the mechadeck or
                 on the tension regulator assembly. The Hall effect detector will output a
                 signal corresponding to the position of the tension regulator arm. With a
                 properly aligned spring the position of the arm will also provide a
                 measure of the tension in the tape.
                 The signal from the Hall effect detector is sent to the machine’s servo
                 system, which controls the supply side reel motor. Reel motors in this
                 kind of machine are more complex than those in machines with
                 mechanical tension regulators. The servo system is able to control the
                 direction, speed and amount of torque very precisely. Rather than using
                 friction, the supply reel is effectively trying to turn backwards.
                 This ability to control the backward rotation of the supply reel motor also
                 allows electromechanical tension regulators to draw loose tape quickly
                 back into the supply reel.
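The electromechanical loop described above can be sketched as follows. The target tension, gain and plant model are hypothetical; in a real machine the Hall-sensor arm position stands in for the tension measurement and the servo trims the supply reel's backward torque:

```python
# Illustrative sketch of an electromechanical tension loop.
# Target, gain and plant model are hypothetical.

TARGET_TENSION = 0.25    # desired tension, arbitrary units
KP = 0.5                 # trim gain (hypothetical)

def regulate(base_tension: float, steps: int = 200) -> float:
    """Accumulate back-torque trims until tension reaches the target."""
    torque = 0.0
    tension = base_tension
    for _ in range(steps):
        error = TARGET_TENSION - tension   # from the Hall-sensor arm position
        torque += KP * error               # more back-torque raises tension
        tension = base_tension + torque    # toy plant: torque adds tension
    return tension

print(f"regulated tension: {regulate(0.10):.4f}")
```

Because the correction is signed, the same loop also models the ability to reverse the reel and take up slack, something a purely mechanical friction-belt regulator cannot do.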


Variation in tape path designs
                 Before the universal acceptance of cassettes for video tape recorders,
                 manufacturers designed several exotic tape paths that often required the
                 tape operator to spend a while lacing up the tape before the machine
                 could be used. Indeed we often take it for granted, as we slam another
                 cassette into the machine, that it was not always that easy.
                 Tape path designs have now settled to just a few variations since the
                 introduction of cassettes, because the machine must be able to
                 automatically draw the tape out of the cassette before recording and
                 playing can take place, and put it neatly back into the cassette before it
                 is ejected. Any complicated lacing cannot be performed.

       Terms of confusion
                 Various terms have been given to various tape wrap patterns, and there
                 appears to be a fair degree of confusion as to which one is which. The
                 term ‘omega wrap’ has been associated with many wrap patterns that
                 are not that similar.

            Alpha wrap
                 The alpha wrap takes its name from the Greek letter α. The tape passes
                 around the drum for a full 360 degrees. The wrap is sideways, with the
                 entrance and exit guides on the left or right. Alpha wrap would be very


                   difficult to achieve with cassettes because the tape passes over itself.
                   The tape must be manually laced. It is only used in machines with
                   spools. As an example alpha wrap is performed by the old Philips
                   EL3402 1” machine.

             Omega wrap
                    Omega wrap takes its name from the Greek letter Ω. The tape passes
                    around the drum for almost 360 degrees. The active wrap is about 270
                    degrees. Although the term omega is used with many cassette
                    machines, it is not actually possible to perform a true omega wrap with a
                    cassette. The tape must be laced. As an example, omega wrap is
                   employed by 1” C format machines.

             C wrap
                    The wrap pattern is actually in the shape of a backward ‘C’. C wrap is
                    possible and popular with cassette machines. The tape is drawn from
                    the cassette at one point and taken between 200 and 300 degrees
                    round the drum in an anticlockwise direction, giving an active wrap of
                    anything between 180 and 270 degrees. As an example, C wrap is
                    popular with broadcast studio
                   machines using the Betacam SP and Digital Betacam formats and other
                   similar tape formats.

             M Wrap
                   This is the most popular wrap pattern, and is used in cassette based
                   machines. Tape is drawn from the cassette at two points. It is drawn
                   round the left side of the drum, and round the right side of the drum, to
                   give a total wrap of between 250 and 300 degrees, and an active wrap
                   of anything between 180 and 270 degrees. As an example M wrap is
                   popular with domestic VHS machines and some broadcast machines
                   like the Sony D1 and D2 machines and the PVW range of Betacam SP
                   machines.


Definition of a good tape path
                    A perfect tape path would contain a perfectly circular supply and take-up
                    reel. The tape would move from the supply reel to the take-up reel in a
                   straight line without touching anything. Video and audio would be
                   recorded and played back without any heads touching the tape.
                   Clearly this is an impossibility! Compromises have to be made. The
                   record and playback heads must touch the tape. Furthermore helical
                   scanning requires that the tape be wrapped around the drum. Thus the
                   tape must change direction dramatically. Helical scanning also requires
                   accurate tape tension control.
                   The speed of the tape must be governed and regulated. Reel motors are
                   simply not good enough to accomplish this. A capstan is required.
                 Any item like a guide, drum, static head, cleaner or capstan changes
                 the tape's direction and adds friction. Spinning guides, and the drum
                 itself, are never absolutely central and always add a slight wobble to the

                 tape’s motion. There are therefore opportunities for the tape to stick, be
                 forced into the wrong position or the timing to be altered.
                 The important part of any video tape recorder tape path is the distance
                 between the supply side reel and the capstan. This is where all the
                 heads are and this is where the tape must be at the correct tension and
                 in the correct position. This length of tape should be as short as
                 possible, and should pass across as few items as possible.
                 Static heads, the cleaner, the drum, entrance and exit guides, the supply
                 side tension regulator and the capstan/pinch wheel are vital and
                 therefore always present. Designers will ensure that any other guides
                 will only be added to the design if they are absolutely necessary.
                 A perfect tape path does not need top touching or bottom touching
                 guides, or a rabbet on the lower drum. The tape would pass around the
                 various items on the mechadeck in exactly the correct position.
                 Designers calculate the angle of guides, drum, static heads, etc. so that
                 the tape runs smoothly through the tape path. Although it is impractical
                 to expect a perfect level of mechanical accuracy, the rabbet and any
                 guides touching the top or bottom of the tape should do so very lightly.
                 It is not very critical what happens to the tape after the pinch wheel and
                 capstan. The pinch wheel and capstan act as a wall, isolating the drum
                 and static heads from any minor wobbles in the tape afterwards.
                 Therefore the amount of tape, number of guides and other hardware is
                 not important.


The servo system
                 Modern video machines consist of a number of servo loops. Normally
                 one item is the master, and servo loops slave off the master.
                 When a video machine is playing back, the master is the drum. It obeys
                 the incoming reference, taking no regard of anything else, and spins at a
                 constant rate locked to the reference itself. The drums in early analogue
                 machines spin at frame rate, 25Hz for PAL (625 line) based machines
                 and 29.97Hz for NTSC (525 line) based machines. Later digital machine
                 drums spin at a multiple of frame rate.
                 The rest of the servo system slaves off the drum. The first servo loop to
                 consider uses signals from the drum and the control head and uses the
                 capstan as a control. Although the machine’s servo system will control
                 the capstan to pull the tape through at almost a constant rate, signals
                 from the drum and the control head inform the servo system of the
                 relative position of the tape and the spinning drum. By slightly altering
                 the speed of the capstan the servo system will ensure that the timing
                 between the pulses from the drum and the control head is correct, thus
                 ensuring that each playback head finds the beginning of each helical
                 track.
                 Another servo loop slaves off the capstan servo loop. This loop uses a
                 signal from the tension regulator to control the supply reel, trying to
                 maintain the tension around the drum at a constant predefined level. In
                 simpler machines this is done mechanically. In more complex machines
                 this is done electronically.


                   In some machines there is also a servo loop between a take-up tension
                   regulator and the take-up reel, to maintain the take-up tension.


Analogue video tape recorder signal processing
                   Video tape recorder signal processing can be divided into a number of
                   distinct areas. The first division is between record and playback
                   processing.

        The problems of recording to tape
                    The earliest pioneers of tape recording technology discovered that it
                    was impossible to record an audio or video signal directly to tape and
                    expect a reasonable playback signal. Two characteristics ensured that
                    the record process was not going to be straightforward: the basic
                    behaviour of inductors, and magnetic hysteresis.
                   When a record head records a signal the current applied to the head
                   generates a flux which magnetises the tape. The strength and direction
                   of magnetisation is directly related to the current. When the playback
                   head plays the tape back, current is generated at the output of the head
                   that is proportional to the rate of change of magnetisation in the tape.
                   This term ‘rate of change’ is crucial. If a high DC signal is recorded to
                   tape, it will magnetise the tape a lot, but nothing will be played back,
                   because the rate of change is zero. Conversely if a smaller high
                   frequency signal is recorded to tape a large high frequency signal will be
                   played back because the rate of change will be high.
                    This characteristic is evident when looking at the control track of most
                    professional VTRs. The control head records a 25Hz or 29.97Hz square
                    wave signal on tape. The resulting control track consists of positively
                    and negatively magnetised regions. The resulting playback signal is a
                    series of large negative and positive spikes, one for each negative and
                    positive transition.
                   This ‘rate of change’ characteristic makes the record/playback process
                   frequency dependent: it differentiates the record signal, and introduces
                   a phase shift between the record and playback sine waves.
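The rate-of-change behaviour can be sketched with a toy numerical model (illustrative only; the function name is invented for this example, and a simple difference stands in for the head's differentiation):

```python
def playback(magnetisation, dt=1.0):
    """Model an ideal playback head: the output voltage is proportional
    to the rate of change of tape magnetisation, approximated here by a
    simple difference between successive samples."""
    return [(magnetisation[i] - magnetisation[i - 1]) / dt
            for i in range(1, len(magnetisation))]

# A strong DC magnetisation plays back as nothing at all...
dc_out = playback([5.0] * 8)
# ...while a small but rapidly alternating magnetisation plays back strongly.
ac_out = playback([0.5 if i % 2 else -0.5 for i in range(8)])
```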
                   The second characteristic is hysteresis. This defines the ‘memory’
                   magnetic materials have: apply a magnetic flux to a magnetic material
                   and it will remember this by becoming magnetised.
                   The answer to these problems is modulation. Modulation is the process
                   of combining a low and high frequency signal together into one signal.
                   There are two types of modulation, amplitude modulation (AM) and
                   frequency modulation (FM). AM involves changing the amplitude of the
                   high frequency signal with the low frequency signal. FM involves
                   changing the frequency of the high frequency signal with the low
                   frequency signal.
                   AM is the easier modulation system to design, and was used in the first
                   attempts at modulating the video signal before recording it to tape.
                   However FM is more resilient and was chosen as the modulation of
                   choice for video tape recorders.
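The two schemes can be contrasted in a few lines (a sketch of the mathematics only, not of the modulator circuitry used in any machine; the function names are invented for illustration):

```python
import math

def am(baseband, carrier_hz, t):
    """Amplitude modulation: the carrier's amplitude follows the baseband."""
    return (1.0 + baseband(t)) * math.sin(2 * math.pi * carrier_hz * t)

def fm(baseband, carrier_hz, deviation_hz, t, steps=1000):
    """Frequency modulation: the carrier's instantaneous frequency follows
    the baseband. The phase is the running integral of the instantaneous
    frequency (approximated here by a simple sum)."""
    dt = t / steps
    phase = 0.0
    for i in range(steps):
        phase += 2 * math.pi * (carrier_hz + deviation_hz * baseband(i * dt)) * dt
    return math.sin(phase)
```

With a zero baseband both functions reduce to a plain carrier; feeding in a video-like baseband varies the amplitude in one case and the instantaneous frequency in the other.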




       Input processing

            Reference input selection
                 Another important part of the input circuitry is the reference input. The
                 machine should be able to play a tape back on its own, maintaining good
                 and consistent timing. It should also be able to lock to an incoming
                 reference when playing back, locking the entire playback process to the
                 incoming reference. The machine should also be able to lock to an
                 incoming reference while recording, or lock to the incoming video signal
                 it is recording.
                 Therefore every tape machine will include a precision oscillator and sync
                 pulse generator (SPG). This module can either free run to provide a
                 good reference for the machine, or it can be genlocked to either an
                 incoming reference signal or video input. Part of the input processing will
                 include a switch to select which input will be directed into the oscillator
                 and SPG.

            Video input processing
                 All video tape recorders have input circuitry. This is required to convert
                 the input video into a common form appropriate for recording on tape.
                 For instance, component video recorders require any video input to be in
                 component form before any final encoding or modulation can occur prior
                 to recording to tape. Therefore input circuits will include a composite
                 decoder, or S-video decoder, and a selection switch to allow the
                 operator to select which type of input to record.

            Input audio processing
                 As with video inputs, all video tape recorders will include switches,
                 equalisers and noise reduction encoders (Dolby for instance), and may
                 even provide microphone power, to convert and process the incoming audio.

            Tape encoding
                 The video signal needs processing prior to recording to tape. This will
                 include FM. It may also include pre-emphasis to improve the recorder’s
                 ability to capture sharp transitions and detail in the image. It may also
                 include a small amount of AM after the FM to reduce the possibility of
                 over modulation problems that sometimes manifest themselves as
                 bearding on the playback image.
                 The audio signal needs little further processing other than standard bias,
                 before being recorded to the longitudinal tracks on the tape.

            Signal transfer to the drum
                 Once a decision had been made to use a rotating drum to increase the
                 relative head to tape speed, a way was needed to transfer the video
                 signals onto the spinning drum and to the record heads. Wire connections
                 could hardly be used; they would very quickly tie themselves in knots
                 and wrench themselves free. Slip rings and brushes also presented
                 problems: it was impossible to maintain a good enough connection.



                   The answer lies in the rotary transformer. This operates in a similar way
                   to a standard transformer, with two windings (coils) sitting close to one
                   another. Current in the input coil produces a magnetic flux. As the
                   current changes the flux changes. The rate of change of this flux excites
                   a current in the output coil. If an AC signal is input to the input coil, an
                   AC output will appear at the output coil, albeit phase shifted.
                   Rotary transformers have one coil built into the static lower drum and
                   the other built into the spinning upper drum. An RF signal can be
                   transferred from the lower to upper drum during recording, and from the
                   upper to lower drum during playback.
                   Modern video tape recorders have many rotary transformers for
                   transferring more than one signal onto and off the upper drum. This is
                   essential in component video machines where there is a separate path
                   for the luminance and colour signals. A separate transformer is often
                   also used to transfer switching information to the upper drum, so that the
                   drum itself can switch record or playback signals between different
                   heads either as the drum rotates, or for multi-format machines where
                   different playback heads are used to play back tapes from different
                   formats.

        Output processing
                   One of the challenges facing designers of early tape recorders was how
                   to play the tape back with smooth consistent timing. The timing
                   requirements of a standard broadcast video signal are very accurate.
                   Video tape recorders are essentially mechanical. No matter how well the
                   tape machine is built, and no matter how good the servo system is, there
                   will still be a slight amount of mechanical wobble that will introduce huge
                   timing fluctuations compared to the timing accuracy required of
                   broadcast video signals. The answer lies in a clever piece of circuitry
                   called a timebase corrector which is explained in a separate section
                   below.

             Signal transfer from the drum
                   RF signals from the playback heads are transferred off the drum through
                   the rotary heads described above. Head switching is required for those
                   machines with a drum wrap of less than 360 degrees. Many machines
                   use an active wrap of 180 degrees. Two sets of playback heads are
                   used, 180 degrees apart. The machine will switch between heads at
                   exactly the correct point, and thus maintain a continuous video signal
                   from the tape.
                   This switching can be performed on or off the drum. Switching on the
                   drum means that fewer rotary transformers are required to transfer the
                   video signals from the upper to lower drums. However an extra rotary
                   transformer is required to transfer switching information to the upper
                   drum.
                   Switching off the drum removes the need to transfer switching
                   information to the upper drum, but increases the number of rotary
                   transformers required to transfer the video signals off the upper drum.




                 Once the signal is off the drum it is buffered and equalised. There may
                 also be an automatic gain control (AGC) to automatically correct small
                 irregularities in the amplitude of the playback RF signal.

            Output video and audio processing
                 The final piece of circuitry before the outputs processes the video and
                 audio signals to provide whatever outputs the machine is designed to
                 provide. Component analogue machines may include a composite
                 encoder for a composite output, or analogue to digital converters for
                 either digital audio or digital video outputs.

       The timebase corrector (TBC)
                 Sitting between the playback equalisers and the final output processing
                 is the TBC. A TBC evens out the irregularities in timing of the signal
                 coming from the helical tracks of a video tape recorder. All TBCs do this
                 by storing a certain amount of the signal and then playing it out at a
                 constant rate.
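The principle can be modelled as a simple first-in first-out buffer (a sketch; real TBCs work on digitised samples in a fixed-size store, and the class name here is invented for illustration):

```python
from collections import deque

class TimebaseCorrector:
    """Toy model of a TBC: video lines arrive with irregular timing and
    are buffered; they leave one per tick of a stable output clock, so
    the output timing no longer depends on the input timing."""
    def __init__(self):
        self.store = deque()

    def write(self, line):
        # Driven by the irregular off-tape clock.
        self.store.append(line)

    def read(self):
        # Driven by the stable reference clock.
        return self.store.popleft() if self.store else None

tbc = TimebaseCorrector()
for n in (1, 2, 3):                       # lines arriving in an uneven burst...
    tbc.write(f"line {n}")
steady = [tbc.read() for _ in range(3)]   # ...read out at a steady rate
```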

            Clock generation
                 An important part of any TBC is the ability to generate accurate clocks.
                 Clocks are used to write video into the store and to read it out. The write
                 clock needs to accurately follow the timing irregularities in the signal
                 coming from tape. The read clock needs to be locked to the machine’s
                 SPG, keeping constant, smooth timing.
                 Of the two clocks, the write clock presents the greatest challenge. A
                 horizontal sync detector sends the horizontal sync pulses off tape to a
                 timed monostable, which outputs a voltage depending on the rate of the
                 sync pulses. The detector may also include a window discriminator,
                 which will ignore any false horizontal syncs and the half line pulses
                 during the vertical interval.
                 The signal from the timed monostable is fed into a voltage controlled
                 oscillator (VCO). The VCO is designed to run at the nominal clock rate
                 when there is no control signal from the timed monostable.
                 Timing irregularities in the signal off tape will increase or decrease the
                 horizontal sync rate. This will cause the control voltage from the timed
                 monostable to increase or decrease, shifting the VCO frequency up or
                 down.
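The control loop can be sketched numerically (a toy model; the gain and voltage figures are illustrative, with the PAL line rate chosen as the nominal frequency, and the function names are invented for this example):

```python
def control_voltage(sync_interval_s, nominal_interval_s, volts_per_second=1.0e6):
    """Toy model of the timed monostable: the output voltage depends on
    how far the off-tape sync interval deviates from nominal."""
    return (nominal_interval_s - sync_interval_s) * volts_per_second

def vco_frequency(nominal_hz, gain_hz_per_volt, volts):
    """The VCO free-runs at its nominal rate with no control voltage and
    is pulled up or down in proportion to the voltage."""
    return nominal_hz + gain_hz_per_volt * volts

nominal_interval = 1.0 / 15625.0   # PAL line period (illustrative choice)
# Sync pulses arriving slightly early raise the control voltage, pulling
# the off-tape clock frequency up so that it tracks the tape:
fast = vco_frequency(15625.0, 10.0,
                     control_voltage(nominal_interval * 0.999, nominal_interval))
```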

            Charge coupled device (CCD) delay line TBC
                 The first TBCs used CCD delay lines. These half-analogue, half-digital
                 devices consist of a long line of cells, each of which can hold an
                 analogue charge. A clock input transfers all the charges one cell towards
                 the end of the line. The input is connected to the first cell and the output
                 to the last cell.
                 CCD delay lines cannot input and output at the same time. Therefore
                 two delay lines are used, one for writing and the other for reading. Each
                 is designed to store one line of video. One line later the delay lines are
                 switched. The one that was writing is now reading, and



                   vice versa. The clocks are also switched. The delay line that is writing
                   uses the write clock and the one that is reading uses the read clock.
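The alternation can be modelled with a two-element buffer (a sketch of the ping-pong arrangement only, not of any real CCD device; the class name is invented for illustration):

```python
class PingPongDelayLines:
    """Toy model of the paired delay lines: while one line of video is
    being written into one delay line, the previous line is being read
    out of the other; after each line the roles (and clocks) swap."""
    def __init__(self):
        self.cells = [None, None]
        self.writing = 0                 # index of the line being written

    def process(self, video_line):
        reading = 1 - self.writing
        out = self.cells[reading]        # read the previously stored line
        self.cells[self.writing] = video_line
        self.writing = reading           # swap roles for the next line
        return out

delay = PingPongDelayLines()
out = [delay.process(line) for line in ("A", "B", "C")]
# Each video line emerges one line after it went in.
```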

             Semiconductor TBC
                   All new TBC designs use semiconductor memory devices instead of
                   CCD delay lines. Semiconductor memory devices are totally digital.
                   They therefore require a digital input and give out a digital output. All
                   semiconductor memory TBCs used in analogue video recorders use
                   analogue to digital converters at the TBC input and digital to analogue
                   converters at the output.

             Dual TBC designs
                   All modern broadcast analogue video tape recorders record component
                   video, keeping the luminance and colour parts of the video signal
                   separate throughout the whole record playback process, even on tape.
                   Thus the luminance and colour playback signals experience their own
                   timing inconsistencies. Each signal must be timebase corrected
                   independently if quality is to be maintained.
                   A dual TBC has a separate horizontal sync detector, timed monostable
                   and VCO for luminance and colour. It also has two stores. The read
                   clock is the same for both luminance and colour.


Popular analogue video recording formats
                   This is by no means an exhaustive list. There are many analogue video
                   tape formats not mentioned here, that were only moderately successful
                   and others that were more of a failure.

        Quadruplex (1956)
                   Ampex introduced the Quadruplex tape format, commonly known as
                   Quad. Quad is a professional 4 head transverse scan composite format.
                   It uses spools of 2” tape. The transverse tracks were originally 10 mils
                   wide, 33 arc-minutes from vertical. The head wheel is just over 2” dia.
                   spinning at 14,400 rpm for the original NTSC machines.
                   Quad proved one of the most long lasting of video tape formats; the
                   Ampex VR-1000 was the first commercial video tape machine. There are still
                   archives of Quad tape, and it is still in use in a few places, although it
                   has been superseded for new recordings by other formats.

        U Matic (1970)
                   Developed by JVC, Matsushita and Sony in 1971, sometimes called
                   Type E. U Matic is a professional 2 head helical scan composite format.
                   It uses spools of ¾” (19mm) tape. Helical tracks are 84um wide and 4.95
                   degrees from horizontal. Drum is 110mm dia. spinning at 1800 rpm
                   (1500 rpm for PAL). U Matic will record 2 longitudinal audio tracks.
                   LTC was not designed in as a separate track to start with, but was given
                   a dedicated track under the helical tracks. This meant that LTC had to
                   be recorded first and could not be re-recorded without overwriting part
                   of the helical tracks. Provision for VITC was added later.
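The drum speeds quoted for the two-head helical formats in this section follow directly from the frame rate, since the drum lays down one frame (one field per head) per rotation. A quick check, using nominal frame rates:

```python
def drum_rpm(frames_per_second, rotations_per_frame=1):
    """Drum speed for a helical scan machine that records one frame per
    drum rotation (two heads, one field each, with a 180 degree wrap)."""
    return frames_per_second * rotations_per_frame * 60

pal = drum_rpm(25)     # PAL: 25 frames/s
ntsc = drum_rpm(30)    # NTSC: nominally 30 frames/s (exactly 30/1.001)
```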



                 U Matic was a very successful format because of its wide user base,
                 from high end broadcast to professional and industrial use. Although not
                 as long lasting as Quad it was probably more popular. It was eventually
                 available in lo-band, hi-band and higher quality SP forms.

       Betamax (1975)
                 Developed by Sony. Betamax is a domestic 2 head composite helical
                 scan format using the colour under system. It uses cassettes containing
                 ½” tape. Helical tracks are just over 30 um wide and 5.85 degrees from
                 horizontal. Drum is 74.487mm dia. spinning at 1800 rpm (1500 rpm for
                 PAL). Betamax will record 1 longitudinal audio track and no timecode.
                 Betamax was head to head with VHS in the late 1970s, but eventually
                 lost. Many reasons have been given for this: the reluctance of Sony to
                 licence the format, the reluctance of video rental firms to stock pre-
                 recorded Betamax tapes, and the shorter record times and fewer
                 features of Betamax machines.

       1” type C (1976)
                 Developed by Sony and Ampex. C is a professional helical scan
                 composite format. It uses spools of 1” tape. Helical tracks are 5.1 mils
                 wide, and almost flat at 2.5 degrees from horizontal. The format uses a
                 large drum of 132mm dia. spinning at 3600 rpm (3000 rpm for PAL). C
                 will record 3 longitudinal audio tracks (4 in Europe), with LTC normally
                 recorded on the last audio track.

       VHS (Video Home System) (1976)
                 Developed by JVC, adopted by many other manufacturers. VHS is a
                 domestic 2 head composite helical scan format using the colour under
                 system. It uses cassettes containing ½” tape. Helical tracks are 2.3 mils
                 wide in standard play mode, 1.15 mils wide in long play mode and 5.96
                 degrees from horizontal. Drum is 60.5mm dia. spinning at 1800 rpm
                 (1500 rpm for PAL). VHS will record 2 longitudinal audio tracks and no
                 timecode.
                 VHS was head to head with Betamax in the late 1970s, but eventually
                 won. VHS went on to become the most popular domestic and industrial
                 format.

       Video 2000 (1979)
                 Developed by Philips and Grundig. Video 2000 is a domestic 2 head
                 composite helical scan format using the colour under system. It uses
                 cassettes containing ½” tape. Helical tracks are 22.6 um wide and 15
                 degrees from horizontal. Drum is 65mm dia. spinning at 1500 rpm.
                 Video 2000 will record 2 longitudinal audio tracks and no
                 timecode.
                 In Europe Video 2000 was the ‘other domestic format’ while VHS and
                 Betamax battled for supremacy. It boasted automatic tracking, using
                 bimorphs similar to those used in professional machines, and dual sided
                 cassettes. However Video 2000 was never going to win against either
                 VHS or Betamax. While video rental firms were pushed to provide two



                   versions of each movie in their shops, VHS and Betamax, it was
                   inconceivable that they would supply three.

        Betacam (1982)
                   Developed by Sony, sometimes called Type L. Betacam is a
                   professional 4 head component helical scan format. It uses cassettes
                   containing ½” tape. Helical tracks are 86um wide, and +15.25 degree
                   azimuth, for luminance and 72um, and -15.25 degree azimuth, for colour
                   tracks, and 4.679 degrees from horizontal. Drum is 74mm dia. spinning
                    at 1800 rpm (1500 rpm for PAL). Betacam will record 2 longitudinal
                   audio tracks, LTC and VITC.
                   Betacam became a popular professional format using oxide tape similar
                   to that used by domestic Betamax. However Betacam was really just a
                   ‘rehearsal’ for the improved version, Betacam SP, which became a
                   workhorse professional and broadcast video tape format.

        8mm (1983) and Hi8 (1989)
                   Developed by a Japanese consortium. 8mm is a domestic 2 head
                   composite helical scan format using the colour under system. It uses
                    cassettes containing 8mm tape. Helical tracks are 20.6um wide and 4.88
                   degrees from horizontal. Drum is 1.6” dia. spinning at 1800 rpm (1500
                    rpm for PAL). 8mm will record 2 PCM audio channels and 2 AFM
                   audio channels and no timecode.
                    Hi8 is an enhancement of 8mm, developed by Sony, using metal
                    particle or metal evaporated tape.
                   Both 8mm and Hi8 have gained reasonable success as a domestic
                   camcorder tape format.

        Betacam SP (1986)
                    Developed by Sony, sometimes called Type L. An improvement over the
                    Betacam format, Betacam SP retains the same format dimensions but
                    uses higher FM frequencies and metal instead of oxide tape. Betacam SP
                   introduced 2 AFM audio tracks inserted into the colour helical track
                   signal, providing the format with 4 audio tracks altogether.
                    The BVW-75 and BVW-75P became workhorse machines within the
                    broadcast industry, with thousands of machines and millions of tapes
                    sold worldwide.

        M2 (1986)
                   Developed by Panasonic. M2 is a professional 4 head component helical
                   scan format. It uses cassettes containing ½” tape. Helical tracks are
                   44um wide for luminance and 36um for colour tracks, with a 15 degree
                    azimuth, and 4.29 degrees from horizontal. Drum is 76mm dia. spinning
                    at 1800 rpm (1500 rpm for PAL). M2 will record 2 longitudinal audio
                   tracks, 2 AFM audio tracks, LTC and VITC.
                   M2 was introduced as a competitor to Betacam SP and some
                    broadcasters adopted it as a standard. Although technically very similar
                    to Betacam SP, the machines gained a reputation for unreliability,
                    probably due more to


                  spare parts availability and service than to the machines’ inherent
                  reliability. M2 did not gain the universal acceptance that Betacam SP
                  enjoyed.

       S-VHS (1987)
                  Developed by JVC, adopted by many other manufacturers. S-VHS is an
                  enhancement of the VHS format with improved luminance bandwidth. It
                  gained popularity because of its compatibility with VHS.


Digital video tape recorders
                 Practical broadcast digital video recorders began to appear at the
                 beginning of the 1980’s with the publication of CCIR 601 in 1982 and
                  CCIR 656 in 1986. These two documents proposed a method of digitising
                  component video signals and conveying them in digital form over a
                 multicore cable. Sony designed the D1 video recorder specifically to
                 record CCIR-601 signals without any loss.
                  The original CCIR 601 document specified 8 bit samples. However the
                  CCIR 656 document also specified two spare data bits which were
                  reserved for ‘future development’. The industry grabbed these two spare
                  bits, using them as ½ and ¼ LSB fractional bits and increasing the
                  sample size to 10 bits.
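Because the two extra bits sit below the original least significant bit, an 8 bit sample maps cleanly into the 10 bit range. A sketch of the relationship (the function name is invented for illustration):

```python
def extend_to_ten_bits(sample_8bit, half_lsb=0, quarter_lsb=0):
    """Extend an 8 bit CCIR 601 sample to 10 bits by appending the two
    fractional (half and quarter LSB) bits below the original LSB."""
    return (sample_8bit << 2) | (half_lsb << 1) | quarter_lsb

# An 8 bit machine simply leaves the two fractional bits at zero.
ten_bit_white = extend_to_ten_bits(235)
```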
                  About the same time as the transition from 8 bits to 10 bits, there was a
                  transition from the original parallel, multicore cable method of conveying
                 CCIR 601 data to a serial version using standard 75 ohm coaxial cable
                 and BNC connectors.
                 Sony and other manufacturers, notably Panasonic and Ampex, followed
                 over the years by producing broadcast quality digital video recorders to
                 record either 8 or 10 bit CCIR 601 samples, either entirely transparently,
                 or with compression.
                 Although digital video recorders have gained almost universal
                 acceptance in broadcast, the domestic and industrial markets continue
                 to use analogue tape formats, due to the overwhelming use of VHS, and
                 the introduction of analogue formats like Hi-8 which have sufficient
                  quality for most people’s needs.
                 DV has gained wide acceptance as a camcorder standard for domestic
                 use. The lack of any domestic DV television recorders has helped to
                 keep VHS as the only practical home television recording format.
                  It is unlikely that there will ever be a de facto standard digital home tape
                  recording format. The imminent release of Blu-ray optical disc recorders
                  will certainly kill any chance of a manufacturer introducing one.

            The advantages of digital video tape recorders
                 Digital video recorders have a number of distinct advantages over
                 analogue machines. The first is the record transparency. A digital video
                 signal can retain all its quality through the record playback process. In
                 theory exactly the same digital data that is recorded to tape can play
                 back. Although this is not exactly true, it is certainly true that professional
                 digital video recorders allow video to be recorded, played back and re-
                 recorded many more times than is possible with analogue recorders.
                 This is important for editing and post production.


                   The second is robustness. Digital data can be protected with error
                   correction data far more easily than an analogue signal. Furthermore
                   digital data can be shuffled and scrambled before recording to tape. If
                   there is a large error on tape, either during recording or during playback,
                   due to, for instance, dust, the highly concentrated group of errors can be
                   diluted over a large amount of data, as a widely spread group of small
                   errors that can easily be corrected one by one.
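The effect of shuffling on a burst error can be demonstrated in a few lines (a sketch: the seed stands in for the format's fixed shuffle table, and the function names are invented for this example):

```python
import random

def interleave(data, seed=601):
    """Shuffle the data with a known, reproducible permutation before
    'recording' it to tape."""
    order = list(range(len(data)))
    random.Random(seed).shuffle(order)
    return [data[i] for i in order], order

def deinterleave(recorded, order):
    """Invert the permutation on playback."""
    out = [None] * len(order)
    for pos, i in enumerate(order):
        out[i] = recorded[pos]
    return out

data = list(range(20))
recorded, order = interleave(data)
recorded[5:10] = ["X"] * 5          # a burst error hits 5 adjacent symbols
damaged = deinterleave(recorded, order)
# After de-interleaving the 5 errors are scattered through the data,
# where per-symbol error correction can deal with them one by one.
error_positions = [i for i, v in enumerate(damaged) if v == "X"]
```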
                   Other advantages have become apparent over the years, as digital
                   video tape formats have developed, and with the introduction of
                   computers into the broadcast production chain.
                   Later digital video recording formats now offer long recording time, and
                   small tape sizes. They also offer the possibility of transferring digital data
                   directly to the IT world, without loss, where computer based non-linear
                   editors and effects processors can perform a wide range of previously
                   unavailable creative possibilities.

             Digital video recorder or digital data recorder
                    It is important to remember that no digital video recorder actually
                    records digital video or audio as such. The data is always processed,
                    scrambled, shuffled, sometimes compressed, and has extra error
                    correction data added. What is actually recorded to tape is just digital
                    data and bears very little resemblance to the original video and audio it
                    came from.
                    Manufacturers are now stripping the video and audio input and
                    output processing out of their digital video recorders to produce very
                    competent data recorders for the IT backup and archive markets.

             Digital video recorder mechadeck design
                    The requirements of a digital video recorder mechadeck are no
                    different from those of an analogue one. Digital video recorder
                   mechadecks differ more as a result of general developments in
                   mechadeck design rather than any special requirements. Most digital
                   video recorder mechadecks use a spinning upper drum and static lower
                   drum. They employ either M wrap or C wrap, and they all incorporate
                   sophisticated supply side tension regulation, and capstans on the exit
                   side of the drum.
                   A notable difference employed by the Sony Digital Betacam, D1 and D2
                    machines was the rotating mid drum assembly. The lower drum is static,
                    as normal, but these machines also have an upper drum fixed to the
                    lower drum, leaving a narrow slot between the two. A mid drum
                    assembly spins between the upper and lower drums, with the
                    record, playback and flying erase heads fixed to its circumference and
                   protruding through the slot to touch the tape. This technique is more
                   expensive but produces equal strain on the tape at every point round the
                   drum resulting in very straight helical tracks on tape.
                   From the start, broadcast digital video recorders have recorded audio as
                   digital data somewhere on the helical tracks. Although some digital
                   formats still retain a low quality longitudinal cue track, this development
                   has resulted in very high quality audio recording and the removal of


                 much of the mechadeck hardware required for analogue longitudinal
                 audio recording.
                  Some digital formats have even removed the need for a conventional
                  longitudinal control track, transferring all the servo lockup to the helical
                  tracks. These formats have only one longitudinal head on the
                  mechadeck, the full erase head.

            Digital video recorder channel coding
                 Analogue video recorders use FM as a method of coding the video
                 signal before recording it to tape, and decoding it on playback, to
                  overcome the problems of recording to magnetic tape. In general terms
                 this technique of coding and decoding is called channel coding. The tape
                 is the channel.
                 Digital video recorders cannot use FM. It is both inappropriate and
                 impossible considering the available bandwidth on tape and the required
                 recording bandwidth. Digital video recorders use a combination of Partial
                 Response type 4 (PR4 or PRIV) and Viterbi as a channel coding
                 scheme.
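The PR4 target itself is simple to state: each equalised playback sample is the recorded level minus the level two bit periods earlier. A sketch of that target response follows (the Viterbi detector that recovers the bits from these samples is omitted, and the function name is invented for illustration):

```python
def pr4_response(bits):
    """Toy Partial Response class 4 target: with recorded NRZ levels
    x[n] of +1/-1, the ideal equalised playback sample is x[n] - x[n-2]."""
    x = [1 if b else -1 for b in bits]
    return [x[n] - (x[n - 2] if n >= 2 else 0) for n in range(len(x))]

samples = pr4_response([1, 1, 0, 0, 1])
```

The three-level samples (+2, 0, -2 in the steady state) are what the Viterbi detector then resolves into the most likely recorded bit sequence.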


Popular digital video tape formats
       D1 (1987)
                 Developed by Sony. D1 is a professional digital 4 head component
                 helical scan format. It records 8 bit CCIR 601 video data with no
                 compression. It uses cassettes containing 19mm tape. Helical tracks are
                 40um wide and 5.4 degrees from horizontal. Drum is mm dia. spinning at
                 rpm ( rpm for NTSC). D1 will record 4 audio channels on the helical
                 tracks, and one on a longitudinal track. It will also record LTC and VITC.
                 D1 is expensive, both for machines and tape cassettes, but is used in
                 post production where quality is the prime concern.

       D2 (1989)
                 Developed by Sony. D2 is a professional digital 4 head composite helical
                 scan format. It records a digitised PAL or NTSC (depending on the
                 machine version) composite video signal with no compression. It uses
                 cassettes containing 19mm tape. Helical tracks are um wide and
                 degrees from horizontal. Drum is mm dia. spinning at rpm ( rpm for
                 NTSC). D2 will record 4 audio tracks on the helical tracks, and one on a
                 longitudinal track. It will also record LTC and VITC.
                 D2 is expensive, both for machines and tape cassettes, but is used in
                 post production where quality is the prime concern.

       Digital Betacam (1993)
                 Developed by Sony. Digital Betacam is a professional digital 4 head
                 component helical scan format. It records 10 bit CCIR 601 video data
                 with DCT based compression at just over 2:1. It uses cassettes
                 containing ½” tape. Helical tracks are 24um wide and 5 degrees from
                 horizontal. Drum is 80mm dia. spinning at 4500rpm ( 5400rpm for


159                                                  Sony Broadcast & Professional Europe
Part 15 – The video tape recorder

                   NTSC). Digital Betacam will record 4 audio tracks on the helical tracks,
                   and a low quality cue channel on a longitudinal track. It will also record
                   LTC and VITC.
                   Certain machines are capable of playing back Betacam and Betacam SP
                   tapes.
                   Digital Betacam is cheaper than D1 but offers indistinguishable image
                   quality and 10 bit sample recording. It is widely used in post production
                   where quality is the prime concern. However, the compression scheme is
                   closed and proprietary, and broadcasters are now looking to output
                   the digital stream directly from the tape machine.

        DV & Mini DV (1995)
                   A consortium of 10 companies jointly created DV, sometimes
                   called MiniDV. DV is a domestic digital 4 head component helical scan
                   format. It records intraframe 4:1:1 or 4:2:0 compressed video data with
                   5:1 compression ratio. It uses cassettes containing ¼” tape. Helical
                   tracks are 10um wide and 9.18 degrees from horizontal. Drum is
                   21.7mm dia. spinning at 9000rpm. DV will record 2 audio tracks on the
                   helical tracks. Timecode is recorded on helical track as data (not VITC).
                   DV has become the most popular domestic digital video tape format,
                   and is available from a wide range of manufacturers. Camcorders and
                   decks offer direct compressed data outputs via the IEEE1394 interface,
                   otherwise known as FireWire (Apple) and iLink (Sony). Software
                   companies also offer good support for DV, with drivers and plug-ins for
                   DV data input to graphics, editing, and rendering software.
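The drum figures quoted above translate directly into writing speed. A rough calculation (ignoring the much slower linear tape motion, which adds slightly to the true head-to-tape speed):

```python
import math

drum_dia_m = 0.0217             # 21.7mm drum
rev_per_s = 9000 / 60           # 9000rpm = 150 revolutions per second
head_speed = math.pi * drum_dia_m * rev_per_s
print(f"{head_speed:.1f} m/s")  # about 10.2 m/s past the heads
```

This is how a cassette moving at only a few centimetres per second through the machine can still support the bandwidth needed for digital video.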

        DVCPRO (1995)
                   Developed by Panasonic and based on the DV format. DVCPRO is a
                   professional digital 4 head component helical scan format. It records DV
                   data but with a wider track on metal particle tape to increase
                   robustness and quality. It uses cassettes containing ¼” tape. Helical
                   tracks are 18um wide and 9.18 degrees from horizontal with a +20.03
                   -19.97 degree azimuth. Drum is 21.7mm dia. spinning at 9000rpm.
                   DVCPRO will record 2 audio tracks on the helical tracks and 1
                   longitudinal cue track. It will also record LTC and VITC.
                   DVCPRO is the Panasonic professional DV format, and initially gained
                   wide acceptance due to its low price and compact design.

        DVCAM (1996)
                   Developed by Sony and based on the DV format. DVCAM is a
                   professional digital 4 head component helical scan format. It records DV
                   data. It uses cassettes containing ¼” tape. Helical tracks are 15um wide
                   and 9.18 degrees from horizontal with a +20.03 -19.97 degree azimuth.
                   Drum is 21.7mm dia. spinning at 9000rpm. DVCAM will record 2 audio
                   tracks on the helical tracks and 1 longitudinal cue track. It will also
                   record LTC and VITC.
                   DVCAM is the Sony professional DV format. Introduced after DVCPRO,
                   it lagged behind in popularity but is now beginning to gain widespread




                 support as an industrial format and for low budget television work.
                 Machines like the PD-150 have almost gained ‘classic’ status.

       Betacam SX (1996)
                 Developed by Sony using the Digital Betacam mechadeck. Betacam SX
                 is a professional digital 4 head component helical scan format. It records
                 8 bit CCIR 601 video data with MPEG 4:2:2P@ML based compression.
                 Betacam SX uses IB frame compression to maintain broadcast quality at
                 18Mbps and 10:1 compression ratio. It uses cassettes containing ½”
                 tape. Helical tracks are 22um wide and 5 degrees from horizontal with a
                 15.25 degree azimuth. Drum is 80mm dia. spinning at 2250rpm
                 ( 2700rpm for NTSC). Betacam SX will record 4 audio tracks on the
                 helical tracks. It will also record LTC and VITC.
                 Certain machines are capable of playing back Betacam and Betacam SP
                 tapes.
                 Sony introduced a hybrid Betacam SX machine combining a
                 conventional tape mechadeck with hard disks. Compressed video and
                 audio material could be transferred to and from the tape and disks. This
                 allowed for linear and non-linear editing in one unit. However the hybrid
                 machine proved too complex for many and was not widely adopted.
                 Betacam SX was introduced as a replacement to Betacam SP and has
                 comparable digital quality. Widely used as a news gathering format.
                 However, although the compressed digital stream is available at the
                 output for direct high speed transfer, the compression scheme was not
                 ratified, the standards authorities preferring 50Mbps data instead.
                 Sony responded with IMX.
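The quoted 10:1 figure can be sanity-checked against the uncompressed source rate. Taking 8 bit CCIR 601 4:2:2 active video as the reference (720 luma plus two sets of 360 chroma samples per line, 576 active lines, 25 frames per second), as a back-of-envelope estimate only:

```python
samples_per_line = 720 + 360 + 360          # Y + (R-Y) + (B-Y), 4:2:2
uncompressed_bps = samples_per_line * 576 * 25 * 8
ratio = uncompressed_bps / 18e6             # against the 18Mbps tape rate
print(f"{uncompressed_bps / 1e6:.0f} Mbit/s -> {ratio:.1f}:1")
```

This gives roughly 166 Mbit/s and a ratio just over 9:1, consistent with the quoted 10:1 once blanking and other overheads are counted.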

       Digital S (1996)
                 Developed by JVC and otherwise known as D9. Digital S is a
                 professional digital 4 head component helical scan format. It uses 4:2:2
                 sampling, like MPEG, making it technically better than DV and the same
                 as MicroMV. It uses cassettes containing ½” tape. Helical tracks are
                 20um wide. Digital S will record 4 audio tracks on the helical tracks and 2
                 longitudinal tracks. It will also record LTC and VITC.

       HDCAM (1997)
                 Developed by Sony and based on the Digital Betacam mechadeck.
                 HDCAM is a professional digital 4 head component helical scan format.
                 It records high definition video data with mild 3:2 compression. It uses
                 cassettes containing ½” tape. Helical tracks are 22um wide and 5
                 degrees from horizontal with a 15.25 degree azimuth. Drum is 80mm
                 dia. HDCAM will record 4 audio tracks on the helical tracks and one
                 longitudinal cue track. It will also record LTC and VITC.
                 HDCAM was introduced as an alternative to film, and thus records
                 progressive 24 fps (24P), but can be switched to a number of television
                 based recording methods. HDCAM is expensive and exclusive, but
                 offers very high quality recording.





                   Machines due for release about the time this book is published will
                   include uncompressed high definition machines and machines based on
                   the IMX mechadeck.

        DVCPRO 50 (1998)
                   Developed by Panasonic as an enhancement of the original DVCPRO
                   format, with 50 Mbps recorded data to comply with the requirements of
                   standards authorities. Machines can now be equipped with an IEEE1394
                   interface, allowing high speed transfer of 50Mbps DV data.

        IMX (2000)
                   Developed by Sony using a new design of mechadeck loosely based on
                   the Digital Betacam mechadeck. IMX is a professional digital 4 head
                   component helical scan format. It records 8 bit CCIR 601 video data with
                   MPEG 4:2:2P@ML I frame only based compression at 50 Mbps. It uses
                   cassettes containing ½” tape. Helical tracks are 22um wide and 5
                   degrees from horizontal with a 15.25 degree azimuth. Drum is 80mm
                   dia. spinning at 4500rpm ( 5400rpm for NTSC). IMX will record 8 audio
                   tracks on the helical tracks. It will also record LTC and VITC.
                   Certain machines are capable of playing back Betacam, Betacam SP and
                   Digital Betacam tapes.
                   IMX was introduced to comply with the standards authorities’
                   requirement for a 50Mbps I frame only MPEG video recorder. Although
                   recording 8 bit samples (a requirement of MPEG), IMX quality is
                   indistinguishable from Digital Betacam. However, unlike Digital Betacam,
                   the compressed stream is available at the output for direct transfer to
                   other machines, computer hard disk, or video servers.
                   A later modification, the E-VTR, allows video and audio material from
                   tape to be packaged and sent directly out on a computer network cable.

        Micro MV (2001)
                   Developed by Sony. Micro MV is a new format intended for the domestic
                   market. However it records true MPEG data on a tiny cassette, giving it
                   quality comparable to, if not better than, DV. Although technically superior
                   to DV, MicroMV has a lot of work to do to gain any ground on DV and
                   DV based formats like DVCPRO and DVCAM. Software manufacturers
                   have still to offer the kind of support for MicroMV that DV enjoys.







Part 14                                        Betacam and varieties
                 Variations of the original Betacam format have dominated the broadcast
                 industry for the last 20 years. The same basic scheme is now used in
                 analogue and digital video recorders, high definition recorders and data
                 recorders.
                 The first of these broadcast machines recorded to ½” oxide tape
                 encased in a cassette. Two cassette sizes were made available. The
                 smaller was exactly the same size as the domestic Betamax cassette and
                 was suitable for portable devices and short programme content. The
                 larger was about twice the size; it had a longer record time and was more
                 suitable for studio use.

       Mechanics
                 All Beta formats are true helical scan. The drum assembly is about
                 81mm diameter and consists of two halves. The lower half is static and
                 acts simply as a support. The upper half is about 15mm thick and spins
                 horizontally. The whole assembly leans at about 5 degrees writing tracks
                 that are only about 5 degrees from the tape’s direction.
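From these figures the basic track geometry follows. A 180 degree wrap round an 81mm drum gives a helical track of roughly 127mm, laid at only a few degrees to the tape's own direction. Illustrative arithmetic using the approximate values above:

```python
import math

drum_dia_mm = 81
track_len_mm = math.pi * drum_dia_mm / 2   # half a circumference (180 degrees)
print(f"{track_len_mm:.0f} mm")            # about 127 mm per helical track
```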
                 The tape wrap is a little over 180 degrees with the record/playback
                 heads fitted in pairs, on opposite sides of the drum. During recording
                 each head writes one track for 180 degrees. At the end of the track, just
                 before the head leaves contact with the tape, the record signal is
                 switched to the opposite head, which has just begun its 180 degrees
                 contact with the tape. This head then writes the next track.
                 The tape moves slowly through the machine, so that each track sits next
                 to the last.
                 With the original Betacam format each track carries one field of video.
                 Therefore PAL based machines have a drum that spins at 25Hz. Each
                 complete revolution of the drum records or plays back one complete
                 frame.
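The 25Hz figure follows directly from the field rate and the number of heads: two heads, each writing one field per half revolution, means one complete frame per revolution.

```python
fields_per_second = 50      # PAL field rate
heads = 2                   # one field written per head per half turn
drum_hz = fields_per_second / heads
print(f"{drum_hz} Hz = {drum_hz * 60} rpm")
```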
                 Four other tracks are recorded on the tape, all of them
                 longitudinal. These tracks are recorded and played back by
                 two sets of static heads placed in the tape path just before and just after
                 the drum itself.
                 Two tracks run along the top edge and two along the bottom. The top
                 two tracks are responsible for audio channels 1 and 2. Channel 1 is on
                 the inside (bottom track). This is intentional. If a single channel is
                 recorded it is likely to be channel 1 and is therefore less likely to be
                 corrupted if the edge of the tape is damaged.
                 The bottom two longitudinal tracks are responsible for control and
                 timecode. The top track carries control, the more important of
                 the two. If the bottom edge of the tape is damaged, timecode will be lost
                 and the machine will switch to the control track to keep timecode
                 counting until a good timecode signal can be found again.




163                                                 Sony Broadcast & Professional Europe
Part 16 – Betacam and varieties

        The standard Betacam tape path
                   Even though the internal structure of Betacam tape machines may differ
                   from one machine to another, the basic tape path is identical in all
                   machines. It has to be, so that a tape recorded in one
                   machine can be played back in another.

             Supply reel and tape cleaner
                   Tape exits from the left hand cassette reel, commonly called the supply
                   reel. Most machines have a tape cleaner. This is a blade, made either
                   from steel or artificial sapphire, that cleans any debris off the tape before
                   the machine attempts to record or play back the tape.

             Supply side tension regulator
                   The tape then passes around a tension regulator, often called the supply
                   side tension regulator. This important device measures the tape tension
                   on the whole supply side of the machine, including the drum, and sends
                   a signal back to the supply side reel to either let more tape out, if the
                   tension is too high, or hold the tape back, if the tension is too low.
                   It is very important that the tension around the drum is correct. Too tight
                   and both the tape and drum will wear out quickly. Too loose and head to
                   tape contact will be broken with resulting loss in recording and playing
                   back.
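The regulator and reel motor form a simple closed feedback loop. The sketch below uses hypothetical numbers and a hypothetical gain purely to show the principle: a positive correction lets tape out when tension is too high, and the loop settles at the target tension.

```python
def reel_correction(measured, target=1.0, gain=0.5):
    """Positive result: let more tape out. Negative: hold tape back."""
    return gain * (measured - target)

tension = 1.4                             # starts too tight
for _ in range(10):
    tension -= reel_correction(tension)   # letting tape out lowers tension
print(round(tension, 3))                  # settles back towards 1.0
```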

             Full erase and control heads
                   Now the tape passes across a static head responsible for erasing the
                   whole tape when a crash record is being performed. This head blasts
                   the tape with a strong alternating magnetic field, deleting anything
                   previously recorded on it. The tape then passes across another static
                   head responsible for recording and playing back the control track, the
                   control head.
                   The exact position of the erase head is not important. As long as it is
                   before the control head, it really does not matter. The position of the
                   control head is exact, and the same on every Betacam machine.

             The drum, entrance and exit guides
                   Now the tape runs across an entrance guide, round the drum for at least
                   180 degrees, and leaves the drum to run across an exit guide. The lower
                   drum has a small step or ledge milled in its surface, called the rabbet.
                   This rabbet is at the top of the lower drum at the entrance side and
                   slopes down towards the exit side. The tape rests on the rabbet. This,
                   and the slope of the whole drum, causes the helical motion of the drum
                   heads.
                   The entrance and exit guides have flanges that touch the top of the tape,
                   holding the tape down so that it enters and exits the drum at exactly the
                   correct point.
                   The upper drum spinning anti-clockwise at 25Hz (for PAL machines)
                   draws air between the tape and the drum itself. If the tape tension is



                 correct this makes a cushion of air between the two, and the heads
                 protrude from the surface of the drum, penetrating this cushion to touch
                 the tape.




Figure 85                                                      The basic Betacam tape path




             Audio/timecode head stack
                   The tape now passes across another pair of static heads. The first is
                   responsible for erasing the two longitudinal audio tracks and the
                   timecode track. The second is responsible for recording and playing
                   back the audio and timecode tracks.
                    Some Betacam machines have a third static head that is responsible for playing
                   back the audio and timecode signals while in record mode. This so-
                    called confidence mode gives the operator confidence that the audio and
                    timecode have been recorded correctly.
                   The exact position of the audio/timecode stack is critical to ensure
                    proper synchronisation between the video, audio and timecode.

             Capstan and pinch wheel
                    The tape now passes between the capstan and pinch wheel. The pinch
                   wheel is a small soft rubber cylinder. The capstan is a precision motor
                   with a shaft about 5mm diameter sticking out of the top of it.
                   Normally there is a gap between the pinch wheel and the capstan shaft.
                   However during recording or playing back a solenoid pushes the pinch
                   wheel against the capstan shaft squeezing the tape between the two. As
                   the capstan motor turns it pulls the tape through at a steady speed.
                    The control track signal is passed into the machine’s computer, where it is
                    processed and converted into control signals to adjust the speed of the
                    capstan motor so that the tape passes round the drum at the correct
                    speed and position.

             Take-up side tension regulator
                   The tape then makes its journey back into the cassette and onto the
                    take-up reel. Some machines include a take-up tension regulator, to
                   measure the take-up tension and pass a signal back to the take-up reel
                   motor, to ensure that the tape is reasonably loose but not sloppy.

             Other guides
                    The tape path will also include a series of other guides.
                   Some of these touch the top of the tape, some the bottom and some
                   neither.

        Definition of a good tape path
                   The important part of a ½” tape path is the distance between the supply
                   side reel and the capstan. This is where all the heads are and this is
                   where the tape must be at the correct tension and in the correct position.
                   This length of tape should be as short as possible, and should pass
                   across as few items as possible.
                    Any item like a guide, drum or static head changes the tape’s direction
                   and adds friction. Spinning guides, and the drum itself, are never
                   absolutely central and always add a slight wobble to the tape’s motion.
                    There are therefore opportunities for the tape to stick, to be forced into
                    the wrong position, or for its timing to be altered.



                 Static heads, the cleaner, the drum, entrance and exit guides, and the
                 supply side tension regulator are vital and cannot be removed. However
                 any other guides should only be added to the design if they are
                 absolutely necessary.
                 A good tape path design will therefore have a short length of tape
                 between the supply side reel and the capstan, and as few extra guides
                 as possible.
                 What happens to the tape after the capstan is not at all critical. The
                 amount of tape, and the number of guides, beyond this point is not important.

       Electronics
                 The basic electronics of a Betacam VTR consist of two halves: the
                 audio/video circuitry and the control circuitry. The audio/video
                 circuitry can be further divided into the record circuits and the
                 playback circuits, and finally each of these can be subdivided into
                 audio and video circuitry.

            Control circuitry
                 The control circuitry is responsible for taking control signals from the VTR
                 keyboard and from any remote control ports at the back of the machine,
                 and converting them into control signals for the mechanics.
                 Control circuitry is also responsible for recording the control and
                 timecode tracks. The control track has a 25Hz signal recorded to it. This
                 signal is used during playback to ensure that the tape is sitting in the
                 correct place relative to the spinning drum, so that the playback heads
                 are moving directly up the centre of the helical tracks on tape.

            Audio/video circuitry
                 If the composite input is used, the video recording circuitry decodes this
                 input into three component signals, Y, (R-Y) and (B-Y). It then combines
                 the two colour difference signals (R-Y) and (B-Y) into one signal using a
                 compressed time division multiplexing (CTDM) technique.
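The CTDM idea can be pictured as simple time compression: each colour difference line is squeezed to half its duration (shown here crudely, by keeping every other sample) so that both fit end-to-end within a single line period. This is a toy illustration of the principle, not the actual analogue circuit:

```python
def ctdm_line(r_y, b_y):
    """Time-compress each colour difference line 2:1 and concatenate them."""
    return r_y[::2] + b_y[::2]

line = ctdm_line(list(range(8)), list(range(8)))
print(line)   # two half-duration lines packed into one line's worth of samples
```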
                 The Y and CTDM signal are then emphasised and modulated onto FM
                 carriers. Special horizontal sync signals are added before the signals are
                 sent to the record heads for recording to tape.
                 The video playback circuitry takes the modulated Y and CTDM signals
                 from the tape, checks for tape drop-out and extracts a clock from the
                 horizontal sync signals.
                 The signals are then demodulated and de-emphasised. The clock is
                 used to perform timebase correction on the signals before the CTDM
                 signal is broken down into individual (R-Y) and (B-Y) signals.
                 If needed the resulting component signals are combined to make a
                 composite output. If not the Y, (R-Y) and (B-Y) signals are output as
                 component signals.
                 Record audio circuitry takes the two incoming analogue channels and
                 passes them through a Dolby noise reduction system before recording
                 them directly to the two longitudinal tracks at the top edge of the tape.



                   In playback the two audio signals off tape are passed through the same
                   Dolby noise reduction system and directly out to analogue connectors.

        Betacam video record techniques
                    The normal horizontal sync is replaced by a large tri-level pulse. The
                    CTDM colour signal has no horizontal sync pulse, so a large negative-
                    going pulse is added so that timebase correction can be performed on the
                    CTDM signal independently.
                   Prior to modulation the signals are emphasised. During playback the
                    signals will be de-emphasised again. This improves the signal to noise
                   ratio of the whole record/playback path.
                   Betacam uses frequency modulation to record the Y and CTDM signals
                    to tape. This is a form of channel coding. The signals themselves would
                    not play back correctly if they were recorded to tape without any
                    modulation.
                    The FM modulation signal has a straight line sloped frequency response
                    that drops to zero at about 15MHz. Again this improves the signal to
                   noise ratio.
                   The Y and CTDM signals are modulated onto their respective FM
                   carriers so that the most positive and negative excursions on these two
                    signals fit between two specific FM carrier frequencies.
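That mapping amounts to a straight-line relationship between the instantaneous signal level and the carrier frequency. The frequency limits below are hypothetical placeholders, not the real Betacam deviation figures (which also differ between the Y and CTDM channels):

```python
def fm_carrier_hz(level, lo_hz=6.8e6, hi_hz=8.8e6):
    """Map a 0..1 signal level linearly between two carrier frequencies."""
    return lo_hz + level * (hi_hz - lo_hz)

print(fm_carrier_hz(0.0) / 1e6, fm_carrier_hz(1.0) / 1e6)  # the two limits
print(fm_carrier_hz(0.5) / 1e6)                            # mid-level sits halfway
```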

        Betacam video playback techniques
                   The video playback circuitry is more complex than the record circuitry.
                    Even with all the precise mechanical engineering of a Betacam
                    mechadeck it is impossible to play back a perfectly timed signal.
                   Tape speed fluctuations, drum speed fluctuations, rotary head impact
                   and tape tension fluctuations all serve to alter the exact timing of the
                   video signal as it comes off the tape.
                   Dirt and debris can get between the tape and the rotary heads, as the
                   drum is spinning. The tape may also be damaged, old or just bad quality.
                   All these factors can prevent a signal from being recorded to tape, or
                   prevent a good signal on tape from being played back. This is called
                    drop-out.
                   The playback circuitry must correct any playback signal timing
                   fluctuations and somehow hide drop-out.
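One common way to hide drop-out is substitution: wherever playback flags a sample as unreliable, the corresponding sample from the previous line is used instead, since adjacent picture lines are usually very similar. A minimal sketch of that idea:

```python
def conceal(line, prev_line, dropout):
    """Replace samples flagged as drop-out with the previous line's samples."""
    return [p if bad else s for s, p, bad in zip(line, prev_line, dropout)]

print(conceal([10, 0, 12], [9, 11, 13], [False, True, False]))  # [10, 11, 12]
```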

             The timebase corrector
                    An important part of the playback circuitry of any Betacam VTR is the
                    timebase corrector. This piece of electronic circuitry smoothes out timing
                    fluctuations in the video playback signals.
                   The timebase corrector works by storing a small amount of the signal,
                   holding it for a short while, and then releasing it smoothly. This is a little
                   like using a bucket to provide a smooth water flow from a fluctuating
                    water source. As with the bucket analogy, a certain amount of the signal
                    must be stored to allow for fluctuations. Hopefully the fluctuations are
                   not so great as to either completely fill or empty the store.



                 Timebase correctors normally use semiconductor memory as the store.
                 Thus the analogue playback signal must be passed through an analogue
                 to digital converter before the store, and through a digital to analogue
                 converter afterwards.
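The bucket analogy maps directly onto a first-in first-out store. In the sketch below, samples arrive in irregular bursts (the jittery off-tape signal) but are read out one per tick of a steady output clock; as long as the store neither empties nor overflows, the output timing stays smooth.

```python
from collections import deque

fifo = deque()
bursts = [[1, 2], [], [3], [4, 5, 6], [], [7]]   # irregular arrivals from tape
output = []
for burst in bursts:
    fifo.extend(burst)                 # jittery writes into the store
    if fifo:
        output.append(fifo.popleft())  # one sample per steady output clock
print(output)                          # samples emerge in order, evenly clocked
```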




169                                              Sony Broadcast & Professional Europe
Part 17 – The special effects machine


Part 15                                       The video disk recorder
History
                    The idea of recording video to disk has existed for about as long as that
                    of recording to tape. However, whereas tape recording was technically
                    possible, with good recording times and playback quality, disk
                    technology was too crude to allow comparable machines to be built until
                    many years after tape recorders became popular.
                   Ampex designed and built a prototype disk recorder in 1965. It became a
                    commercial product in 1967. Called the HS-100, it used an open hard
                    disk and allowed just 30 seconds of analogue video to be recorded. It
                   was used mainly for instant and slow motion replay.
                    However it was not until the 1990s that disk recorders started to
                    appear that had a real practical use in broadcast. Abekas produced the
                   A-64 component video recorder which recorded parallel CCIR-601 digital
                   video on two large hard disks. The total storage time was a little under 1
                   minute, but it was full uncompressed broadcast quality.
                   At the time the Abekas disk recorders were one of only a few methods of
                   manipulating full uncompressed digital video in a non linear fashion, and
                   they became popular in post production and any high quality complex
                   short form editing.
                    The secret of Abekas’s success was their ability to modify the hard
                    disks available at the time to make them do things that they would not
                   otherwise be able to do. Hard disks generally had integrated controllers
                   mounted on them that acted as an interface between the outside world
                   and the disks themselves. Abekas bypassed this to allow the video data
                   to be recorded directly to the disk platters.
                   Abekas built a business based to a great extent on its ability to modify
                   standard hard disk technology, and successfully sold a variety of hard
                   disk digital video recorders to post production facilities, advertising
                   companies, and so on.
                   As hard disk technology improved, it became less necessary to bypass
                   the disk controller, and there was less need for the kind of specialist
                   techniques employed by Abekas. Companies could produce video disk
                   recorders using standard hard disks.
                   Hard disks also became cheaper. It became possible to offset the low
                   bandwidth and speed of standard hard disks by simply designing in more
                   than one drive. Systems became available that used an array of disks
                   both to spread the bandwidth and to increase the capacity.


Present day

                   Video disk recorders can now be grouped into two areas, although some
                   products straddle the grey area between them.



       Transmission servers
                  The first group comprises the video disk recorders intended specifically
                  for transmission. These machines trade quality for storage capacity. The
                  whole system may require enough storage capacity for a day, or several
                  days, of transmission, and may be required to supply many channels of
                  transmission. However, the quality need not be supreme, so these
                  servers can employ high compression ratios, with low data rates.
                  Reliability is very important in these servers. They need to work all the
                  time without fail. Therefore this kind of server often employs a high
                  degree of redundancy and hot-swappable elements such as hard disks,
                  controllers and power supplies.
                  Transmission servers also do not need to perform many ‘tricks’. They are
                  intended to play, and to perform some simple real-time switching, and
                  nothing else. So while remote control will need to be accurate and fast, it
                  will not need to be very versatile.

       Production servers
                 This kind of video disk recorder has opposite requirements to those of
                 transmission servers.
                 Production servers need only have capacity for the programme that is
                 being worked on – far lower than the requirements in transmission
                 servers. Generally the material held on them is not the master, but the
                 work-in-progress material. Regular backups are normally performed.
                  Absolute reliability is less important than in transmission servers.
                  However, production servers must maintain quality. As video is edited,
                  copied from one location to another, and generally fiddled with, there
                  must be no loss in quality. Any loss in quality would accumulate through
                  each edit generation until it became noticeable.
                  Production servers also need to be ‘clever’. Operators will need to be
                  able to perform complex edits, and to move, copy, and cut material on
                  the hard disk as though they were using a word processor. Remote
                  control ports need to be fast, accurate, and versatile.
                 This kind of server may also need to be accessed by several users at
                 the same time. Therefore there may need to be more than one remote
                 control port fitted.




Sony Broadcast & Professional Europe
Part 17 – The special effects machine



RAID technology
                   An important technology for video disk recorders is RAID (redundant
                   array of inexpensive, or independent, disks). A RAID system consists of
                   an array of disks and a RAID controller. The device (normally a
                   computer) that accesses the array sees one logical drive; the RAID
                   controller acts as an interface, organising and sending data backwards
                   and forwards between the device and the array.

        History
                   In 1988 David Patterson, Garth Gibson, and Randy Katz, at the
                   University of California, Berkeley, published a paper entitled “A Case for
                   Redundant Arrays of Inexpensive Disks”. This paper became the model
                   for disk array design. It specified five RAID levels, 1 to 5. These levels
                   define how the disks are logically arranged and how the data, and any
                   error correction codes, are spread across them.
                   Since the paper’s publication further levels have been defined. The
                   most important of these are levels 0 and 6, both of which have gained
                   general adoption in the industry. Level 7 was later added by Storage
                   Computer Corporation. Although proprietary, it has gained general
                   acceptance because the company is a major producer of RAID solutions
                   and level 7 offers some real benefits.
                   Other levels have been added more recently. These are all combinations
                   of the existing levels, and have been added largely for marketing reasons.

        Reasons for RAID
                   Disk arrays have the benefit of increasing capacity, which is where the
                   “inexpensive” part of RAID becomes important. The RAID controller
                   makes it appear that there is one big, expensive disk drive where there
                   are actually many smaller, cheaper drives.
                   Another important reason for RAID is to increase the performance of the
                   array. The “redundant” part of RAID is not important in this case. Indeed
                   in some forms of RAID there is actually no redundancy at all. Using an
                   array simply makes it look as though one very fast disk has been
                   installed.

        Redundancy
                   RAID was originally designed to ensure that extra data is written to the
                   disk array. This so-called redundant data can be used if any of the data
                   read back contains errors.
                   The simplest form of redundancy is a complete copy of the data on
                   another set of disks. However, it soon became obvious that a complete
                   copy did not have to be made. Instead, some kind of error correction
                   data could be written. These codes generally take up less space than the
                   original data. Three kinds of error correction code are used: parity
                   codes, dual parity codes and Hamming codes.





            Parity
                 Parity is a simple one-bit code applied to a byte, word, or block of data.
                 There are two kinds of parity, even and odd. With both kinds the number
                 of “1”s in the data is counted. With even parity the parity bit is chosen so
                 that the total number of “1”s, parity bit included, is even: the bit is a “1” if
                 the data contains an odd number of “1”s and a “0” if it contains an even
                 number. With odd parity the logic is reversed.
                 Parity codes are not very powerful. They can only detect errors affecting
                 an odd number of bits: if an even number of bits is wrong, the parity
                 code still appears correct.
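Even parity can be sketched in a few lines of Python (a toy illustration, not production error-checking code; the parity bit is the one that makes the total count of 1s even):

```python
def parity_bit(data: bytes, even: bool = True) -> int:
    """Return the parity bit for a block of data.

    With even parity the bit is chosen so that the total number of
    1-bits, parity bit included, is even; odd parity is the reverse.
    """
    ones = sum(bin(b).count("1") for b in data)
    bit = ones % 2              # 1 when the data holds an odd number of 1s
    return bit if even else bit ^ 1

# Two flipped bits cancel out, so the parity bit cannot see the error:
good = bytes([0b10100001])
bad = bytes([0b10100010])       # two bits differ from 'good'
assert parity_bit(good) == parity_bit(bad)
```

The final assertion demonstrates the weakness described above: a two-bit error leaves the parity unchanged and therefore goes undetected.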

            Hamming codes
                 Hamming codes (named after their inventor, Richard Hamming) are
                 multi-bit codes derived from the data that can be used to reconstruct the
                 data if it is read back with errors. A specific data pattern produces a
                 specific Hamming code.
                 Hamming codes are more powerful than parity codes. They can detect
                 and correct 1-bit errors, and detect 2-bit errors.
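The correction property can be shown with a toy Hamming(7,4) encoder and decoder (a Python sketch; real disk systems work on much larger words, but the principle is identical):

```python
def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]      # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword bit order (positions 1..7): p1 p2 d0 p3 d1 d2 d3
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(code: int) -> int:
    """Correct any single-bit error and return the 4 data bits."""
    bits = [(code >> i) & 1 for i in range(7)]     # positions 1..7
    # Each syndrome bit re-checks one parity group
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)          # 0 means no error
    if syndrome:
        bits[syndrome - 1] ^= 1                    # flip the bad bit
    d = [bits[2], bits[4], bits[5], bits[6]]
    return sum(b << i for i, b in enumerate(d))

# Flip any one bit of any codeword; the decoder recovers the data.
for nibble in range(16):
    code = hamming74_encode(nibble)
    for pos in range(7):
        assert hamming74_decode(code ^ (1 << pos)) == nibble
```

The syndrome is simply the position of the flipped bit, which is why one-bit errors can be corrected rather than merely detected.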

            Dual parity codes
                 Dual parity codes are an enhanced version of simple 1-bit parity codes.
                 With these codes many bytes, words or blocks of data are grouped to
                 give a two-dimensional array, and parity codes are generated in two
                 dimensions, for the array’s columns and rows.
                 Dual parity codes are more powerful than simple parity codes because
                 parity checks can be applied in two dimensions, detecting and correcting
                 a greater number of possible errors. They take up more space than
                 simple parity codes but a lot less space than Hamming codes.
                 Dual parity codes have sometimes been incorrectly called Reed-Solomon
                 codes. However, Reed-Solomon codes are not single-bit codes but
                 multi-bit codes, generated from a polynomial algorithm that gives very
                 powerful error correction capability. Dual parity codes cannot achieve
                 the error correction capability of Reed-Solomon codes, but they take up
                 far less space.
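The two-dimensional idea can be sketched in a few lines of Python (a toy bit grid, not a real array controller): a single flipped bit fails exactly one row check and one column check, and sits at their intersection.

```python
def dual_parity(rows):
    """Compute row-parity and column-parity bits for a grid of bits."""
    row_par = [sum(r) % 2 for r in rows]
    col_par = [sum(c) % 2 for c in zip(*rows)]
    return row_par, col_par

def correct_single_error(rows, row_par, col_par):
    """Locate and fix a single flipped bit using the two parity sets."""
    new_rp, new_cp = dual_parity(rows)
    bad_rows = [i for i, (a, b) in enumerate(zip(new_rp, row_par)) if a != b]
    bad_cols = [j for j, (a, b) in enumerate(zip(new_cp, col_par)) if a != b]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        r, c = bad_rows[0], bad_cols[0]
        rows[r][c] ^= 1        # the error sits at the intersection
    return rows

grid = [[1, 0, 1, 1],
        [0, 1, 1, 0],
        [1, 1, 0, 0]]
rp, cp = dual_parity(grid)
grid[1][2] ^= 1                         # inject a single-bit error
assert correct_single_error(grid, rp, cp)[1][2] == 1
```

This is why dual parity can correct errors that a single one-bit parity code can only detect.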

       RAID levels

            Level 0 (Disk striping)
                 Data is written in blocks, in sequence, to each disk in turn. Not really
                 RAID: level 0 is specifically designed for high performance, with
                 increased bandwidth, and has no redundancy.
                 Advantages : High bandwidth and performance. Simple design.
                 Disadvantages : Not a true RAID. No error correction (other than the
                 individual drives’ own internal error correction).
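The level 0 block-to-disk mapping is simple arithmetic (a Python sketch, assuming every disk holds the same number of equal-sized blocks):

```python
def stripe_map(block: int, num_disks: int):
    """RAID 0: logical block number -> (disk number, block on that disk)."""
    return block % num_disks, block // num_disks

# Consecutive logical blocks land on consecutive disks of a 3-disk array,
# so a large transfer keeps all three disks busy at once.
assert [stripe_map(i, 3) for i in range(4)] == [(0, 0), (1, 0), (2, 0), (0, 1)]
```

Spreading consecutive blocks across the disks is what gives level 0 its bandwidth: the transfers overlap instead of queuing on one drive.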

            Level 1 (Mirroring)
                 An exact copy of each disk is written to another disk. Mirroring is
                 sometimes achieved within the computer’s software for simplicity, but
                 this loads the computer’s resources; it is best achieved within the RAID
                 controller instead.



                   Advantages : Best error protection. Increased read performance. Good
                   for multi-user environments.
                   Disadvantages : Expensive (twice the hard disks).
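A minimal sketch of mirroring, with Python dictionaries standing in for the two disks (hypothetical, not any real controller's interface): every write is duplicated, and a read can be served from either copy, which is where the improved read performance and the error protection both come from.

```python
class Mirror:
    """RAID 1 sketch: writes go to both disks, reads can come from either."""

    def __init__(self):
        self.disks = [{}, {}]           # two dicts stand in for two disks

    def write(self, block, data):
        for disk in self.disks:         # every write is duplicated
            disk[block] = data

    def read(self, block, prefer=0):
        try:
            return self.disks[prefer][block]
        except KeyError:                # preferred copy lost: use the mirror
            return self.disks[1 - prefer][block]

m = Mirror()
m.write(0, b"frame")
m.disks[0].clear()                      # simulate one disk failing
assert m.read(0) == b"frame"
```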

             Level 2 (Bit level disk striping & Hamming code disks)
                   With this level data is striped at bit level across multiple disks. Hamming
                   codes are generated and written to a separate disk or disks.
                   Hamming codes are multi-bit error correction codes. Although more
                   powerful, they take up more space than simple 1-bit parity codes.
                   Therefore Hamming codes need more disk space, making level 2’s disk
                   requirements closer to those of level 1.
                   Level 2 is a dead RAID level. None of the RAID suppliers supports this
                   level. It is said that level 2 is not used because it requires special disks.
                   This argument comes from the fact that standard hard disks have their
                   own internal error correction, and that if you are using Hamming codes
                   the disks themselves need to be non-standard, with no internal error
                   correction.
                   However any RAID error correction is applied before the data is written
                   to disk. The disk’s internal error correction just adds another level of
                   security underneath anything applied by the RAID system.
                   In truth, level 2 is probably not used because the internal error correction
                   provided by present day disks, with their overall level of reliability, is
                   good enough that Hamming code protection supplied by the RAID
                   system would be more protection than is necessary considering the
                   extra capacity required. Simple parity codes are generally sufficient.
                   Advantages : Very good error protection.
                   Disadvantages : Dead. High ratio of ECC disks to data disks.

             Level 3 (Byte level disk striping & parity disks)
                   RAID level 3 stripes across disks at the byte level rather than at the bit
                   level, and 1-bit parity codes are written to a single dedicated parity disk.
                   It is similar to level 2, but parity codes are smaller than Hamming codes
                   and take up less disk space.
                   Advantages : Good error protection. Low ratio of ECC disks to data
                   disks. Good for large sequential file read/writes.
                   Disadvantages : Error protection not as powerful as levels 1 and 2.
                   Inefficient handling of small, scattered files. Single parity drive is a
                   performance bottleneck.
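The parity disk is simply the XOR of the data disks, and that is what makes rebuilding a failed drive possible: XOR-ing the parity with the surviving disks regenerates the missing one. A minimal Python sketch (small byte lists standing in for disk contents):

```python
from functools import reduce

def parity_disk(data_disks):
    """XOR the data disks together to form the parity disk."""
    return [reduce(lambda a, b: a ^ b, stripe) for stripe in zip(*data_disks)]

def rebuild(surviving_disks, parity):
    """Recover a failed disk by XOR-ing the parity with the survivors."""
    return [reduce(lambda a, b: a ^ b, stripe + (p,))
            for stripe, p in zip(zip(*surviving_disks), parity)]

disks = [[0x11, 0x22], [0x33, 0x44], [0x55, 0x66]]
p = parity_disk(disks)
lost = disks.pop(1)                     # disk 1 fails
assert rebuild(disks, p) == lost        # its contents come back intact
```

XOR over bytes is exactly per-bit even parity, so this is the same one-bit parity code described earlier, applied across the array rather than along it.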




                   Figure 86 – RAID levels 0 to 4. (The original diagram shows, for each
                   level, how the RAID controller distributes data across the array: data
                   blocks striped across the disks at level 0, mirrored data blocks at level
                   1, bit-striped data with a Hamming code generator feeding separate
                   code disks at level 2, and byte- or block-striped data with a parity
                   generator feeding parity disks at levels 3 and 4.)
                                                                                                                   1 .6 .1 1 .1 6 .2 1 . ..                                                                                  1 .6 .1 1 .1 6 .2 1 ...                                                                                                                                                                                                                                          1 ,5 ,9 ,1 3 ,1 7 ...
                                                                                                                                                                                                                                                                                                                                                             D a ta b lo c k s




                                                                                                                                                                                                                                                                                               C ache
                                                                                                                                                                                                                                                                                                                                                             1 , 5 ,9 ,1 3 ,1 7 ,2 1 ...                                                                                                                                                      D a ta b lo c k s
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          D a ta b lo c k s
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              1 to 1 0 2 4
                                                                                                                   D a ta b lo c k s                                                                                         D a ta b lo c k s                                                                                                                                                                                                                                                                                                                                                                            1 ,5 ,9 ,1 3 ,1 7 ...
                                                                                                                   2 ,7 ,1 2 ,1 7 ,2 2 ...                                                                                   2 ,7 ,1 2 ,1 7 ,2 2 ...
                                                                                                                   P a r ity c o d e s                                                                                       D u a l p a rity c o d e s
                                                                                                                                                                                                                             2 ,7 ,1 2 ,1 7 ,2 2 ...                                                                                                                                                                                                                                          D a ta b lo c k s
                                                                                                                   2 ,7 ,1 2 ,1 7 ,2 2 ...                                                                                                                                                                                                                                                                                                                                                    2 ,6 ,1 0 ,1 4 ,1 8 ...
                                                                                                                                                                                                                                                                                                                                                             D a ta b lo c k s
                                                                                                                                                                                                                                                                                                                                                             2 , 6 ,1 0 ,1 4 ,1 8 ,2 2 .. .
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              D a ta b lo c k s
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          D a ta b lo c k s
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              1 0 2 5 to 2 0 4 8
                                                                                                                                                                                                                             D a ta b lo c k s                                                                                                                                                                                                                                                                                                                                                                            2 ,6 ,1 0 ,1 4 ,1 8 ...
                                                                                                                   D a ta b lo c k s
                                                                                                                   3 ,8 ,1 3 ,1 8 ,2 3 ...                                                                                   3 ,8 ,1 3 ,1 8 ,2 3 ...
                                                                                                                   P a r ity c o d e s                                                                                       D u a l p a rity c o d e s                                                                                                                                                                                                                                       D a ta b lo c k s
                                                                                                                   3 ,8 ,1 3 ,1 8 ,2 8 ...                                                                                   3 ,8 ,1 3 ,1 8 ,2 8 ...                                                                                                                                                                                                                                          3 ,7 ,1 1 ,1 5 ,1 9 ...
                                                                                                                                                                                                                                                                                                                                                             D a ta b lo c k s
                                                                                                                                                                                                                                                                                                                                                             3 , 7 ,1 1 ,1 5 ,1 9 ,2 3 .. .




                                                                                                                                                                                                                                                                                        A s y n c h ro n o u s
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              D a ta b lo c k s




                                                                                                                                                                                                                                                                                       R A ID C o n tr o lle r
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              2 0 4 9 to 3 0 7 2                                                          D a ta b lo c k s




                                                                                                                                                                  R A ID C o n t r o lle r
                                                                                                                                                                                                                                                                                                                                                                                                                          R A ID C o n tr o lle r




                                                               R A ID C o n tr o lle r
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          3 ,7 ,1 1 ,1 5 ,1 9 ...
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                R A ID C o n tr o lle r




                                                                                                                   D a ta b lo c k s                                                                                         D a ta b lo c k s
                                                                                                                   4 ,9 ,1 4 ,1 9 ,2 4 ...                                                                                   4 ,9 ,1 4 ,1 9 ,2 4 ...
                                                                                                                   P a r it y c o d e s . ..                                                                                 D u a l p a rity c o d e s ...
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              D a ta b lo c k s
                                                                                                                   4 ,9 ,1 4 ,1 9 ,1 4 ....                                                                                  4 ,9 ,1 4 ,1 9 ,1 4 ....
                                                                                                                                                                                                                                                                                                                                                             D a ta b lo c k s                                                                                                                4 ,8 ,1 2 ,1 6 ,2 0 ...
                                                                                                                                                                                                                                                                                                                                                             4 , 8 ,1 2 ,1 6 ,2 0 ,2 4 .. .                                                                                                                                                   D a ta b lo c k s
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              3 0 7 3 to 4 0 9 6                                                          D a ta b lo c k s




                                                                                                                                                                                                                                                                                          P a rity
                                                                                                                                                                                                                             D a ta b lo c k s                                                                                                                                                                                                                                                                                                                                                                            4 ,8 ,1 2 ,1 6 ,2 0 ...




                                                                                                                                                                                                                                                                                       g e n e ra to r
                                                                                                                   D a ta b lo c k s




                                                             P a rity
                                                                                                                   5 ,1 0 ,1 5 ,2 0 ,2 5 ...                                                                                 5 ,1 0 ,1 5 ,2 0 ,2 5 ...




                                                                                                                                                              g e n e ra to r




                                                          g e n e ra to r
                                                                                                                                                                                                                             D u a l p a rity c o d e s




                                                                                                                                                             D u a l p a rity
                                                                                                                   P a r ity c o d e s
                                                                                                                   5 ,1 0 ,1 5 ,2 0 ,2 5 ...                                                                                 5 ,1 0 ,1 5 ,2 0 ,2 5 ...
                                                                                                                                                                                                                                                                                                                                                             P a r ity c o d e s                                                                      S t r ip e d s e t                                                M irro re d s e t
                                                                                                                                                                                                                                                                                                                                                             1 , 2 ,3 ,4 ,5 ,6 ,7 ,8 ...


                                                     Level 5                                                                                             Level 6                                                                                                          Level 7                                                                                                                           Level 10                                                                                                                                                 Level 10




                                                                                                                                                                                                                                                                                                                                                  C o m p u te r to                                                               C o m p u te r to
                                                                                                                                                                                                                                                                                                                                                                                    D a ta c o n n e c tio n                      R A ID c o n tr o lle r                              D a ta c o n n e c tio n
                                                                                                                                                                                                                                                                                                                                                  R A ID c o n tr o lle r                                                         c o n n e c tio n
                                                                                                                                                                                                                  D a ta       P a r it y                     D ual                 H a m m in g                                                  c o n n e c tio n
                                                                                                                                                                                                                               codes                          p a r ity             codes
                                                                                                                                                                                                                                                              codes




                                       RAID levels

Sony Broadcast & Professional Europe
Part 17 – The special effects machine

             Level 4 (Block level disk striping & parity disks)
                   RAID level 4 is similar to level 3 except that the data is striped across
                   the disks in blocks rather than in bits. Block reads and writes tend to
                   increase the overall performance over level 3 for large and sequential
                   file read and write operations.
                   Advantages : Good error protection. Low ratio of ECC disks to data
                   disks. Good for large sequential file read/writes.
                   Disadvantages : Error protection not as powerful as levels 1 and 2.
                   Inefficient handling of small files. Single parity drive is a performance
                   bottleneck. Seldom used.
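                    As an illustrative sketch (not part of the original text), block-level
                    striping with a dedicated parity disk can be modelled in a few lines of
                    Python. The disk numbering and the use of simple XOR parity are
                    assumptions made for the demonstration:

```python
from functools import reduce

def raid4_layout(num_blocks, data_disks):
    """Map logical block numbers to (disk, stripe) positions for RAID 4.

    Data is striped in blocks, round-robin, across the data disks;
    all parity goes to one dedicated disk (the bottleneck the text
    mentions). Disk numbering here is illustrative only.
    """
    layout = {}
    for block in range(num_blocks):
        disk = block % data_disks      # round-robin across data disks
        stripe = block // data_disks   # row within the array
        layout[block] = (disk, stripe)
    return layout

def parity_block(blocks):
    """XOR the data blocks of one stripe to form its parity block."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Blocks 0..7 over 4 data disks: block 5 lands on disk 1, stripe 1.
print(raid4_layout(8, 4)[5])   # (1, 1)

# XOR parity lets any one lost block be rebuilt from the survivors.
stripe = [b"\x0f", b"\xf0", b"\xaa"]
p = parity_block(stripe)
rebuilt = parity_block([p, stripe[0], stripe[2]])
print(rebuilt == stripe[1])    # True
```

                    The same XOR relation underlies levels 3, 4 and 5; only the placement of
                    the parity differs.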

             Level 5 (Block level & parity disk striping)
                   This is the most popular RAID level. It is very similar to level 4, except
                   that the parity codes are not written to a separate disk. All the parity
                   codes are striped across the same disks as the data. This improves the
                   performance over RAID levels 3 and 4 by removing the bottleneck
                   associated with the separate ECC disk or disks.
                   However, because the data and parity codes are scattered over all the
                   disks, it is difficult to rebuild a new drive if one of the drives in the array
                   fails.
                   Advantages : Higher performance than level 4. Good error protection.
                   Low ratio of ECC codes to data. Good for large sequential file
                   read/writes.
                   Disadvantages : Error protection not as powerful as levels 1 and 2.
                   Inefficient handling of small files. Complex controller design. Complex
                   rebuilds.
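                    The rotation of parity across the disks can be sketched as below. This is
                    a "left-symmetric" rotation, one common layout; real controllers vary, so
                    treat the exact formula as an assumption for illustration:

```python
def raid5_parity_disk(stripe, num_disks):
    """Which disk holds the parity block for a given stripe row.

    Parity walks backwards one disk per stripe, so no single disk
    becomes the bottleneck that RAID 4's dedicated parity drive is.
    """
    return (num_disks - 1 - stripe) % num_disks

# Five disks: parity sits on disk 4 for stripe 0, disk 3 for
# stripe 1, and so on, wrapping around after five stripes.
print([raid5_parity_disk(s, 5) for s in range(6)])  # [4, 3, 2, 1, 0, 4]
```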

             Level 6 (Block level & dual parity disk striping)
                   This is a similar scheme to level 5. It uses block level read and write
                   operations, and spreads the parity across the disks rather than writing all
                   the parity codes to a separate disk or disks. However level 6 processes
                   blocks of data and produces 2 parity codes, one set for the columns and
                   another for the rows.
                   This increase in the amount of parity code generation greatly increases
                   the complexity of the RAID controller, and decreases the overall
                   performance of the array.
                   Advantages : Very good error protection. Low ratio of ECC codes to
                   data (but not as low as levels 3-5). Good for large sequential file
                   read/writes.
                   Disadvantages : Poor performance due to dual parity calculations.
                   Inefficient handling of small files. Very complex controller design.
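                    The two independent parity sets described above, one code per row and one
                    per column of a grid of blocks, can be sketched as follows (a toy
                    illustration using XOR parity; block sizes and grid shape are assumed):

```python
from functools import reduce

def xor(blocks):
    """XOR a sequence of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def dual_parity(grid):
    """Row and column parity for a grid of data blocks.

    With both sets, a corrupted block can be located by the row and
    column whose parities fail, at the cost of computing
    (rows + cols) parity blocks instead of one.
    """
    row_p = [xor(row) for row in grid]
    col_p = [xor(col) for col in zip(*grid)]
    return row_p, col_p

grid = [[b"\x01", b"\x02"],
        [b"\x04", b"\x08"]]
rows, cols = dual_parity(grid)
print(rows[0])   # b'\x03' -- parity of the first row
```

                    This doubled parity workload is why the text notes the drop in overall
                    performance compared with level 5.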

             Level 7 (Asynchronous cached data & parity striping)
                   RAID Level 7 is a proprietary technology from Storage Computer
                   Corporation. It borrows ideas from levels 3 and 4, but incorporates a
                   large memory cache between the disk array and the controller. The



                  controller uses the cache to read and write data to the disks
                  asynchronously. This means that each disk in the array can operate
                  independently, greatly improving overall performance.
                  This extra workload means that RAID level 7 controllers are complex.
                  Power failures carry a greater risk of data loss, because data
                  spends time in the cache: if power is lost, any data not yet
                  written to disk is lost with it.
                 Advantages : Very good performance for any file type. Very good error
                 protection. Low ratio of ECC codes to data.
                 Disadvantages : Proprietary design. Very complex controller design.
                 Expensive. Possible loss of data during power failure.
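                  The write-back caching idea, and the power-failure risk it brings, can be
                  sketched with a toy model (this is a generic illustration of caching, not
                  the proprietary Storage Computer design):

```python
class CachedController:
    """Toy write-back cache: writes are acknowledged once they reach
    the cache and flushed to disk later, so the controller never makes
    the host wait for disk I/O -- but a power failure loses whatever
    is still unflushed.
    """
    def __init__(self):
        self.cache = {}   # block -> data awaiting flush
        self.disk = {}    # blocks safely on disk

    def write(self, block, data):
        self.cache[block] = data        # fast: returns before disk I/O

    def flush(self):
        self.disk.update(self.cache)    # background writes complete
        self.cache.clear()

    def power_failure(self):
        self.cache.clear()              # unflushed data is lost

ctrl = CachedController()
ctrl.write(0, b"safe")
ctrl.flush()
ctrl.write(1, b"at risk")
ctrl.power_failure()
print(sorted(ctrl.disk))   # [0] -- block 1 never reached the disk
```

                  Real cached controllers mitigate this with battery-backed cache memory,
                  but the underlying trade-off is the one the text describes.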

            Level 10 (Striped array with mirroring)
                  Level 10 is a combination of level 0 and level 1. This is not a true
                  standard and there are different definitions of exactly what level 10 is,
                  some of which are no different from level 0+1. The most consistent
                  definition is that part of the array is a striped set and part a mirrored
                  set. This combines the advantages of both sets.
                 Advantages : Simple design. Same error correction capability as level
                 1. Good performance. Low ratio of ECC codes to data.
                 Disadvantages : Not efficient. Not a rigid standard. No ECC data.

            Level 0+1 (Mirrored array with striping)
                  Level 0+1 is another non-standard array definition. However, level 0+1
                  definitions appear to be more consistent than those for level 10.
                  This level has two complete striped sets: it is a level 1 mirror of a
                  level 0 array. It therefore has some of the advantages and some of the
                  disadvantages of each level.
                  Level 0+1 is not efficient, but it has good error protection, like level
                  1. It also has good bandwidth, like level 0.
                 Advantages : Simple design. Same error correction capability as level
                 1. Good performance. Low ratio of ECC codes to data.
                 Disadvantages : Not efficient. Not a rigid standard. No ECC data.
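             The difference between the two hybrid layouts can be made concrete with a
             small sketch (disk numbering and set labels are assumptions for
             illustration):

```python
def raid10_disks(block, pairs):
    """RAID 10 (stripe of mirrors): stripe across mirror pairs.

    Each logical block lands on one pair of disks (2i, 2i+1) that
    mirror each other, so one failed disk degrades only its pair.
    """
    pair = block % pairs
    return (2 * pair, 2 * pair + 1)

def raid01_disks(block, disks_per_set):
    """RAID 0+1 (mirror of stripes): two complete striped sets.

    The same block lives at the same position in set A and set B,
    so a single disk failure degrades an entire striped set.
    """
    d = block % disks_per_set
    return (("A", d), ("B", d))

# Six disks either way; block 5 maps quite differently:
print(raid10_disks(5, 3))   # (4, 5)
print(raid01_disks(5, 3))   # (('A', 2), ('B', 2))
```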

            Other levels
                  There are a variety of other levels specified within the storage industry
                  that attempt to combine the advantages of the established levels.
                 Level 30 is a level 0 array where each stripe is a level 3 array. Level 50
                 is a level 0 array where each stripe is a level 5 array. Level 53 is a level
                 5 combined with a level 0. Strictly this level should be called level 50, but
                 this description has already been used.

            JBOD
                 JBOD (just a bunch of disks) is a name given to a group of hard disks
                 that have no particular array pattern. It is often applied to disk
                 systems in an attempt to give them the same standing as the RAID levels.

177                                                  Sony Broadcast & Professional Europe
Part 17 – The special effects machine

        Stripe size considerations
                   Most RAID solutions require that files be split into small pieces called
                   stripes. Each stripe is written to a different disk in the array. Stripe
                   size is important in defining the performance of the array.
                   Striping a file means that parts of the file can be written to and read
                   from multiple disks at the same time. This greatly increases the
                   performance of the array as a whole. However, the task of splitting files
                   into stripes takes time and effort, so it is important that the stripe
                   size is chosen correctly.
                   Ideally every file would be divided into exactly as many stripes as there
                   are disks in the stripe set. This gives the highest performance of all,
                   with the lowest splitting workload. However, stripe size is fixed while
                   file sizes vary.
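The fixed stripe size can be illustrated by splitting a file into stripes: every stripe is full except, usually, the last. A minimal sketch (the function name is illustrative):

```python
import math

def split_into_stripes(file_size, stripe_size):
    """Divide a file of file_size bytes into fixed-size stripes.

    All stripes are stripe_size bytes except the last, which holds
    whatever remains: stripe size is fixed, but file sizes are not.
    """
    count = math.ceil(file_size / stripe_size)
    sizes = [stripe_size] * (count - 1)
    sizes.append(file_size - stripe_size * (count - 1))
    return sizes

# A 200 kB file with a 64 kB stripe size gives three full stripes
# and one partial stripe.
print(split_into_stripes(200_000, 64_000))  # [64000, 64000, 64000, 8000]
```

How well typical file sizes divide into the fixed stripe size is exactly the trade-off explored in the next two sections.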

             Working with small stripes
                   A RAID system that uses small stripes works well for file systems with
                   many small files and few large files. The small files are easy to split
                   into stripes, and most files can still be divided, so they take advantage
                   of the increased performance of striping. Any large files take longer to
                   split and generate many stripes that must be handled, but there are few
                   of these files in this kind of file system.

             Working with large stripes
                   A system that uses large stripes works well for file systems with
                   predominantly large files and few small files. The large files split
                   easily into a few stripes that can be stored on the disk array quickly.
                   Any small files may well be smaller than the stripe size and therefore
                   cannot be split. There is no performance advantage from striping for
                   these files, but there are few of them in this file system, so overall
                   performance is still high.

        Software v hardware RAID
                   RAID systems are designed to perform all the RAID processing either as
                   a software solution or in dedicated hardware. Both solutions perform the
                   same function, just in a different place and in a slightly different way.

             RAID controller processing
                   The RAID controller performs three basic operations during writes.
                   Firstly, it splits the files into stripes. Secondly, it may calculate an
                   error correction code; this may be a simple parity code, a dual parity
                   code or a Hamming code. Thirdly, it arbitrates and controls data write
                   operations to each disk's interface.
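The simplest error correction code mentioned above, a single parity code (as used by levels 3 and 5), can be shown with a few lines of XOR arithmetic. A hedged sketch, not a real controller implementation:

```python
def xor_parity(stripes):
    """XOR equal-length stripes together, byte by byte."""
    result = bytes(len(stripes[0]))
    for s in stripes:
        result = bytes(a ^ b for a, b in zip(result, s))
    return result

data = [b"ABCD", b"1234", b"wxyz"]   # three data stripes
parity = xor_parity(data)            # written to the parity disk

# If the disk holding stripe 1 fails, XORing the survivors with the
# parity stripe recovers the lost data.
rebuilt = xor_parity([data[0], data[2], parity])
print(rebuilt)  # b'1234'
```

This XOR property is what lets a parity-protected array keep running, and later rebuild, after a single disk failure.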

             Software RAID
                   Software RAID performs all the file splitting and error correction
                   calculation in the computer’s processor using a small piece of software
                   resident in the computer’s memory. This processing robs processor




                 resource and is somewhat inefficient, but is simple to achieve, and
                 relatively simple to modify.
                 The calculation of error correction codes is generally very resource
                 hungry. Software RAID solutions are more popular for RAID levels 0 and
                 1, where no error correction codes are used.

            Hardware RAID
                 Hardware RAID performs all the controller processing in dedicated
                 hardware. This removes all the workload from the central processor,
                 allowing it to perform other tasks with greater efficiency. The dedicated
                 hardware is often actually a fast processor coupled to some dedicated
                 integrated firmware.
                 Hardware RAID controllers are faster than software RAID controllers, but
                 they are more expensive and are dedicated to specific RAID levels and
                 stripe sizes.


Realising RAID systems
                 When RAID was proposed by Randy Katz and others in 1988, the idea was
                 to allow large storage elements to be built from many small, cheap
                 drives, with redundancy built in to allow for errors and disk failure.

       Direct disk connection
                 The fastest method of writing to and reading from hard disks is to
                 communicate directly with the disk platters. Early video disk recorders
                 used direct access as the only way of achieving the bandwidth to and
                 from the disks needed for full bandwidth broadcast video.
                 However, this imposes extra loading on the computer that is using the
                 hard disks: all the sector and cylinder allocation had to be done by the
                 computer's central processor.
                 Direct access also imposes a risk of drive retirement. Disk drive
                 manufacturers often improve their products and alter the layout of
                 platters and their density. This may not change the overall disk
                 capacity, and therefore makes little difference to normal computer
                 systems. However, it does affect video disk recorders that rely on
                 direct access to the hard disk platters.

       Bus RAID connections
                 All hard disks are now built with some kind of integrated controller,
                 which handles all the sector and cylinder addressing. The computer is
                 presented with logical addressing that has nothing to do with the actual
                 sector and cylinder addressing the drive itself will use.
                 This removes the loading from the computer and also removes the risk
                 of disk retirement. However, integrated controllers add a layer between
                 the computer and the disk platters themselves and slow data transfer to
                 and from the disks. Hard disk technology had to evolve before hard disks
                 with integrated controllers could be used in video disk recorders.



                   The popularity of IDE/ATA, SCSI, and later, serial SCSI, fibre and
                   IEEE1394, coupled with the advances in hard disk technology made it
                   possible and easy to build RAID systems for broadcast quality video.

        SCSI
                   SCSI (small computer systems interface) was originally designed as a
                   method of connecting peripherals to a computer with a very fast data
                   link. Although SCSI has been popular as a method of connecting
                   scanners, and a number of other peripherals, the most popular
                   peripheral that uses SCSI is the hard disk.
                   The original version of SCSI allows for 8 nodes on the whole bus. One
                   of these must be the controller; the other 7 can be hard disk drives.
                   Later versions of SCSI allow for 16 devices (1 controller and 15 drives).
                   The length of the SCSI bus is also important, and different versions of
                   SCSI have different maximum bus lengths. Originally the SCSI bus could
                   not be any longer than about 6 metres. Now it is possible to achieve a
                   SCSI bus of about 25 metres, although this imposes other restrictions.
                   There are about 10 different flavours of SCSI in current use, and a few
                   new ones starting to appear. The table below shows these different
                   types.
                   Name is the common name given to this type of SCSI. In some cases
                   two names are in fact identical SCSI types.
                   Standard is the official SCSI standard to which this type refers.
                   Bus is the size of the SCSI bus, in bits. This is not the number of pins in
                   the connector, or conductors in the cable, although there is a general
                   relationship.
                   Rate is the data rate for this SCSI type. This is not the bus speed. Some
                   SCSI types use special tricks to multiply the bus speed to increase the
                   actual data throughput on the bus.
                   Connectors is a guide to the kind of connectors used in the various
                   SCSI types. There is no hard and fast rule to the connector type, and
                   each SCSI equipment manufacturer has their own preference. However
                   certain connectors are specifically for external, and others for internal,
                   use.
                   Cables is a guide to the kind of cables used in the various SCSI types.
                   Signal is the kind of electrical signal this type of SCSI uses. All SCSI
                   types use a pair of wires on the bus for each bit. The original electrical
                   signal was single ended (S), in which case one of the wires in each pair
                   was the data itself while the other was connected to ground. Later types
                   used differential connections to increase the possible bus length. Each
                   pair had a positive and a negative version of the signal. The original
                   differential type used relatively high voltages for the signals and was
                   called high voltage differential (H). The latest differential type is
                   low voltage differential (L), which allows longer bus lengths and
                   higher data rates.
                   Max devices is the maximum number of devices allowed on the SCSI
                   bus, including the controller. In some cases the maximum number of
                   devices allowed will depend on the signal type, and will also have an

Sony Training Services                                                                       180
Broadcast Fundamentals

                 effect on the maximum length of the bus. For example, Ultra2 SCSI
                 allows high voltage or low voltage differential connection, with low
                 voltage differential connections allowing either 2 or 8 devices
                 depending on the bus length.
                 Max cable is the maximum length of the whole SCSI bus, including the
                 cable itself, any internal connections and any circuit board tracks.
                 Different SCSI types allow for different bus lengths. For instance,
                 Ultra SCSI allows single ended or high voltage differential bus
                 connections. If 3 metres of single ended connection are used, only 4
                 devices can be connected; if the bus length is halved to 1.5 metres,
                 the number of devices doubles to 8.
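The trade-off between device count and bus length can be captured as a small lookup, using the single ended Ultra SCSI figures quoted above (illustrative only; the names are assumptions):

```python
# Single ended Ultra SCSI: maximum bus length depends on the number
# of attached devices (lengths in metres, figures as quoted above).
ULTRA_SCSI_SE = {4: 3.0, 8: 1.5}  # max devices -> max bus length

def max_bus_length(devices):
    """Return the longest permissible bus for a given device count,
    or None if the count is not supported at all."""
    for limit in sorted(ULTRA_SCSI_SE):
        if devices <= limit:
            return ULTRA_SCSI_SE[limit]
    return None

print(max_bus_length(4))   # 3.0 - fewer devices allow a longer bus
print(max_bus_length(8))   # 1.5 - a full bus halves the length
```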




181                                                Sony Broadcast & Professional Europe
Name                 Standard  Bus     Rate    Connectors  Cables  Signal  Max devices     Max cable (m)
                               (bits)  (MB/s)
SCSI-1               SCSI-1     8        5     1,2,3,4,5   1,2     S,H     8               S:6 H:25
Narrow SCSI          SCSI-1     8        5     1,2,3,4,5   1,2     S,H     8               S:6 H:25
Fast SCSI            SCSI-2     8       10     2,3,7       1,2,3   S,H     8               S:3 H:25
Fast Narrow SCSI     SCSI-2     8       10     2,3,7       1,2,3   S,H     8               S:3 H:25
Wide SCSI            SCSI-2    16       10     6,11        4       S,H     16              S:6 H:25
Fast & Wide SCSI     SCSI-2    16       20     6,11        4       S,H     16              S:3 H:25
Ultra SCSI           SCSI-3     8       20     2,3,7       1,2,3   S,H     S:4 or 8 H:8    S4:3 S8:1.5 H:25
Narrow Ultra SCSI    SCSI-3     8       20     2,3,7       1,2,3   S,H     S:4 or 8 H:8    S4:3 S8:1.5 H:25
Wide Ultra SCSI      SCSI-3    16       40     6,11        4       S,H     S:4 or 8 H:8    S4:3 S8:1.5 H:25
Ultra2 SCSI          SCSI-3     8       40     2,3,7       1,2,3   H,L     H:8 L:2 or 8    H:25 L2:25 L8:12
Narrow Ultra2 SCSI   SCSI-3     8       40     2,3,7       1,2,3   H,L     H:8 L:2 or 8    H:25 L2:25 L8:12
Wide Ultra2 SCSI     SCSI-3    16       80     6,11        4       H,L     H:16 L:2 or 16  H:25 L2:25 L16:12
Ultra3 SCSI          SCSI-3    16      160     6,11        4       L       2 or 16         L2:25 L16:12
Ultra160 SCSI        SCSI-3    16      160     6,11        4       L       2 or 16         L2:25 L16:12
Ultra160+ SCSI       SCSI-3    16      160     6,11        4       L       2 or 16         L2:25 L16:12
Ultra320 SCSI        SCSI-3    16      320     6,11        4       L       2 or 16         L2:25 L16:12
Ultra640 SCSI        SCSI-3    16      640     ?           ?       L       ?               ?



            SCSI connectors
                 1 : 25 pin D25 connector.
                 2 : 50 pin Centronics connector.
                 3 : 50 pin IDC connector.
                 4 : 50 pin D50 connector.
                 5 : 37 pin D37 connector.
                 6 : 68 pin HD68 connector (Ultra2 LVD & wide Ultra SCSI-3).
                 7 : 50 pin HD50 connector.
                 8 : 30 pin HDI30 connector (Apple).
                 9 : 50 pin HPCN50 connector.
                 10 : 60 pin HDCN60 connector.
                 11 : 68 pin VHDCI connector (Ultra SCSI 2 & 3).

            SCSI cables
                 1 : 50 conductor Centronics C50.
                 2 : 50 conductor ribbon cable.
                 3 : 50 conductor high density D50M cable.
                 4 : 68 conductor high density D68 cable.

       IDE/ATA
                 Early PC designs placed the hard disk on a card, integrating it with the
                 controller and providing a simple connection through one of the ISA
                 connectors on the motherboard. However, this was awkward because it
                 made the card large, heavy and cumbersome.
                 Western Digital produced a card that provided an interface between the
                 16 bit ISA bus connector on the motherboard and the drive. Controller
                 electronics were placed on the drive, providing a simple interface
                 without having to communicate directly with the disk platters, just as
                 SCSI does. This was called integrated drive electronics (IDE).
                 Because the PC design this was first used in was called the PC/AT, the
                 adaptor was called the AT attachment, or ATA.
                 Several other manufacturers saw the simplicity of the IDE/ATA design
                 for PCs. These computers did not need any of the complexity or
                 performance of SCSI, and IDE/ATA became the de facto standard for
                 fitting hard disks into PCs.
                 Every PC required a hard disk, some more than one. It became obvious
                 that the ATA controller should be fitted to the PC motherboard, rather
                 than wasting one of the ISA slots.
                 In the early 1990s the ATA packet interface (ATAPI) was introduced. This
                 enhancement allowed CD-ROM and tape drives to be integrated into



                   the same bus connection as the hard disks rather than connecting them
                   to some other proprietary interface.
                   Later versions of the IDE/ATA interface added direct memory access
                   (DMA) modes and, later, faster DMA modes called Ultra DMA (UDMA).
                   UDMA mode 2 allowed a data transfer rate of 33MB/s and was often
                   called Ultra DMA-33 or Ultra ATA-33, or simply UDMA-33 or ATA-33.
                   Later improvements to the bus appeared as UDMA-66 (ATA-66) and
                   UDMA-100 (ATA-100).
                   The performance of the whole drive/controller configuration drops to
                   that of the slowest item. Therefore it is important to ensure that both
                   the ATA controller and the drives are designed to operate at the
                   intended bus speed.
                   Most IDE drives can be connected via a standard 40 way ribbon cable.
                   However, any ATA controller and drive faster than UDMA-33 must use a
                   special 80 way cable. This cable is exactly the same overall size as
                   the 40 way cable and carries the same 40 signal connections. However,
                   every other wire in the 80 way ribbon cable is connected to ground and
                   separates the signal wires, improving performance.
                   Modern PCs integrate the ATA controller into the motherboard's chipset;
                   Intel's PCI chipsets, for example, now include the entire ATA interface.
                   Motherboards now include two 40 pin connectors, each providing one
                   IDE/ATA bus. Each bus allows for one master and one slave drive, so
                   four IDE devices can be fitted. It should be remembered that ATAPI
                   allows other drives, including CD-ROM and DVD drives, to be connected
                   to these IDE connectors. Most PCs therefore have all these connections
                   in use, with very few drive connections left available to build a RAID
                   from.
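The two-bus master/slave scheme maps drive positions mechanically; a sketch of the standard four-position layout (names are illustrative):

```python
def ide_position(drive_index):
    """Map drives 0-3 onto the two standard IDE/ATA buses.

    Each bus carries one master and one slave, so a standard
    motherboard tops out at four IDE/ATAPI devices.
    """
    if not 0 <= drive_index <= 3:
        raise ValueError("standard motherboards provide only four positions")
    bus = "primary" if drive_index < 2 else "secondary"
    role = "master" if drive_index % 2 == 0 else "slave"
    return bus, role

print(ide_position(0))  # ('primary', 'master')
print(ide_position(3))  # ('secondary', 'slave')
```

With a system disk and a CD-ROM or DVD drive typically occupying two of the four positions, only two remain for RAID drives, as the text notes.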

             IDE/ATA RAID
                   Manufacturers have now produced plug-in cards that have multiple ATA
                   connections, and an interface controller. These cards allow small RAID
                   systems to be built into the PC.
                   Some of these cards rely on software to perform the RAID control.
                   These are little more than the standard ATA controllers found
                   integrated on motherboards. They generally have only two 40 way
                   connectors, allowing four drives to be fitted. These RAID solutions
                   are somewhat restricted, and slow.
                   Other cards offer hardware RAID. Free from the constraints of the
                   normal two connector scheme, these cards often include four or more 40
                   way connectors, allowing more drives to be connected. They include
                   coprocessors and memory to perform proper interfacing and control.
                   Some motherboards include more ATA connectors than the normal two.
                   These are specifically designed to allow small RAID systems to be
                   added to the PC without the need for any plug-in card. Just as with
                   the plug-in solutions, motherboard integrated solutions can offer
                   either software or hardware based RAID.



                 Hardware based plug-in IDE RAID cards and motherboard integrated IDE
                 RAID controllers tend to use hardware based RAID levels 0, 1 and 5.
                 These are by far the most popular RAID levels for PC RAID designs.

       Serial SCSI and Fibre

            The argument for serial SCSI
                 To an engineer it may appear that a parallel interface should be faster
                 than a serial one. After all, if you can send data down a parallel bus
                 8, 10, 16, 20, 32 or even 64 bits at a time, as one big chunk each
                 clock cycle, this must surely be faster than sending it one bit at a
                 time. Surely, for instance, a serial bus would have to be clocked 16
                 times faster to match the data rate of a 16 bit parallel bus.
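The naive comparison in the paragraph above is simple arithmetic, ignoring the transmission effects the following paragraphs describe (function name is illustrative):

```python
def naive_serial_clock(parallel_clock_mhz, bus_width_bits):
    """Clock rate a serial link would naively need to match a parallel
    bus, ignoring cross-talk, skew and contact problems."""
    return parallel_clock_mhz * bus_width_bits

# A 16 bit bus clocked at 10 MHz seems to demand a 160 MHz serial link.
print(naive_serial_clock(10, 16))  # 160
```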
                 However, various transmission effects conspire to ensure that serial
                 SCSI connections in fact offer greater performance than most parallel
                 connections.
                 As the data rate increases, so does the cross-talk between one
                 conductor in a parallel bus and another. Each data bit becomes more
                 corrupted as the data rate is stepped up.
                 As the cable length increases, so does bit slippage. This is where the
                 data in one conductor gets to its destination before the data in
                 another conductor, simply because of the data pattern in each wire and
                 its corresponding delay. All the bits of one data word arrive at the
                 far end of the bus at slightly different times and become more
                 difficult to read.
                 With many pins in a parallel connector there is a greater chance that
                 any one pin will not make proper contact. This may invalidate
                 transmitted data.
                 Some parallel SCSI connections are still the fastest connection method.
                 Ultra 320 and the upcoming Ultra 640 versions of SCSI are still
                 parallel connections. However, most SCSI installations are based around
                 10, 20 or 40MB/s buses, where serial connections are better.

            Which flavour?
                 With the introduction of SCSI-3 the whole structure of the format was
                 altered, giving it a more layered and modular structure, with each
                 module communicating with others in the structure. Any one
                 implementation need not use all the modules, just enough to ensure
                 messages and data are properly transferred.
                 Although it appears complex, the new approach allows elements of the
                 interface to be altered while still maintaining the overall format.
                 An important concept for SCSI-3 is that the physical elements are now
                 separated from the command definitions. A range of different physical
                 modules exist, for traditional parallel connections as well as some serial
                 connections and network connections.





                   The important serial connections are Serial Storage Architecture (SSA),
                   Fibre Channel (FC) and IEEE1394.
                   IEEE1394 is more of a multimedia interface. Although popular as a
                   media interconnection, it is not popular in RAID design.
                   There is plenty of discussion on the relative merits of SSA and FC,
                   with supporters of each backing their own preference and decrying the
                   other. It appears that FC has more universal backing irrespective of
                   any technical merits (although it seems FC is also technically better).

             Fibre Channel
                   The main advantages of using FC in a RAID environment are :-
                   1 : FC is a network protocol; SCSI and ATA/IDE are not. This allows
                   drives to be addressed just like anything else on a computer network.
                   2 : Different connection topologies are possible: point to point (the
                   nearest to parallel SCSI and ATA/IDE), arbitrated loop and fabric.
                   3 : Huge connection distances compared to parallel SCSI or ATA/IDE.
                   4 : Each computer can access a huge number of disks, not the 2 per bus
                   of ATA/IDE or 16 of SCSI.
                   5 : Each disk can be accessed by a huge number of computers. This is
                   not possible with either ATA/IDE or SCSI, and allows easy file and
                   directory sharing.
                   6 : Easy connection.
                   7 : Hot swappable connection.
                   8 : Speed. As already discussed, there are some versions of SCSI that
                   are faster, but these are exotic and not universally supported at
                   present. FC is as fast as or faster than most common SCSI types and
                   every form of ATA/IDE.






Part 16                     Television receivers & monitors
The basic principle
                 A television receiver’s task is to turn the electrical video signal that is
                 connected to it back into a moving image. In most cases television
                 receivers include a tuner to accept signals from an aerial, cable or
                 satellite feed. These signals also include audio information, which the
                 television receiver will turn back into a recognisable audio signal.
                 In a domestic arena the television receiver is often referred to simply as
                 a “television”.
                 Monitors are designed for use in professional and broadcast situations.
                 They use the same technology as a television, although the input signal
                 possibilities are normally restricted to those used in professional and
                 broadcast work. They normally have no tuner and cannot be connected
                 to an aerial, cable or satellite feed. Monitors sometimes have
                 provision for audio, but its quality is not normally very good.
                 Monitor video quality covers a far greater range than that of
                 television receivers. At the lowest end of the quality scale are
                 mini-monitors intended for CCTV and surveillance. Compact design,
                 robustness and price tend to be the defining factors in these monitors.
                 Picture quality is not so important and is often poorer than for
                 domestic televisions.
                 Broadcast monitors have very good quality because they are used as a
                 reference in the broadcast station. They are expensive and often need
                 periodic alignment checks to retain their quality. Studio monitors are
                 graded according to their quality: a grade 1 monitor is the best, with
                 a tube selected for its definition and colourimetry and circuitry
                 designed with no compromise to picture quality.


Input signals
       Analogue inputs

            Terrestrial
                 By far the most popular input signal to a television is the UHF signal.
                 It is called terrestrial because the signal is sent over land as a
                 radio signal, using a transmitter mast at the broadcast station and an
                 aerial at the receiver. The radio frequency carrier holds a composite
                 video signal and its associated audio in a bandwidth of about 6MHz.
                 Televisions include radio frequency tuners that can tune to one of
                 these terrestrial signals and demodulate the video and audio signals,
                 turning them back into baseband analogue signals.

            Composite
                 Composite video is a baseband signal, and does not include audio.
                 Audio must be input separately. This presents a difficulty in a domestic
                 environment where simple installation is very important. The most



                   popular method of connecting composite video to a domestic television
                   in Europe is through a Scart connector. The Scart connector is a multi-
                   pin connector providing component, composite and audio connections in
                   one connector.
                   Much European domestic television peripheral equipment, like video
                   tape players, DVD players and video games machines, has Scart
                   connectors fitted, providing a simple way of connecting to domestic
                   television receivers.
                   Composite is common in broadcast stations and post-production.
                   Generally regarded as a low quality connection for monitors, compared
                   to analogue component and digital connections, composite is easy to
                   connect and still provides a reasonable monitoring image.

             Component
                    Component video connections are normally more difficult to make
                    because they involve three connectors. Add to this the fact that, like
                    composite, component is a baseband video signal with no audio, and
                    component is not an easy option for domestic use where simplicity is
                    paramount. However, analogue component provides a very high quality
                    connection for domestic purposes. A growing range of peripheral
                    equipment, such as video tape players, DVD players, and video games
                    machines, is being designed to output component signals through the
                    Scart connector, providing the home viewer with a relatively high quality
                    image.
                   Component is the preferred analogue video connection within the
                   broadcast station, and most studio monitors provide for component
                   analogue input, using three separate, usually BNC, connectors.

        Digital inputs

             Satellite/cable
                    Although an increasing number of people subscribe to satellite and
                    cable channels, it is still rare to find a television receiver
                   with a built-in satellite decoder. Most receivers use an external decoder
                   and the television takes a decoded signal from the decoder. This is often
                   a UHF signal, similar to a terrestrial signal, but could be either a
                   baseband composite or component analogue signal.

             Digital terrestrial
                   Like satellite and cable, digital terrestrial is still a rare option as an input
                   for television receivers. Most people subscribing to domestic digital
                   terrestrial broadcast services use an external decoder box with the
                   television taking a UHF, composite or component analogue signal.




Sony Training Services                                                                         188
Broadcast Fundamentals


Part 19                                                                  Timecode
A short history
       Splicing tape
                 Ever since video was first recorded there has been a need to edit video
                 material. At first this process consisted of little more than removing
                 errors and any material not required in the final recording. This was done
                 by simply cutting the video tape.
                 As technology progressed efforts were made to perform the same
                 editing tasks that were already in common use in film, i.e. making up a
                 complete program from bits and pieces of video joined together. As with
                 film this was done by simply cutting the required sections of video and
                 splicing them together. Edits were often badly made, causing picture
                 breakup and rolls at the edit points, and once the edit was made there
                 was no turning back.

       Electronic editing
                 A little later electronic editing was introduced. Rather than cutting the
                 video tape up into pieces, a copy would be made from the original tape
                 onto a new tape. By electronically organising how the various bits of
                 video material were copied from the original tape to the new one it
                 became possible to edit a complete program together without affecting
                 the original video tape.
                 It soon became necessary to index the tape in some way so that
                 particular edit points could easily be found. In the early 60’s Ampex
                 introduced a system called Editek. This system allowed the editor to
                 insert an audio tone into the audio channel of the video tape at the
                 chosen edit point. The recorder and player VTR would then use the tone
                 to switch at the edit point and perform the edit electronically.
                  Although it provided editors with a technical advantage over anything
                  that had gone before, Editek was still slow and not as easy to use as it
                  could have been. Furthermore, Editek was not frame accurate.

       Frame accuracy
                 Film uses sprocket holes to mechanically move it through the projector.
                 By linking the mechanics of the projector to a counter it was therefore
                 easy to get an accurate frame count as the film progressed through the
                 projector.
                 Early efforts were made to do the same thing with video tape, by
                 counting capstan rotations. This was however very inaccurate due to
                 slippage.
                  A little later the control track was used. As video uses a control track to
                  lock the player’s mechanics to the helical video tracks recorded on tape,
                  a simple counter could be attached to the control track servo system to
                  count frames in much the same way as one would count sprocket holes
                  in film.



                   However, if the film was damaged sprocket holes could be missed and
                   the overall count would slip. In much the same way, if video tape was
                   stopped and started, and wound backwards and forwards repeatedly,
                   control track pulses could be missed and the count would slip. Also if the
                   film or video tape was loaded somewhere in the middle, one would have
                   no idea how far from the beginning one was.
                   What was needed, not only for video tape editing, but also for film
                   editing, was a method of individually “marking” each frame with a unique
                   number.


Timecode
                  In the late 60’s timecode was introduced. Simply called ‘timecode’ at the
                  time, this coding method would later be called ‘longitudinal timecode’
                  when the alternative ‘vertical interval timecode’ was introduced some ten
                  years later.
                  Timecode provided editors with the system they had been waiting for: a
                  coding system designed to be read in both the forward and reverse
                  directions, at a wide range of tape speeds, with a numbering system
                  related to real time and a unique code for each and every video frame.
                   Computer based editing systems soon became popular, allowing edits to
                   be programmed as a number of timecode related points. The idea of an
                   edit list came about, and it became common to carry an edit list, either in
                   paper form or on disk, with video tapes, when moving from one edit suite
                   to another.
                  Timecode’s uses continue to grow, with very sophisticated computer-
                  controlled equipment using timecode and related video clips or
                  snapshots in versatile off-line editing suites.


Timecode’s basic structure
                   Timecode is represented as 8 digits split into 4 pairs of 2 digits each,
                   separated by colons, as shown in Figure 87. Reading from the left, the
                   pairs convey hours, minutes, seconds, and frames.




Figure 87                                                           Timecode’s basic structure
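The hours:minutes:seconds:frames layout can be modelled in a few lines of Python. This is an illustrative sketch only; the function names are our own and are not part of any timecode standard.

```python
def parse_timecode(tc: str):
    """Split an HH:MM:SS:FF timecode display into its four counts."""
    hours, minutes, seconds, frames = (int(pair) for pair in tc.split(":"))
    return hours, minutes, seconds, frames


def format_timecode(hours: int, minutes: int, seconds: int, frames: int) -> str:
    """Render the four counts back as the familiar 8-digit display."""
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"
```

For example, `parse_timecode("01:23:45:12")` returns `(1, 23, 45, 12)`.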





                     Timecode gives a 24 hour count, longer than any single piece of video is
                     ever likely to last. The count can also be set to the time of day, allowing
                     the time a video recording was made to be recorded on tape as well.
                     Both LTC and VITC are conveyed and recorded on tape as a serial data
                     stream. This data stream is a collection of binary bits, 80 bits for LTC
                     and 90 bits for VITC. Groups of these bits define various elements of the
                     timecode.

            Timecode address bits (BCD)




Figure 88                                                                   Binary coded decimal

                     There are 26 address bits separated into 8 BCD (binary coded decimal)
                     groups, of either 2, 3 or 4 bits each. Each BCD group defines either the
                     tens or units digit of the hours, minutes, seconds or frames count.
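The 26-bit total can be checked by adding up the group widths. The sketch below packs the eight digits into their BCD groups; the helper name and the least-significant-bit-first ordering are illustrative assumptions, not taken from the standard.

```python
# Width in bits of each BCD address group, in transmission order
BCD_GROUPS = [
    ("frames units", 4), ("frames tens", 2),
    ("seconds units", 4), ("seconds tens", 3),
    ("minutes units", 4), ("minutes tens", 3),
    ("hours units", 4), ("hours tens", 2),
]


def encode_bcd_groups(hours, minutes, seconds, frames):
    """Pack the eight timecode digits into their BCD groups, LSB first."""
    digits = [frames % 10, frames // 10,
              seconds % 10, seconds // 10,
              minutes % 10, minutes // 10,
              hours % 10, hours // 10]
    bits = []
    for digit, (_name, width) in zip(digits, BCD_GROUPS):
        # Each digit is emitted least significant bit first,
        # zero padded to the width of its group
        bits.extend((digit >> i) & 1 for i in range(width))
    return bits
```

The widths 4+2+4+3+4+3+4+2 sum to the 26 address bits described above.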

            User bits (binary groups)
                     There are 32 user bits separated into 8 groups of 4 bits each. These
                     groups can be used in any way the user sees fit, can be specified to
                     comply with the 7 and 8 bit standard ISO character sets, or can define
                     another timecode value, which can be the same as, or different from, the
                     one defined by the timecode address bits.
                     Thus by using the user bits as timecode, LTC or VITC can hold 2
                     unrelated timecode counts.

            Sync bits
                     In LTC there are 16 sync bits placed as one group (word) at the end of
                     each code. They define the end of each code, so that an LTC timecode
                     reader can find the beginning of the next one. They also define the tape
                     direction, because the sync word reads differently in the forward
                     direction than in the reverse direction.
                     In VITC there are 18 sync bits separated into 9 pairs distributed
                     throughout the code, one pair with each group.




        Flags
                   There are 6 special single bit flags. They are placed in the bits ‘saved’ by
                   the 2 and 3 bit BCD timecode address groups, as explained in the
                   timecode address bits section above.
                   The flags are defined as follows :-

             Drop frame flag
                   Used to identify if the timecode increments according to the NTSC drop
                   frame counting method. See page 131.
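As a reminder of the counting method referred to here: in NTSC drop frame mode, frame numbers 00 and 01 are skipped at the start of every minute, except for minutes divisible by ten. A minimal sketch, with an illustrative helper name of our own:

```python
def next_drop_frame(hours, minutes, seconds, frames):
    """Increment an NTSC drop frame timecode by one frame.

    Frame numbers 00 and 01 are skipped at the start of every
    minute, except minutes divisible by ten."""
    frames += 1
    if frames == 30:
        frames, seconds = 0, seconds + 1
        if seconds == 60:
            seconds, minutes = 0, minutes + 1
            if minutes == 60:
                minutes, hours = 0, (hours + 1) % 24
            if minutes % 10 != 0:
                frames = 2  # drop frame numbers 00 and 01 in this minute
    return hours, minutes, seconds, frames
```

For example, the frame after 00:00:59:29 in drop frame mode is 00:01:00:02, while the frame after 00:09:59:29 is 00:10:00:00.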

             Colour frame flag
                   Used to identify if the timecode is related to composite video material, or
                   component video material that has been decoded from composite video
                    material. The exact field in the colour frame sequence is found by
                    calculation, on the basis that timecode 00:00:00:01 is the first field of the
                    colour frame sequence.

             Phase correction flag (LTC only)
                    Used to ‘switch’ the LTC phase if there is an odd number of ‘1’s in the
                    complete code. This makes sure that each code starts low at the
                    beginning of each video frame.
                    (In VITC this flag is used as a field mark flag.)

             Binary group flags
                   Used to define how the user binary group bits are to be used as shown
                   in the table below.
                         Binary group flags                     Function
                         2       1        0
                         0       0        0            Character set not specified
                         0       0        1               Eight bit character set
                         0       1        0                    Unassigned
                         0       1        1                    Unassigned
                         1       0        0                     Page/Line
                         1       0        1                    Unassigned
                         1       1        0                    Unassigned
                         1       1        1                    Unassigned


                    The first state, with all bits at ‘0’, specifies that all the user bit groups are
                    undefined and can be used in any way the user sees fit.
                    The second state specifies that the user bit binary groups are taken in
                    pairs, giving either 7 or 8 bit groups that are used to specify an ISO
                    character.
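The flag combinations in the table above reduce to a simple lookup. The sketch below uses illustrative names of our own:

```python
# Binary group flag combinations (flag 2, flag 1, flag 0) -> function
BINARY_GROUP_FLAGS = {
    (0, 0, 0): "Character set not specified",
    (0, 0, 1): "Eight bit character set",
    (1, 0, 0): "Page/Line",
}


def binary_group_use(flag2, flag1, flag0):
    """Look up how the user bit binary groups are to be interpreted;
    any combination not in the table is unassigned."""
    return BINARY_GROUP_FLAGS.get((flag2, flag1, flag0), "Unassigned")
```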




            Field mark flag (VITC only)
                  VITC is field sensitive, unlike LTC, which is only frame sensitive. This flag
                  is used to define the field: for field 1 it is ‘0’, for field 2 it is ‘1’. For NTSC
                  and PAL based video material this flag is ‘0’ for odd fields and ‘1’ for
                  even fields.
                  (In LTC this flag is used as a phase correction flag.)

            Unassigned timecode flags
                  There are two unassigned flags in timecode. These have been left for
                  future expansion, in case a new technology is devised that requires
                  more of timecode than is presently provided.







Longitudinal timecode




Figure 89                                                          The longitudinal timecode head

                     LTC (longitudinal timecode) was the first timecode to be proposed and
                     extensively used, during the latter part of the 1960’s. It uses a linear, or
                     longitudinal, track running along the edge of the video tape. The actual
                     position on tape varies from one standard to another. C format, for
                     instance, ‘borrows’ audio track 3 along the bottom edge of the tape for
                     timecode. U-Matic machines have an extra track placed at the bottom
                     end of the helical track for timecode.
                     ½” tape formats like Betacam, Betacam SX, Digital Betacam, IMX and
                     HDCAM place LTC along the bottom edge of the tape, below the control
                     track.
                     LTC has the advantage that it can be read at high tape speeds, when
                     VITC cannot be read. However it has the disadvantage that it cannot be
                     read at zero tape speed (stop mode), because the tape is no longer
                     moving past the longitudinal timecode head.

            LTC signal structure
                    LTC consists of 80 bits of data recorded serially on tape beginning at the
                    same time as line 5 of either the 525 or 625 line sequence is being
                    written to the helical tracks on tape.





                 (There is an obvious physical displacement between these two points on
                 tape, but this displacement is the same within each tape format, and
                 therefore is not a problem.)




Figure 90                                                          The longitudinal timecode signal
                     (The figure shows the 80 LTC bits laid out from bit 0 to bit 79: the
                     frames, seconds, minutes and hours counts, units then tens, alternate
                     with user binary groups 1 to 8, and the synchronisation word occupies
                     the final bits.)




   Bit(s)     Use (525)                       Use (625)
   0-3        Frame units bits 0-3            Frame units bits 0-3
   4-7        User group 1                    User group 1
   8-9        Frame tens bits 0-1             Frame tens bits 0-1
   10         Drop frame flag                 Unassigned (set to 0)
   11         Colour frame flag               Colour frame flag
   12-15      User group 2                    User group 2
   16-19      Seconds units bits 0-3          Seconds units bits 0-3
   20-23      User group 3                    User group 3
   24-26      Seconds tens bits 0-2           Seconds tens bits 0-2
   27         Phase correction flag           Binary group flag 0
   28-31      User group 4                    User group 4
   32-35      Minutes units bits 0-3          Minutes units bits 0-3
   36-39      User group 5                    User group 5
   40-42      Minutes tens bits 0-2           Minutes tens bits 0-2
   43         Binary group flag 0             Binary group flag 1
   44-47      User group 6                    User group 6
   48-51      Hours units bits 0-3            Hours units bits 0-3
   52-55      User group 7                    User group 7
   56-57      Hours tens bits 0-1             Hours tens bits 0-1
   58         Binary group flag 1             Binary group flag 2
   59         Binary group flag 2             Phase correction flag
   60-63      User group 8                    User group 8
   64-65      Sync word (set to 0)            Sync word (set to 0)
   66-77      Sync word (set to 1)            Sync word (set to 1)
   78         Sync word (set to 0)            Sync word (set to 0)
   79         Sync word (set to 1)            Sync word (set to 1)

Figure 91                                                                        LTC bits



                     LTC is recorded as simple polarised regions on tape according to the bi-
                     phase mark channel coding method, which is explained in the bi-phase
                     mark coding section later in this part.
                     The bi-phase mark signal must obey certain criteria, which are outlined
                     in Figure 92.




Figure 92                                                      Longitudinal timecode signal detail



                     The 80 bits are evenly spaced over the whole frame. They are separated
                     into groups of bits responsible for the timecode itself, user binary groups,
                     flags and syncs. The usage of these groups is described earlier in this
                     part. The LTC signal structure is shown in Figures 90 and 91.
            LTC sync bits
                     LTC contains 16 sync bits at the end of each code. These bits have a
                     particular sequence, ‘0011111111111101’. The 12 bits in the middle of
                     the sync word are all ‘1’s. Because the timecode groups are all BCD,
                     this particular pattern of 12 bits cannot occur anywhere else in the LTC
                     code. Thus an LTC reader can determine where the end of one code is,
                     and therefore where to begin looking for the beginning of the next.
                     The bi-phase mark signal structure is direction independent (see the bi-
                     phase mark coding section below). In the reverse direction the sync
                     word reads ‘1011111111111100’. Because the reader finds ‘10’ before
                     the 12 ‘1’s in the middle of the sync bits, and ‘00’ at the end, it knows
                     the tape is running backwards. It therefore knows that the code will
                     occur after the sync rather than before it, that it will be read backwards,
                     and that the appropriate adjustments have to be made to read the code
                     correctly.
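The direction test can be sketched directly from the two bit patterns above; the helper name is illustrative:

```python
SYNC_FORWARD = "0011111111111101"
SYNC_REVERSE = SYNC_FORWARD[::-1]  # '1011111111111100'


def tape_direction(sync_bits: str) -> str:
    """Classify a 16-bit sync word as read forwards or backwards.

    The 12 '1's in the middle are common to both directions; the
    bit pairs either side reveal which way the tape is moving."""
    if sync_bits == SYNC_FORWARD:
        return "forward"
    if sync_bits == SYNC_REVERSE:
        return "reverse"
    raise ValueError("not a valid LTC sync word")
```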







Bi-phase mark coding
                  LTC uses bi-phase mark as a channel coding method, otherwise known
                  as the Manchester code.
                  Bi-phase mark places a transition at every bit boundary, and a transition
                  in the middle of each bit period for each ‘1’ bit. This makes it polarity
                  independent, i.e. anything reading bi-phase mark is only concerned with
                  the transitions, not whether the transitions are going high or low.




Figure 93                                                               Bi-phase mark coding

                  It is also direction independent: the original data can always be decoded
                  from a bi-phase mark signal even when it is read backwards. It is also
                  self clocking, i.e. no matter what speed the tape is going, all an LTC
                  reader has to do is look for the regular transitions corresponding to the
                  bit boundaries and, once locked to them, search for any bit periods with
                  a transition in the middle. Bit periods with no mid-bit transition are ‘0’s,
                  and those with one are ‘1’s.
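The encoding rule can be sketched by representing each bit period as two half-bit levels; the helper names are our own:

```python
def biphase_mark_encode(bits, start_level=0):
    """Encode bits as bi-phase mark: a transition at every bit
    boundary, plus a mid-bit transition for each '1'. The output is
    a pair of half-bit-period levels per input bit."""
    level = start_level
    halves = []
    for bit in bits:
        level ^= 1              # transition at every bit boundary
        first = level
        if bit:
            level ^= 1          # extra mid-bit transition for a '1'
        halves.append((first, level))
    return halves


def biphase_mark_decode(halves):
    """Recover the bits: a mid-bit transition means '1'. Only the
    presence of transitions matters, not the absolute levels."""
    return [1 if first != second else 0 for first, second in halves]
```

Because the decoder only compares the two halves of each bit period, the same data is recovered whichever polarity the signal starts from.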


Adjusting the LTC head
                 The LTC head is a static head as shown in Fig 66.





            Head to tape contact
                    The head itself hangs off a bracket. The head to tape contact can be
                    adjusted by loosening the fixing screws between the bracket and the
                    head and rotating the head about the vertical axis.
                    The head gap must be in direct contact with the tape if it is to record and
                    playback timecode properly.

            Head height
                    The bracket is held on a plate by a large spring underneath the whole
                    assembly. The spring is trying to pull the whole head downwards. Thus
                    by adjusting a small screw between the bracket and the plate you can
                    adjust the head height.




Figure 94                                                                LTC head adjustments

                    The head assembly must be at the correct height if each head is to
                    cover its track properly.




       Head zenith
                  The plate is fixed to the base plate by a screw and spring arrangement.
                  There is a small pivot at the back of the head which keeps the plate and
                  base plate apart. Thus by adjusting a small screw at the front, which is
                  also separating the plate and base plate, you can adjust the head’s
                  zenith, i.e. the amount of lean forwards or backwards.
                  If the head zenith is incorrect, either the timecode or audio portions of the
                  head will not be in good contact with the tape, and both recording and
                  playback will be bad. Furthermore, as the tape moves across the head it
                  will be forced either upwards or downwards by the head. This may make
                  video tracking difficult and may force the tape against the tape guides,
                  damaging the edge of the tape.

       Head azimuth
                 The plate is also held from the base plate by another screw at the side of
                 the assembly. This screw can be used to adjust the head azimuth, i.e.
                 the sideways lean.
                 Incorrect head azimuth will result in incorrect audio phase and incorrect
                 relative position between the audio heads and the timecode head.

       Head position
                 The base plate is fixed to the mechadeck with a number of screws.
                 The fixing holes in the base plate are actually slots. Thus by loosening
                 the screws the head position on the tape path can be adjusted.
                 If the head position is incorrect the relative timing between the timecode
                 and audio signals compared to the control and video signals will be
                 wrong. Lip sync will be incorrect and timecode may be incorrectly read,
                 resulting in bad edits.







Vertical Interval Timecode
            The basis for VITC
                     A form of VITC was proposed at the same time as LTC. However,
                     machines at the time generally found it difficult to maintain a good video
                     signal at anything other than normal play speed, so VITC offered no
                     advantage over LTC.
                    As VTR technology progressed, video playback heads were designed
                    that could move to follow the helical tracks on tape at other than normal
                    play speed.
                    As designs improved it soon became possible for the video heads to
                    follow the helical tracks at very slow speeds and even in still mode, while
                    maintaining a steady picture.




Figure 95                                                      LTC and VITC speed comparison



                    At slow and still speeds LTC becomes unreadable, and editing becomes
                    difficult. Eventually a workable VITC was proposed about ten years after
                    LTC, to get over this problem. It uses two lines during the field blanking



                  period (otherwise known as vertical blanking) to store serial data with
                  much the same format and content as LTC.
                 Because VITC is written into the video signal itself as part of the helical
                 tracks it has the advantage over LTC in that it can be read at zero tape
                 speed (still mode), because even at this speed the flying heads on the
                 scanner are still moving over the tape.
                 Another advantage of VITC over LTC is that it is field accurate, because
                 a complete code is placed during the vertical interval of each field,
                 whereas LTC requires a whole frame to convey one code.
                 Because VITC is included in the video signal itself, it also has the
                 advantage that cabling can be made simpler, with no extra cable
                 required specifically for timecode.
                 However at high tape speeds, no matter how good the heads are at
                 following the helical tracks, they eventually lose their position on the
                 helical tracks, and consequently lose the VITC signal.
                 Another disadvantage of VITC is that there is no single agreed standard
                 for which vertical blanking interval lines should carry the VITC
                 signal. The proposal was simply put to the industry too late, and other
                 uses had already been found for the vertical interval, teletext and vertical
                 interval test signals being two examples.
                 Therefore VITC can occur on just one line of field 1, anywhere between
                 lines 9 and 22, and on field 2 anywhere between lines 322 and 335.

       VITC signal structure
                 Because VITC is stored in the vertical interval it conforms to the same
                 basic rules as the video signal itself. In fact VITC conforms to the
                 same criteria as a monochrome video signal, i.e. the same bandwidth
                 limitations, maximum slew rate, and maximum and minimum voltage
                 levels. Peak white level represents a ‘1’ and black level represents a ‘0’.
                 VITC consists of 90 bits of data recorded serially during the chosen
                 vertical interval line. The first bit must occur between 10µs and 11µs
                 after the leading edge of the line sync pulse; in practice it usually
                 occurs at about 10.5µs.




203                                                 Sony Broadcast & Professional Europe
Part 19 – Timecode

                    [Waveform limits: a ‘1’ bit is 80 ±10 IRE, a ‘0’ bit is 0 ±10 IRE;
                    rise and fall times are 200 ±50 ns between the 10% and 90% points
                    of the peak-to-peak amplitude; overshoot must be less than 5%.]
Figure 96                                                                                                                VITC signal details

                    The whole VITC code normally takes up most of the vertical interval
                    line, and the last bit, bit 89, must end no less than 2.1µs before the
                    leading edge of the next line sync pulse.

                    The VITC signal must also obey certain criteria, which are outlined in
                    Figure 96.

            VITC sync bits
                    VITC has a ‘10’ sequence on bits 0 and 1 and every 10 bits after. These
                    allow the VITC reader to overcome timing jitter when reading the signal.
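A sketch of that structure (illustrative only, with names of my own choosing): the nine ‘10’ pairs sit on bits 0-1, 10-11, and so on up to 80-81, and a reader can verify them before trusting the data bits.

```python
# Sync pairs '10' sit on bits 0-1, 10-11, 20-21, ... 80-81 (nine pairs).
SYNC_PAIRS = [(10 * n, 10 * n + 1) for n in range(9)]

def has_valid_sync(bits):
    """Return True if all nine '10' synchronisation pairs are present
    in a 90-element VITC bit list."""
    return all(bits[a] == 1 and bits[b] == 0 for a, b in SYNC_PAIRS)
```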








        [Figure 97 shows the 90 VITC bits laid out from bit 0 upward:
        synchronisation bits, frames units count, user binary group 1;
        synchronisation bits, frames tens count, user binary group 2; and so on
        through the seconds, minutes and hours counts with their user binary
        groups, ending with the final synchronisation bits (80-81) and the
        CRCC bits (82-89).]
Figure 97                                                    The vertical interval timecode signal



            Bit      Use (625)                  Use (525)
            0-1      Sync ‘10’
            2-5      Frames units bits 0-3
            6-9      User group 1
            10-11    Sync ‘10’
            12-13    Frames tens bits 0-1
            14       Unassigned                 Drop frame flag
            15       Colour frame flag          Colour frame flag
            16-19    User group 2
            20-21    Sync ‘10’
            22-25    Seconds units bits 0-3
            26-29    User group 3
            30-31    Sync ‘10’
            32-34    Seconds tens bits 0-2
            35       Field mark flag            Binary group flag 0
            36-39    User group 4
            40-41    Sync ‘10’
            42-45    Minutes units bits 0-3
            46-49    User group 5
            50-51    Sync ‘10’
            52-54    Minutes tens bits 0-2
            55       Binary group flag 0        Binary group flag 1
            56-59    User group 6
            60-61    Sync ‘10’
            62-65    Hours units bits 0-3
            66-69    User group 7
            70-71    Sync ‘10’
            72-73    Hours tens bits 0-1
            74       Binary group flag 1        Binary group flag 2
            75       Binary group flag 2        Field mark flag
            76-79    User group 8
            80-81    Sync ‘10’
            82-89    CRCC

Figure 98                                                                                                                                    VITC bits
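As a rough illustration of how a reader might use this table, the sketch below (a hypothetical helper, not a real VITC reader implementation) extracts the time address from a 90-element bit list, taking each group least significant bit first:

```python
def vitc_time(bits):
    """Extract (hours, minutes, seconds, frames) from a 90-element VITC
    bit list, using the bit positions given in Figure 98."""
    def field(positions):
        # Gather scattered bits into a value, least significant bit first.
        return sum(bits[p] << i for i, p in enumerate(positions))

    frames  = field([2, 3, 4, 5])     + 10 * field([12, 13])
    seconds = field([22, 23, 24, 25]) + 10 * field([32, 33, 34])
    minutes = field([42, 43, 44, 45]) + 10 * field([52, 53, 54])
    hours   = field([62, 63, 64, 65]) + 10 * field([72, 73])
    return hours, minutes, seconds, frames
```

For example, a bit list encoding 09:43:00:02 in those positions decodes back to (9, 43, 0, 2).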





Drop frame timecode
                 Drop frame timecode is only applicable to 525 line, NTSC based
                 systems. It arises from the fact that NTSC systems do not run at an
                 exact number of frames per second; instead they run at a rate of
                 29.97 frames per second.
                 This means that if timecode were to count at a rate of 30 frames per
                 second continuously it would eventually count an extra 108 frames per
                 hour, which amounts to about 3.6 seconds.
                 Over a 24 hour period, the maximum possible with timecode, this extra
                 amounts to nearly a minute and a half!
                 Drop frame timecode was devised to allow for this problem by jumping
                 the timecode generator’s counter at certain specific times. Frames are
                 therefore dropped from the count.
                 Losing 108 frames from the timecode count is performed by first jumping
                 the timecode generator two frames at the beginning of every minute.
                 Therefore when the timecode generator reaches 09:42:59:29, for
                 instance, it will increment to 09:43:00:02 instead of 09:43:00:00, missing
                 out frames 09:43:00:00 and 09:43:00:01.
                 This effectively loses 120 frames per hour. This is too much, so a
                 second counting scheme is used whereby the timecode generator is not
                 jumped at the beginning of every tenth minute, i.e. at minutes 00, 10,
                 20, 30, 40 and 50.
                 Thus when the timecode generator reaches 09:49:59:29 it will increment
                 to 09:50:00:00 as normal instead of jumping to 09:50:00:02. This chops
                 12 frames off the 120 frames that would otherwise be lost using the
                 first counting scheme on its own, to leave 108 frames lost in total per
                 hour, just the number required!
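The two jumping rules above can be sketched as a single increment function (an illustrative sketch only; the function name and argument layout are my own):

```python
def next_drop_frame(h, m, s, f, fps=30):
    """Advance an NTSC drop-frame timecode (hh, mm, ss, ff) by one frame.
    Frames 00 and 01 are skipped at the start of every minute, except
    minutes 00, 10, 20, 30, 40 and 50."""
    f += 1
    if f == fps:
        f = 0
        s += 1
        if s == 60:
            s = 0
            m += 1
            if m == 60:
                m = 0
                h = (h + 1) % 24
            # At a new minute, drop frames 00 and 01 unless the
            # minute is divisible by ten.
            if m % 10 != 0:
                f = 2
    return h, m, s, f
```

With this rule, 09:42:59:29 increments to 09:43:00:02, while 09:49:59:29 increments to 09:50:00:00, exactly as described in the text; counting one full hour this way drops 108 frame numbers.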


Which timecode am I using ?
                 The generally accepted standard timecode is LTC. There are a number
                 of reasons why. Firstly it was the first timecode to be proposed and used
                 extensively, and therefore had a head start over VITC in general
                 acceptance.
                 Secondly, as explained in the description of VITC, it proved impossible
                 to standardise which line of field 1 or field 2 of a video signal should
                 carry VITC.
                 This is because VITC was proposed later, and other uses had already
                 been found for the vertical interval lines before VITC could ‘grab’ any
                 particular vertical interval line for its own exclusive use.
                 Thus the VITC standard allows VITC to be put on any one of a number
                 of vertical interval lines, and VITC timecode reader/generators often
                 have to be altered to operate with the particular lines chosen, having
                 first made sure that they are free from use by anything else.
                 Thirdly, tapes used in an edit suite are often striped with continuous LTC
                 timecode. This then becomes the timing reference for the tape.




                   As editing takes place, using insert edits, the LTC track will not be
                   recorded. Insert edits are made to the video and audio tracks only.
                   Thus LTC became accepted as the timecode that would be guaranteed
                   not to change in editing.
                    This third reason is a little difficult to justify in modern VTRs, where
                    there is the capability to replace VITC during video insert edits and
                    guarantee that the same code is replaced after the edit.
                    However editors’ habits soon tended to regard LTC as the timecode to
                    depend on. Habits account for a lot, to the extent that the Sony DVW-
                    A500P series Digital Betacam machines, which could only play back
                    analogue Betacam SP tapes, were modified so that the LTC track
                    could be re-recorded to analogue tapes even though nothing else could.


Timecode use in video recorders
                    Most modern professional tape recorders have the capacity to read,
                    generate, record and play back both LTC and VITC.
                   When recording, timecode is used in two distinctive ways. The first is to
                   record the time of day the recording was made. Camcorders and
                   portable machines are often set to record the time of day as they are
                   often used to record news or sport, and this means the time is also
                   recorded.
                   The second is to provide continuous timecode throughout a tape. Studio
                   machines are often set to record continuous timecode. This means that
                   although an edit may take days or weeks to complete, and is made up
                   from many bits and pieces of video put together, the final master tape
                   will have a continuous seamless timecode, starting from zero at the
                   beginning of the tape.


Typical VTR timecode controls
                   There are a number of controls that can be found in a typical modern
                   professional video tape recorder. It is not possible to mention all of them
                   here, or the different names that might be given to each control within a
                   particular machine. However a few are considered here to give a rough
                   idea of the kind of things to look for.

        Rec Run / Free Run switch
                   This switch allows a VTR operator to either select continuous timecode
                   recording (Rec Run), or time of day recording (Free Run). With the
                   switch set to Rec Run the machine’s internal timecode generator
                   increments only when the machine is recording.
                   With the switch set to Free Run the timecode generator continues to
                   increment all the time. If the time of day is to be recorded the timecode
                   generator then needs to be set to the time of day.

        VITC On Off switch
                   As LTC is the industry accepted timecode, rather than VITC, it is often
                   possible to switch the internal VITC reader/generator off, if it isn’t in use.


       VITC/AUTO/LTC switch
                  This switch is often included in a machine. It allows the user either to
                  force the machine to operate with VITC or with LTC, or to automatically
                  select whichever timecode signal it is able to find. If both VITC and
                  LTC have been recorded to tape, and played back at a variety of
                  speeds ranging from stop to fast-forward at 50 times play speed, the
                  machine’s capacity to pick up both LTC and VITC is as shown in
                  Figure 95.
                  With the machine stopped LTC cannot be read off the tape. At play
                  speed both LTC and VITC can be read. As the machine’s speed is
                  increased, eventually VITC becomes unreadable. (If the machine’s
                  speed is increased further, eventually it becomes difficult to pick up
                  even LTC, but that situation is beyond this discussion.)
                 Most machines will default to LTC, the preferred industry standard, at
                 any speed where both LTC and VITC are detectable off tape.

       Drop Frame On Off switch
                  This switch is only found on 525 line (NTSC) machines. It allows the
                  operator to work either with continuous timecode, which ironically would
                  not be related to real time at all, or with drop frame timecode, which
                  keeps to time by dropping frames at certain specific points.
                 625 line (PAL) machines do not include this switch, and a blank space
                 will often be found where one would be fitted.

       Real Time to LTC/VITC User Bits switch
                  As explained earlier, timecode includes groups of bits in amongst the
                  time address bit groups that can be used to store anything the user
                  might wish. This switch allows the user to store the real time (time of
                  day) into the user bits of either LTC or VITC on tape.

       Timecode Reset, Advance or Set buttons
                 Machines generally have one or more buttons which allow the user to
                 either reset the machine’s internal timecode generator to 00:00:00:00, or
                 to set it at any predetermined count.
                 If running in Rec Run mode the user might reset to 00:00:00:00 before
                 beginning an edit session. Alternatively in Free Run mode the user might
                 preset the timecode generator to some time count, say just one minute
                 in the future, using something like the Advance button, and press Set the
                 moment that time comes up, to ‘synchronise’ the machine to the time of
                 day.

       External User Bits switch
                 Some machines might include a switch to allow the user to either record
                 the user bit code input to the machine or to use the user bit code set
                 within the internal timecode generator, irrespective of what is happening
                 to the time address bits of timecode.





        VITC line selector
                    This selector may be a switch or a menu item. It allows the user to select
                    which of the vertical interval lines will be used by VITC.

        LTC phase correction switch
                    A bi-phase mark signal can be read no matter which way round it is
                    (see the earlier description of LTC). However the LTC specification
                    includes a polarity bit (bit 59 in 625 line, and bit 27 in 525 line
                    systems). This bit ensures that each LTC code begins low. Normally
                    this makes no difference to the code, and many LTC readers do not
                    care if the code has been inverted. However if two signals are to be
                    edited together, and one of the LTC signals has somehow been
                    inverted, there will be an error at the edit point.
                   This switch inverts the incoming LTC timecode signal if there appears to
                   be a problem with the edit.
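Why a bi-phase mark signal reads the same either way round can be illustrated with a small sketch. Here each bit cell is reduced to two half-cell levels, ignoring the cell-boundary transitions a real reader also uses for clocking; the helper name is my own:

```python
def decode_biphase_mark(half_cells):
    """Decode a list of half-cell levels (two per bit): a level change
    within a cell means '1', no change means '0'. Only changes matter,
    so an inverted signal decodes identically."""
    return [int(half_cells[i] != half_cells[i + 1])
            for i in range(0, len(half_cells), 2)]
```

For instance, the half-cell sequence [0,1, 1,1, 0,0] and its inversion [1,0, 0,0, 1,1] both decode to the bits [1, 0, 0].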


The future
                   Most modern machines that can store video can also store timecode.
                   LTC was designed specifically for tape recorders, where the timecode
                   signal is recorded on a longitudinal track somewhere near the edge of
                    the tape. VITC, however, was not designed specifically for tape
                    recorders, but purely for the video signal itself, regardless of where
                    that signal might be stored.
                   As technology progresses different types of video recorder are starting
                   to appear. The two types that should be considered are hard disk
                   recorders, and RAM recorders.
                   Hard disk recorders store video on a hard disk, or, more commonly, on a
                   hard disk array. Hard disk recorders generally have the disadvantage
                   that they cannot store anything like the amount of video that a tape
                   recorder can.
                   RAM recorders are even worse in this respect. Although very fast and
                   flexible, they generally store only a fraction of the amount of video a tape
                   recorder can.
                   However both hard disk and RAM recorders have no longitudinal track
                   and must therefore ‘fake’ an LTC from VITC. If tape recorders are ever
                   universally replaced by more advanced hard disk or RAM recorders, the
                   difference between LTC and VITC may become more confused.
                   Alternatively VITC may become the industry standard instead of LTC,
                   which may eventually be dropped altogether.






Part 20                              SDI (serial digital interface)
Parallel digital television
                 During the late 1970s the television industry had begun to investigate the
                 idea of digitising television signals. At first such attempts were confined
                 to pieces of equipment such as standards converters, where the
                 analogue video signal was input to the unit, converted to a digital signal,
                 standards converted in the digital domain, converted to an analogue
                 signal again, and output from the unit.
                 The conversion to a digital video signal and back again was confined to
                 within the unit itself, and thus the methods of conversion tended to vary
                 from one manufacturer to another. It soon became obvious that by
                 maintaining the video signal in the digital form as it passed from one
                 piece of video equipment to another, the quality could be maintained at a
                 higher level than was normal for conventional analogue video signals. A
                 universal standard for digitising analogue video signals would thus be
                 very useful to the video industry.
                 In 1982 the CCIR met in Geneva. During their proceedings a
                 recommendation was made for digitising analogue component video
                 signals. The recommendation was CCIR 601 and, although only a
                 recommendation, very quickly became a de-facto standard throughout
                 the video industry.
                 CCIR 601 was a very basic document when it was published in 1982.
                 But it did promote thought throughout the video industry, and a few
                 manufacturers started to design equipment that conformed to it.
                 When the CCIR met again in 1986, CCIR 601 had been rewritten and
                 much improved. It had been altered in some places and added to in
                 others. A second recommendation was also drafted, called CCIR 656.
                 Although not a strict and absolute division, CCIR 601 described the
                 digital video signal itself and its conversion from an analogue signal, and
                 CCIR 656 described the physical interface, including the type of
                 connectors and cable.

       CCIR 601/656
                 CCIR 601/656 involves the digitising of component analogue video
                 signals, and is commonly referred to as D1.
                 (A standard involving the digitising of composite video signals is
                 commonly referred to as D2. SDI is not based on composite video
                 signals or D2.)
                 There are two elements of CCIR 601/656 that need to be explained, the
                 quantisation levels and the sample structure.

            Quantisation levels
                 The CCIR decided that component signals would be described as a
                 series of 8 bit binary numbers (words). This gives 256 possible
                 quantisation levels, 0 to 255.






                 [Y signal: syncs are not digitised; black level = 16 (10 hexadecimal,
                 00010000 binary), peak white = 235 (EB hexadecimal, 11101011 binary),
                 giving 220 quantisation levels; codes 0-15 and 236-255 are not used.
                 R-Y and B-Y signals: ‘black’ (zero) level = 128 (80 hexadecimal,
                 10000000 binary), peak positive colour level = 240 (F0 hexadecimal,
                 11110000 binary), peak negative colour level = 16 (10 hexadecimal,
                 00010000 binary), giving 225 quantisation levels; codes 0-15 and
                 241-255 are not used.]
Figure 99                                                                                                                                   CCIR-601 digitisation

                                 In the case of the analogue Y signal, only the brightness part of the
                                 signal is digitised, i.e. any sync pulses added to the Y signal are ignored.
                                 Black level of the Y signal is set at 16 (10 hexadecimal or 00010000
                                 binary). Peak white level is set at 235 (EB hexadecimal or 11101011
                                 binary). The area between 0 and 15 is not used for digitising the Y
                                 signal, and although CCIR 601 actually specifies that the Y signal may
                                 occasionally be digitised beyond 235 for ‘super white’ signals, in almost
                                 all cases the area between 236 and 255 is not used.
                                 The samples resulting from the Y analogue component signal are
                                 referred to as Y samples.




                                           In the case of the analogue (R-Y) and (B-Y) signals, ‘black level’ or the
                                           zero point is set at 128 (80 hexadecimal or 10000000 binary). The peak
                                           positive excursion of each colour difference is set at 240
                                           (F0 hexadecimal or 11110000 binary), and the peak negative excursion
                                           is set at 16 (10 hexadecimal or 00010000 binary). As with the Y signal,
                                           the area between 0 and 15 is not used for digitising the colour difference
                                           signals, and the area between 240 and 255 is also not used.
                                           The samples resulting from the (R-Y) analogue component colour
                                           difference signal are referred to as Cr samples, the samples resulting
                                           from the (B-Y) analogue component colour difference signal are referred
                                           to as Cb samples.
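                                           The mapping from analogue levels to digital codes can be sketched as
                                           below. This is an illustrative sketch in Python; the function names
                                           `quantise_y` and `quantise_c` are hypothetical, and the inputs are
                                           assumed to be normalised analogue values (0.0 to 1.0 for Y, -0.5 to
                                           +0.5 for the colour differences).

```python
def quantise_y(y):
    # Y: 0.0 (black) maps to code 16, 1.0 (peak white) to code 235,
    # a span of 219 levels
    return round(16 + 219 * y)

def quantise_c(c):
    # colour difference: 0.0 maps to 128, +0.5 to 240, -0.5 to 16,
    # a span of 224 levels
    return round(128 + 224 * c)

print(quantise_y(0.0), quantise_y(1.0))                    # 16 235
print(quantise_c(-0.5), quantise_c(0.0), quantise_c(0.5))  # 16 128 240
```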

                              Sample (word) structure
                                           Samples are taken from the three analogue component signals at a rate
                                           of 13.5 MHz. This frequency was chosen to give the greatest degree of
                                           commonality between 625 and 525 line television standards.
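                                           This commonality can be checked with a little arithmetic: 13.5 MHz
                                           divides into a whole number of samples per line for both standards (a
                                           sketch; the 525-line rate is taken as exactly 4.5 MHz / 286).

```python
# 625/50: line rate is exactly 15 625 Hz
samples_per_line_625 = 13.5e6 / 15625

# 525/59.94: line rate is exactly 4.5 MHz / 286 (about 15 734.27 Hz)
samples_per_line_525 = 13.5e6 * 286 / 4.5e6

print(samples_per_line_625, samples_per_line_525)  # 864.0 858.0
```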


(Diagram: the original 13.5 MHz samples, Y plus alternating B-Y/R-Y pairs,
are multiplexed into a final 27 MHz word stream of Cb, Y, Cr, Y, ...,
forming a repeating co-sited triplet / single Y pattern.)

Figure 100                                          CCIR-601 sample structure

                                           Each 13.5 MHz 'position' therefore has a Y sample, a (B-Y) sample and
                                           an (R-Y) sample. The first sample position of each line gives one Cb,
                                           one Y and one Cr word. The Cb word is derived from the analogue (B-Y)
                                           component signal; likewise the Cr word is derived from the (R-Y)
                                           signal. Although these three words occur sequentially in the
                                           CCIR 601/656 data stream, it is important to remember that they
                                           originate from the same point on the original image. They are therefore
                                           referred to as co-sited samples, or as a co-sited triplet.




213                                                                                                                                              Sony Broadcast & Professional Europe
Part 20 – SDI (serial digital interface)

(Diagram: the analogue Y, (B-Y) and (R-Y) component signals sampled at the
13.5 MHz sample positions, and the resulting final 27 MHz word stream of
co-sited triplets and single Y words.)

Figure 101              Component analogue video and CCIR-601 sample comparison

                                             The colour content of the second sample position is ignored, leaving a
                                             single Y word.
                                             The third sample position is treated like the first, the fourth like the
                                             second, and so on, giving a co-sited triplet, single Y, co-sited triplet,
                                             single Y, etc. structure.
                                             Thus CCIR 601/656 words occur at 27 MHz, with the Y words at 13.5
                                             MHz and each Cr and Cb word at 6.75 MHz.
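                                             The resulting word order can be sketched as below; `word_sequence` is
                                             a hypothetical helper that lists the 27 MHz words produced by a run of
                                             13.5 MHz sample positions.

```python
def word_sequence(n_positions):
    # even positions contribute a Cb word, odd positions a Cr word,
    # and every position contributes a Y word
    words = []
    for i in range(n_positions):
        words.append('Cb' if i % 2 == 0 else 'Cr')
        words.append('Y')
    return words

# two words per 13.5 MHz position gives the 27 MHz word rate,
# with Y at 13.5 MHz and Cb and Cr each at 6.75 MHz
print(word_sequence(4))  # ['Cb', 'Y', 'Cr', 'Y', 'Cb', 'Y', 'Cr', 'Y']
```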

                               CCIR 601/656 syncs
                                             Up to this point the digital data stream does not contain any syncs. As
                                             mentioned on the last page, the digital Y signal does not contain any
                                             sync information, as the syncs are not digitised from the original
                                             analogue Y component signal. Syncs need to be added somewhere to
                                             the CCIR 601/656 27 MHz data stream so that the receiver can lock to
                                             the incoming digital signal.



                                      The CCIR recommended that a special sync signal be added at the
                                      beginning and end of every video line, even during the vertical
                                      interval lines.


(Diagram: a Timing Reference Code, consisting of a three-word preamble
followed by the TRS word, inserted into the CCIR 601/656 data stream
between successive groups of co-sited triplet and single Y words.)

Figure 102                                                     CCIR-601 syncs

                                      This signal is referred to as a Timing Reference Code (TRC). It consists
                                      of four data words. The first three words are a preamble, a particular
                                      sequence of data that will not occur anywhere else in the digital video
                                      data stream. The fourth word is referred to as a Timing Reference Signal
                                      (TRS). This code enables the receiver to 'find its place' in the digital
                                      video signal.
                                      As mentioned before, the analogue Y signal is digitised between 16 and
                                      235 and the colour signals are both digitised between 16 and 240. None
                                      of the samples will ever reach 0 or 255 (00 or FF hex.). These values
                                      are reserved for the sync preamble words. The first preamble word is 255
                                      (FF hex.), and the second and third preamble words are both 0 (00
                                      hex.).
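                                      Because video words never reach 0 or 255, a receiver can find the
                                      preamble unambiguously. A minimal sketch (the function `find_trc` and
                                      the example stream are hypothetical):

```python
def find_trc(stream, start=0):
    # search for the unique FF 00 00 preamble; the word after it is
    # the TRS
    for i in range(start, len(stream) - 3):
        if stream[i] == 0xFF and stream[i + 1] == 0x00 and stream[i + 2] == 0x00:
            return i
    return -1

# video samples stay within 1..254, so the preamble cannot be imitated
stream = [0x80, 0x10, 0xFF, 0x00, 0x00, 0x9D, 0x80, 0x10]
i = find_trc(stream)
print(i, hex(stream[i + 3]))  # 2 0x9d
```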

                       Timing Reference Signal structure
                                      The TRS consists of 8 bits, like all other CCIR 601/656 words. The 8 bits
                                      are specified as shown in the picture.
                                      The most significant bit, bit 7, is always a '1'. Bit 6 is the F bit. It defines
                                      which video field this particular TRS is in. A '0' signifies field 1, and a '1'
                                      signifies field 2.
                                      Bit 5 is the V bit. It defines whether the TRS is in the active part of the
                                      video field, or the vertical blanking interval. A '0' signifies the active
                                      portion, and a '1' signifies the vertical blanking interval.



                   Bit 4 is the H bit. It defines whether the TRS is at the beginning or the
                   end of the line. A ‘0’ signifies the start of active video (SAV), and a ‘1’ the
                   end of active video (EAV).

(Diagram: the three preamble words and the TRS. The first preamble word is
all '1's; the second and third are all '0's. In the TRS, bit 7 (MSB) is
always '1'; bit 6 is the field bit F (0: field 1, 1: field 2); bit 5 is the
vertical bit V (0: active field portion, 1: field blanking portion); bit 4
is the horizontal bit H (0: start of active video (SAV), 1: end of active
video (EAV)); bits 3 to 0 are the protection bits P3 to P0.)

Figure 103                                  CCIR-601 timing reference structure

                   Bits 3 to 0 are Hamming code protection bits P3 to P0. A different
                   combination of these four bits occurs for each combination of the F, V
                   and H bits. The Hamming distance between each combination means that it
                   is possible to detect and correct one-bit errors, and to detect (but not
                   correct) two-bit errors, in any TRS. The allocation of protection bits is
                   shown in the picture.
                     M.S.B. (always 1)   1   1   1   1   1   1   1   1
                     F                   0   0   0   0   1   1   1   1
                     V                   0   0   1   1   0   0   1   1
                     H                   0   1   0   1   0   1   0   1
                     P3                  0   1   1   0   0   1   1   0
                     P2                  0   1   0   1   1   0   1   0
                     P1                  0   0   1   1   1   1   0   0
                     L.S.B. P0           0   1   1   0   1   0   0   1

Figure 104                                         CCIR-601 TRS protection bits
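                   The whole table can be compressed into a small lookup. A sketch in
                   Python, where `make_trs` is a hypothetical helper and the protection
                   nibbles are read straight from the table above:

```python
def make_trs(f, v, h):
    # protection nibbles P3..P0 for FVH = 000 .. 111, from the table
    protection = [0x0, 0xD, 0xB, 0x6, 0x7, 0xA, 0xC, 0x1]
    fvh = (f << 2) | (v << 1) | h
    # bit 7 always set, then F, V, H, then the four protection bits
    return 0x80 | (f << 6) | (v << 5) | (h << 4) | protection[fvh]

# the eight possible TRS words
print([hex(make_trs(i >> 2 & 1, i >> 1 & 1, i & 1)) for i in range(8)])
# ['0x80', '0x9d', '0xab', '0xb6', '0xc7', '0xda', '0xec', '0xf1']
```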


             How a D1 receiver locks to an incoming D1 signal
                   Imagine a receiver is switched on and a D1 source is connected. The
                   first thing it will do is search for any words that have the value 255, i.e.
                   all 8 bits are ‘1’.
                   When the receiver finds this word it checks that the next two words are
                   0, i.e. all 8 bits of each word are ‘0’.





                       If these three words check out, the fourth word is placed into a special
                       register which examines each bit individually. Bit 7 must be a '1'.
                       The combination of F, V and H bits must also correspond to the
                       combination of four protection bits.
                       If any part of this process fails, the receiver disregards these words and
                       starts the search again.
                       If all this checks out, the receiver looks at the H bit. If it is a '0' the
                       receiver knows it is at the beginning of the video line. If it is a '1', it is at
                       the end of the line. The receiver is now said to be line locked.
                       The receiver now knows where each TRS will occur because it knows
                       how many clock cycles occur between each one. It now looks at the F bit
                       of each TRS until this bit changes state, i.e. either changes from a ‘0’ to
                       a ‘1’ or from a ‘1’ to a ‘0’.
                       A change from a ‘1’ to a ‘0’ signifies the beginning of field 1. A change
                       from a ‘0’ to a ‘1’ signifies the beginning of field 2. The receiver is now
                       frame or field locked, and the locking process is completed.
                       The V bit is not actually required for locking, but is used by some
                       systems to check the actual position of vertical blanking.
                       The receiver then simply checks that each TRS after this is valid. If there
                       is an error some systems ignore the complete TRC and check the next
                       one. Other systems are not so ‘clever’ and immediately fall out of lock.
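                       The per-TRS validity check the receiver performs can be sketched as
                       below (hypothetical helper `check_trs`; the protection values are the
                       standard nibbles from the protection-bit table):

```python
def check_trs(word):
    # protection nibbles P3..P0 for FVH = 000 .. 111
    protection = [0x0, 0xD, 0xB, 0x6, 0x7, 0xA, 0xC, 0x1]
    if not word & 0x80:                 # bit 7 must be '1'
        return None
    f = word >> 6 & 1
    v = word >> 5 & 1
    h = word >> 4 & 1
    if (word & 0x0F) != protection[f << 2 | v << 1 | h]:
        return None                     # F, V, H do not match P3..P0
    return f, v, h

print(check_trs(0x9D))  # (0, 0, 1): field 1, active portion, EAV
print(check_trs(0x9C))  # None: corrupted protection bits
```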

               The CCIR 601/656 interface driver
                       As shown already, CCIR 601/656 signals are transmitted at 27 MHz. This
                       frequency is far too high to transmit over any great distance with
                       'normal' logic circuits such as TTL and CMOS drivers.
(Diagram: a differential transmitter driving a twisted pair into a
differential receiver, with the system ground (0 volts) connected to the
cable screen.)

Figure 105                                            CCIR601/656 ECL driver

                       ECL (emitter coupled logic) is capable of operating at higher frequencies
                       than TTL or CMOS circuits. Differential ECL is capable of transmitting
                       over long distances with properly shielded cable.
                       Thus CCIR 601/656 signals require one differential ECL driver for each
                       bit and a further driver for the clock. Each driver has two output wires,



                   giving a total of 18 wires for 8 bit CCIR 601/656 video (16 for video and
                   2 for clock).

             The CCIR 601/656 connector
                   CCIR 656 recommends a 'D' 25 connector be used to connect digital
                   video signals. (However, although the CCIR recommended slide locks
                   be used, most users prefer screw-fitting D25 connectors, because they
                   tend to be more secure and easier to fix in place than slide locks.) The
                   pinout for this connector is shown in the picture.

(Diagram: the D25 connector, with pin 1 at the top left, pin 13 at the
bottom left, pin 14 at the top right and pin 25 at the bottom right.)

                      Pin number   Function                    Pin number   Function
                      1            Clock +                     14           Clock -
                      2            System ground               15           System ground
                      3            Data bit 7 +                16           Data bit 7 -
                      4            Data bit 6 +                17           Data bit 6 -
                      5            Data bit 5 +                18           Data bit 5 -
                      6            Data bit 4 +                19           Data bit 4 -
                      7            Data bit 3 +                20           Data bit 3 -
                      8            Data bit 2 +                21           Data bit 2 -
                      9            Data bit 1 +                22           Data bit 1 -
                      10           Data bit 0 +                23           Data bit 0 -
                      11           Spare bit A +               24           Spare bit A -
                      12           Spare bit B +               25           Spare bit B -
                      13           Chassis ground (shield)

Figure 106                                 CCIR601-656 D25 parallel connector

                   Pins 1 and 14 are allocated to the 27 MHz clock positive and negative
                   differential ECL respectively.
                   The clock is separated from the data pins by two system ground pins 2
                   and 15. There are eight data pin pairs starting with pins 3 and 16 for
                   data




                 bit 7, and finishing with pins 10 and 23 for data bit 0, positive and
                 negative differential ECL respectively.
                 Pin 13 is allocated as a chassis ground and can be connected to the
                 connector shell itself.

            The increase to 10 bits
                 Pins 11 and 24, and pins 12 and 25 were originally allocated by CCIR as
                 two spare differential ECL pairs.
                 It took very little time for the video industry to start using these two extra
                 bits to extend the original 8 bit samples specified by CCIR to 10 bits.
                 However, it is important to remember that these two bits are used as half
                 and quarter resolution bits, i.e. below the binary point.
                 Thus Y samples now extend from 16.0 to 235.75, giving 880 levels in
                 steps of 0.25. Cr and Cb samples now extend from 16.0 to 240.75,
                 giving 900 levels in steps of 0.25.
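                 The level counts follow directly (a quick check in Python):

```python
# two fractional bits make each step 0.25 of an 8-bit level
y_levels = int((235.75 - 16.0) / 0.25) + 1   # the lowest level counts too
c_levels = int((240.75 - 16.0) / 0.25) + 1
print(y_levels, c_levels)  # 880 900
```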







Serial digital television
                   Parallel D1, based on CCIR 601/656, became widely used by the
                   professional video industry in the latter part of the 80's. However, there
                   was resistance to using D25 connectors. Compared to the BNC
                   connectors and co-ax cables used for analogue video, these connectors
                   were more expensive and less reliable, and the cable required was
                   expensive and heavy. It was also found that parallel D1 connections
                   could not reliably be made over long distances.
                   CCIR 656 specified a serial interface, but this was based on 8 bit
                   samples, and the industry had already moved ahead to 10 bit samples.
                   The CCIR 656 serial interface could not be used.
                   In the latter part of the 80's Sony introduced two devices that helped set
                   a de-facto industry standard for serial D1 digital video: the SBX1601A
                   and the SBX1602A.
                   The SBX1601A is a parallel to serial converter for 10 bit D1 signals, and
                   the SBX1602A is a serial to parallel converter.







Serial digital audio
                 At the same time as digital video was developing, advancements were
                 made in audio as well. However, because the frequencies of audio are
                 basically lower than those of video, development was more rapid and a
                 number of parallel digital audio standards became popular comparatively
                 quickly. In the professional world a sampling frequency of 48 kHz
                 became popular, and in audio CDs a sampling frequency of 44.1 kHz is
                 used. 32 kHz was also used. Sample widths of 8 bits were used on
                 older, cheaper systems, but in the professional arena 18, 20 or 24 bits
                 were becoming popular.
                 The AES and EBU collaborated to draw up a standard for transmitting
                 audio signals through a serial digital channel. With so many different
                 parallel audio standards already in use, any serial standard had to
                 somehow encompass all the popular professional parallel standards,
                 and have some method of informing the receiver which parallel standard
                 was being transmitted.
                 The standard, known as the AES/EBU/IEC 958 standard, or simply as
                 AES/EBU audio, contains two channels of audio, channel A & B, with a
                 maximum sample size of 24 bits, and a maximum sample frequency of
                 48kHz, neatly covering the most demanding of the popular professional
                 parallel standards.

        Channel coding
                 The channel coding method chosen for AES/EBU audio was Bi-phase
                 mark, otherwise known as the Manchester 1 code. This is the same
                 channel coding method as is used for longitudinal timecode and for
                 Ethernet in computer networks.
                 As shown in the picture, Bi-phase mark places a transition at every bit
                 boundary, and a transition in the middle of each bit period for each ‘1’
                 bit.




(Diagram: a bi-phase mark waveform, with a transition at every bit boundary
and an extra transition in the middle of each '1' bit period.)

Figure 107                                       Bi-phase Mark signal structure

                 Thus Bi-phase mark is not only polarity and direction independent, but is
                 also self-clocking: even if there are phase changes during transmission,
                 all an AES/EBU receiver has to do is look for the regular transitions
                 corresponding to the bit boundaries and, once locked to them, search for
                 any bit periods with a transition in the middle.
                 Bit periods with no mid-period transition are '0's, and those with one
                 are '1's.
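                 The encoding and decoding rules can be sketched as follows
                 (hypothetical helpers; each bit is represented as two half-bit levels):

```python
def biphase_mark_encode(bits, level=0):
    # two half-bit levels per data bit: always a transition at the bit
    # boundary, plus an extra mid-bit transition for each '1'
    out = []
    for b in bits:
        level ^= 1                 # bit-boundary transition
        out.append(level)
        if b == '1':
            level ^= 1             # mid-bit transition marks a '1'
        out.append(level)
    return out

def biphase_mark_decode(wave):
    # a bit is '1' exactly when its two half-bit levels differ, so an
    # inverted (reversed-polarity) waveform decodes identically
    return ''.join('1' if wave[i] != wave[i + 1] else '0'
                   for i in range(0, len(wave), 2))

wave = biphase_mark_encode('1101')
inverted = [x ^ 1 for x in wave]
print(biphase_mark_decode(wave), biphase_mark_decode(inverted))  # 1101 1101
```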





        Signal structure
                   The standard organises data into serial blocks. Each block contains 192
                   frames. Each frame contains two sub-frames, one for channel A and one
                   for channel B. Every sub-frame contains one audio sample. This is
                   shown in the following picture.




(Diagram: a block of 192 frames, each frame consisting of a channel A
sub-frame followed by a channel B sub-frame, each sub-frame carrying one
audio sample.)

Figure 108                                 AES/EBU digital audio signal structure


             Sync bits (4 bits)
                   These first four bits are used to define the beginning of a sub-frame.
                   They are different from normal data because they violate the rules of the
                   bi-phase mark channel coding system.

                   Normally in bi-phase mark there is always a transition at every clock
                   cycle, i.e. between one bit and the next. However, the sync patterns drop
                   two of these clock transitions at specific points, as shown in the picture.
                   There are three forms of syncs. Form X defines the start of sub-frame A,
                   form Y defines the start of sub-frame B, and form Z defines the start of
                   the block (which is also a sub-frame A).
                   Thus the receiver searches for these ‘illegal’ portions of the signal, and
                   decodes them to find out if it is a form X, Y or Z sync. From that it is able
                   to determine where it is in the AES/EBU signal structure.








(Diagram: the X, Y and Z sync forms, each omitting two bit-boundary
transitions at different points.)

Figure 109                                                 AES/EBU audio syncs


             Auxiliary bits (4 bits)
                  These four bits serve two main purposes. They can be used to extend
                  the twenty bits of audio sample to 24 bits to give greater resolution to the
                  audio samples.
                  Alternatively, they can be used to provide an extra channel of audio. By
                  combining three consecutive groups of auxiliary bits you can make an
                  extra audio channel with twelve-bit resolution at one third of the
                  sampling frequency.
                  Using the auxiliary bits for an extra audio channel for both sub-frames
                  gives a four channel audio transmission system with two high quality
                  channels and two low quality channels.

             Audio data (20 bits)
                  The next twenty bits are the audio sample itself. The LSB is just after the
                  last auxiliary bit and the MSB is just before the V flag bit.

             Flags (4 bits)
                  V flag - Validity - This flag indicates that the audio sample data is error
                  free.
                  U bit - User - This bit is not defined and can be used for any purpose.
                  C bit - Channel status - See the section below.
                  P flag - Parity - Parity for all bits in a sub-frame except the four sync bits.
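The sub-frame layout above can be made concrete with a short parser (a sketch with invented names; the sync pattern is treated as opaque here, since in the real signal it lives at the channel-coding level rather than in the data bits):

```python
def parse_subframe(bits):
    """Split a 32-bit AES/EBU sub-frame (a list of 0/1 values in the
    order transmitted) into its fields. Bits 0-3 are the sync pattern,
    bits 4-7 the auxiliary bits, bits 8-27 the audio sample (LSB first)
    and bits 28-31 the V, U, C and P flags."""
    assert len(bits) == 32
    fields = {
        "sync": bits[0:4],
        "aux": bits[4:8],
        "V": bits[28], "U": bits[29], "C": bits[30], "P": bits[31],
    }
    # LSB comes just after the last auxiliary bit, MSB just before V
    fields["sample"] = sum(b << i for i, b in enumerate(bits[8:28]))
    # P makes the parity of everything except the four sync bits even
    fields["parity_ok"] = sum(bits[4:32]) % 2 == 0
    return fields

# sample value 5, all other data bits zero, so P = 0 keeps parity even
frame = [1, 0, 0, 0] + [0] * 4 + [1, 0, 1] + [0] * 17 + [0] * 4
print(parse_subframe(frame)["sample"])   # → 5
```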

        Channel status bits
                  The third bit of the flags in each sub-frame is for channel status. Thus,
                  with 192 channel A sub-frames in a block there are 192 channel A status
                  bits. Channel B also has 192 channel status bits, one for each subframe
                  throughout the entire block.
                  The channel status bits define the type of audio being transmitted. The
                  table in the picture shows the allocation of the 192 channel status bits.
                  The same table can be used for either channel A or channel B.


223                                                   Sony Broadcast & Professional Europe
Part 20 – SDI (serial digital interface)




Figure 110                                                         AES/EBU channel status bits


             Channel status for Digital Betacam
                    Digital Betacam uses a very specific type of AES/EBU. Starting at the top
                    of the table, Digital Betacam uses the professional format of the channel
                    status block, thus bit 0 of byte 0 should be ‘1’.
                    Bits 2, 3 & 4 of byte 0 define whether the analogue signal from which the
                    digital signal was sampled was emphasised, so that de-emphasis can
                    occur when the signal is converted back to analogue.
                   Digital Betacam uses either no emphasis or CD emphasis. It does not
                   use CCITT emphasis.
                   Bits 6 & 7 of byte 0 define the sampling frequency of the AES/EBU audio
                   signal. This can be either 32, 44.1 or 48 kHz.
                   Digital Betacam uses a digital audio sampling frequency of 48 kHz only.
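The byte 0 fields described above can be decoded with a few shifts and masks (a sketch assuming LSB-first bit numbering within the byte, i.e. bit 0 is the least significant; the function name is ours, and the meaning of each code value comes from the AES/EBU table and is not reproduced here):

```python
def decode_byte0(byte0):
    """Pull the Digital Betacam-relevant fields out of channel status
    byte 0: bit 0 is the professional flag, bits 2, 3 & 4 the emphasis
    code and bits 6 & 7 the sampling frequency code."""
    return {
        "professional": (byte0 >> 0) & 1 == 1,
        "emphasis_code": (byte0 >> 2) & 0b111,   # bits 2, 3 & 4
        "fs_code": (byte0 >> 6) & 0b11,          # bits 6 & 7
    }

print(decode_byte0(0b01000001))
# → {'professional': True, 'emphasis_code': 0, 'fs_code': 1}
```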




                 Bits 0, 1, 2 & 3 define how the two channels are related to one another.
                 Two channel mode means the two channels are not related to one
                 another at all and are two independent channels. Single channel mode
                 means channel B is not used at all, i.e. even though the sync and flag
                 bits are as they should be, all the audio sample data bits are ‘0’.
                 Primary/secondary channel mode means that channel A is copied to
                 channel B to increase the chance that the signal will get through without
                 error. Stereophonic mode means the two channels are related and
                 should not be separated. Any processing applied to channel A should
                 also be applied to channel B. Channel A is taken as the left channel.
                 Digital Betacam uses two channel mode. If the user chooses to make
                 the two channels a stereo pair Digital Betacam will still treat them as two
                 independent channels.
                 The last useful part of the table is bits 0, 1 & 2 of byte 2. These define
                 the maximum sample length. The sample can be defined as 20 bits,
                 with the auxiliary bits simply not used and left as ‘0’. The sample can be
                 up to 24 bits, with the auxiliary bits being used to extend the normal 20
                 bits to 24 bits. The sample can also be 20 bits but with the auxiliary bits
                 used as the coordination channel.
                 Digital Betacam uses AES/EBU as 20 bits, with the auxiliary bits not
                 used.

            Relationship between the two channels
                 Both channels in AES/EBU audio must have the same sampling
                 frequency. This is a basic requirement.
                 If one channel is defined as a stereo pair channel then the other channel
                 must also be stereo. Furthermore, if the two channels are a stereo pair
                 every other aspect of the two channels must be the same, i.e. their
                 channel status bits must be identical.
                 If one channel is defined as a primary/secondary channel, the other
                 channel must also be, just as for stereo. However in this case the audio
                 data itself must also be identical.
                 If both channels are in two channel mode, other aspects of the channel
                 status bits may differ. For instance channel A may be emphasised and
                 channel B not. The sample length may also differ.

        AES/EBU audio bit rate
                 The final bit rate for AES/EBU audio depends on the original sample
                 frequency. Taking Digital Betacam's particular use of AES/EBU audio,
                 the sample frequency is 48 kHz, there are 32 bits in each sample sub-
                 frame and two channels. Thus the bit rate can be calculated from the
                 following simple equation :-
                                  48,000 x 32 x 2 = 3.072 Mbps
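The same calculation in code (variable names are ours):

```python
# AES/EBU bit rate as used by Digital Betacam
sample_rate_hz = 48_000       # 48 kHz sampling frequency
bits_per_subframe = 32        # 4 sync + 4 aux + 20 audio + 4 flags
channels = 2                  # sub-frames A and B

bit_rate = sample_rate_hz * bits_per_subframe * channels
print(bit_rate)               # → 3072000, i.e. 3.072 Mbps
```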







SDI
                   SDI (Serial Digital Interface) is a particular implementation of serial
                   digital video based on CCIR 601 and CCIR 656, which also incorporates
                   two channels of AES/EBU digital audio (four audio channels altogether).
                   SDI has become very popular within Sony broadcast, professional and
                   industrial equipment as a method of carrying video and audio from one
                   piece of equipment to another.
                    As shown before, CCIR 601/656 digitises the whole analogue video
                    signal. This not only includes the active part of the signal, i.e. the picture
                    itself, but also the vertical and horizontal blanking intervals as well. The
                    blanking part is always black and the digital data during this part of the
                    signal is always the same.
                   This represents a waste in terms of information density. The blanking
                   interval would be best used to contain useful information. This is where
                   embedded audio comes in.

        Embedded audio
                    Given the bit rate of AES/EBU audio as used in Digital Betacam,
                    3.072 Mbps, and the bit rate of serial digital video, 270 Mbps, there is
                    actually enough space during the horizontal blanking interval of a video
                    signal to pack nearly twenty channels of audio.
                    In fact SDI uses only a portion of the available space during the
                    horizontal blanking interval to embed two AES/EBU audio signals, i.e.
                    four audio channels.


Video index
                    The final addition to the SDI signal as far as Digital Betacam is
                    concerned is video index. This system replaces vertical interval
                    subcarrier (VISC) and colour frame identification (CFID) used in
                    Betacam SP.
                    Video index uses lines 11 and 324 in 625 line video (PAL). These two
                    lines are within the vertical interval of the video signal.
                   Bit 2 of each colour sample is used so that video index is still transmitted
                   even in 8 bit digital video.






Part 21                                              Video compression
                 The reason for compression is space, or lack of it. In a transmission
                 system like an aerial transmitter, satellite link or cable television link,
                 there is a constant fight to get as much use out of the link as possible.
                 Television companies want to put as many different television channels
                 into one link as they can.


Traditional analogue signals
                 In traditional analogue transmission systems there is a severe limit to the
                 number of channels that can be squeezed into each link. A traditional
                 analogue satellite link can take just one channel. It is much the same for
                 cable. Aerial transmission systems can take a few analogue channels
                 but each one takes up a lot of the aerial’s capacity.

       Compressing analogue signals
                 There are in fact various compression techniques used to compress
                 analogue signals and save bandwidth. The analogue colour difference
                 signals are compressed from the original (R-Y) and (B-Y) signals to U
                 and V signals before combining them with the luminance, Y, signal to
                 make a final composite signal. This is a form of compression.

            The problem of analogue compression
                 An analogue signal is always susceptible to change and interference by
                 cross talk, noise and other unwanted additions. In analogue
                 transmission you try to make the analogue signal big enough or different
                 enough from the noise or cross talk to make it easy to differentiate them
                 at the receiving end.
                 Analogue compression tends to push the original signal down into the
                 noise, making it more difficult to separate later on.


Analogue to digital conversion
                 Analogue to digital conversion involves converting the television signal
                 (video and audio) from a continuously changing signal to a signal
                 comprising a series of defined numerical values.
                 Converting a television signal (video and audio) into a digital signal does
                 not save any space in the transmission system, in fact digital signals
                 demand greater bandwidth than analogue signals.
                 As digital signals use an entirely different method of encoding they are
                 less susceptible to interference by the kind of unwanted additions that
                 plague analogue signals.


Compressing digital signals
                 Compressing digital signals involves replacing the digital information
                 with a smaller amount of different digital information. The essential point
                 is that the original data and the compressed data are both digital data,


227                                                 Sony Broadcast & Professional Europe
Part 21 – Video compression

                   and therefore just as resistant as each other to interference from
                   analogue noise.


Digital errors in transmission
                    Digital data is very resilient to the kind of interference that affects
                    analogue signals. However, if analogue noise is excessive it can alter
                    digital data as well.
                   If there is a break in the transmission path this can cause digital data to
                   be corrupted.


Compensating for digital errors
                   There are techniques for removing errors from transmitted digital data.
                   You can either use error correction techniques to replace the error data
                   with the original data, or, failing that, use concealment techniques to
                   replace the error data with data calculated to be similar to what the data
                   should have been.


The advantage of digital compression
                   Analogue compression tends to reduce the signal’s resolution and force
                   it into the noise. This reduces its quality.
                    With digital compression you can compress as much as you like. The
                    compressed data is still digital data, and is still not affected by analogue
                    noise.
                   As mentioned, excessive analogue noise does alter digital data.
                   However the amount of noise required is greater than with analogue
                   signals.
                   Both analogue signals and digital signals can be corrupted by
                   intermittent breaks in the transmission link. However there are
                   techniques for correcting or concealing errors in digital data.


Entropy and redundancy
                   Any signal, data, or multimedia material of any sort, may be divided into
                   two basic types, entropy and redundancy.

        Entropy
                    Entropy is another word for chaos. It is essentially unpredictable. In
                    many systems entropy is something bad, something to be eliminated.
                    Entropy destroys order and introduces uncertainty.
                   However in multimedia signals entropy is the information we want to
                   keep. It represents the interesting parts of the data or signal.

        Redundancy
                   Redundancy is something that can be dropped or eliminated. In
                   multimedia signals redundancy represents the parts of a signal that are
                   entirely predictable, and repetitious. If any part of a signal or data can be



                  predicted then it is unnecessary to include it in the signal or data at all. It
                  is, by definition, redundant.

        Entropy and redundancy in video signals
                  If we assume a full quality digital video signal is the kind of signal
                  specified by CCIR 601 and used on SDI links, the total data rate is
                  270 Mbps.
                 Thus digital video comprises a proportion of entropy and redundancy
                 totalling 270Mbps.
                 For the simplest possible video signal all of this 270Mbps is redundant
                 data. For the most complex all of it is entropy.
                 In practice video signals are never that simple or complex. However it is
                 possible to draw a very simple graph showing the proportion of entropy
                 and redundancy for video signals of all different types, from the simplest
                 to the most complex.




                 In reality video signals never become so complex that they comprise
                 entirely of entropy. There is always some redundancy somewhere in the
                 signal. It may be spatial redundancy, because each frame comprises
                 very little detail, or it may be temporal redundancy where each frame is
                 similar to the last.

       Considering a 2:1 compression ratio
                  Imagine you wanted to compress the video signal to half of its original
                  size, i.e. a 2:1 compression ratio. If the signal were reasonably simple it
                  would be easy to reduce it by the required amount by removing a
                  proportion of the redundancy. In many cases you may not even need to
                  remove all the redundancy.


                    However, if the video signal becomes particularly complex, the amount
                    of entropy may exceed half of the original signal. In this case you would
                    have to throw away some of the entropy to reduce the signal by the
                    amount required.
                   The key point is that there is no loss except for the most complex video
                   signal. In many cases the video signal can be compressed and
                   decompressed with no loss of details at all.

        Considering a 5:1 compression ratio
                    Now imagine you want to compress the same video signal at a 5:1
                    compression ratio, i.e. to a fifth of its original size.
                   In this case the video signal needs to be that much simpler for there to
                   be no loss in compression. For anything but the most simple video
                   signals you would need to throw away some entropy in order to achieve
                   the required compression ratio.


The purpose of any compression scheme
                   The sole purpose of any compression scheme is to separate entropy
                   from redundancy, keep the entropy part and throw away the redundancy
                   part.
                   If the compression scheme is poor it will not be able to find all the
                   redundancy in the video signal and will not be able to compress the
                   signal enough without needlessly throwing away entropy.
                   A good compression scheme will be able to use a selection of tools and
                   techniques to find enough redundancy so that it is able to achieve the
                   required compression ratio without having to throw away any entropy.


Lossless and lossy compression
                   There are two basic types of compression systems used, lossless
                   compression and lossy compression. Both have advantages and
                   disadvantages, and are appropriate to the kind of data or signals they
                   are compressing.

        Lossless compression
                   As the name suggests, lossless compression is a scheme where there is
                   no loss in the compression / decompression path. After decompression
                   you end up with exactly the same data or information as you started with
                   before you compressed.
                   Lossless compression is vital for computer data compression. If you are
                   compressing an executable, or peripheral driver file, you cannot accept
                   any loss during compression at all. If just one bit is wrong after
                   decompression the compression system has failed.
                   Lossless compression will remove as much redundancy as possible, but
                   will not remove any of the entropy.
                   The disadvantage of lossless compression is that you cannot specify a
                   compression ratio. It will depend on the kind of information that needs to



                 be compressed. A simple 1Mb file, like a bitmap image of a snowy scene
                 where most of the image is white, will compress much more than a
                 complex 1Mb file, like an executable file.

            Examples of lossless compression schemes
                 A perfect example of a lossless compression scheme is PKZip or
                 WinZip. These programs are designed to compress computer files by as
                 much as possible without loss.
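Python's zlib module, which uses the same DEFLATE family of algorithms as PKZip and WinZip, demonstrates both the losslessness and the unpredictable compression ratio (a sketch echoing the 1 Mb examples above; random bytes stand in for the "complex file"):

```python
import random
import zlib

# A 1 Mb "snowy scene": almost every byte identical - nearly all redundancy
snowy = b"\xff" * 1_000_000

# A complex 1 Mb file: pseudo-random bytes - nearly all entropy
random.seed(0)
complex_data = bytes(random.getrandbits(8) for _ in range(1_000_000))

snowy_z = zlib.compress(snowy)
complex_z = zlib.compress(complex_data)

print(len(snowy_z), len(complex_z))   # the snowy data shrinks massively,
                                      # the random data barely at all

# lossless: decompression recovers exactly the original bytes
assert zlib.decompress(snowy_z) == snowy
assert zlib.decompress(complex_z) == complex_data
```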

       Lossy compression
                 The advantage of lossy compression is that you may specify the
                 compression ratio. This is vital in multimedia transmission systems,
                 satellite and cable television links, digital video and audio tape and disk
                 storage systems.

            Examples of lossy compression schemes
                  The most popular lossy compression schemes for video signals are
                  MPEG and DV. For audio they are MP3 and ATRAC.
                 There are others. For video tape there is Digital Betacam, Betacam SX,
                 IMX, and derivatives of DV like DVCAM and DVC Pro. For computers
                 there are higher compression ratio schemes like .AVI and Real Motion.
                  There are other compression schemes intended for still images, like the
                  JPEG format; GIF and TIFF, by contrast, normally use lossless
                  compression.


Inter-frame and Intra-frame
                 For video there are two basic methods of compression, inter-frame and
                 intra-frame compression.

       Inter-frame compression
                 Inter-frame compression looks at the difference between frames. It is not
                 actually a compression scheme at all, but a way of processing the video
                 signal before compression takes place in order to achieve more efficient
                 compression.
                  There are three types of inter-frame, the P (predicted) frame, the B
                  (between) frame and the R (reverse) frame.

            The P frame
                 The P frame is a frame derived from a comparison between the frame in
                 question and the previous frame.
                 In most cases the difference between a frame and the previous frame is
                 small.

            The B frame
                 The B frame is a frame derived from a comparison between the frame in
                 question and the average of the previous frame and the following frame.




              R frame
                    The R frame is a frame derived from a comparison between the frame in
                    question and the next frame.
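The three frame types can be sketched with toy numbers (hypothetical four-pixel frames; real systems work on blocks of a full-size picture and add motion compensation):

```python
# Toy frames: each frame is a short list of pixel values
prev_frame = [10, 10, 10, 10]
this_frame = [10, 10, 12, 10]
next_frame = [10, 10, 14, 10]

# P frame: difference from the previous frame
p = [t - q for t, q in zip(this_frame, prev_frame)]
# R frame: difference from the next frame
r = [t - n for t, n in zip(this_frame, next_frame)]
# B frame: difference from the average of previous and next frames
b = [t - (q + n) / 2 for t, q, n in zip(this_frame, prev_frame, next_frame)]

print(p)   # → [0, 0, 2, 0]   mostly zeros, so easy to compress
print(b)   # → [0.0, 0.0, 0.0, 0.0]
```

The difference frames are mostly zeros, which is exactly the kind of redundancy the subsequent intra-frame compression can exploit.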

        Intra-frame compression
                   Intra-frame compression is a scheme for reducing the amount of data in
                   one video frame. The data is reorganised in order to separate
                   redundancy from entropy and the redundancy is discarded.

             The toolbox
                   Intra-frame compression uses a series of tools. This set of tools is
                    sometimes referred to as a toolbox. The tools that are generally used
                    are DCT (discrete cosine transform), quantisation, zig-zag scanning, run
                    length coding, variable length coding and data buffering.


What is DCT?
                   DCT (discrete cosine transform) is a method of describing data as a
                   discrete weighted set of cosines. Put simply the original data is
                   described not as its original samples or data words, but as how these
                   samples change.


The church organ
                   In understanding DCT a good place to start is to look at the church
                   organ.
                    Organ pipes produce a nice clean tone. The note they produce is pretty
                    close to a pure sine wave. Organists add the sound from other pipes to
                    the note they are actually playing to produce different tones. Organists
                    call these stops.

        Turning the church organ upside down
                   It should be possible to analyse any of the many different sounds and
                   tones coming from a church organ and work out exactly which stops the
                   organist had pulled out.

        The scientific equivalent
                   In fact it is possible to do this with any tone. You can break down any
                   note from any musical instrument into a series of sine waves. The lowest
                   frequency sine is the note itself, often called the fundamental. The rest
                   are all higher frequencies, normally multiples of the fundamental,
                   generally called the harmonics.
                   Taking things one stage further, it is actually possible to break any
                   repetitive or periodic signal into a series of sine waves or cosine waves,
                   or, to be more exact, a series of sine and cosine waves with associated
                   phases.
                   This process is commonly referred to as a Fourier analysis or a Fourier
                   transform.




The Fourier transform
                 A Fourier transform is a method of describing any periodic signal as a
                 series of sines and cosines.
                 When you do a Fourier transform of pure sine wave you get a single
                 spike at the frequency of the wave.
                  A violin creates something approaching a sawtooth waveform. A
                  sawtooth is made up from the fundamental and all the harmonics. The
                  amplitude of the harmonics falls in proportion to 1/n as the frequency
                  increases.
                 A classic rich organ sound approaches a square wave. This wave is
                 made up from the odd harmonics of the fundamental. The amplitude of
                 these harmonics falls with increased frequency in the same way as it
                 does for the sawtooth wave.
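Building a square wave from its odd harmonics can be demonstrated directly (a sketch; the function name is ours, and the 4/π scale factor comes from the standard Fourier series of a ±1 square wave):

```python
import math

def square_from_harmonics(t, fundamental_hz, n_harmonics):
    """Sum the fundamental and its odd harmonics, each with amplitude
    1/n - the Fourier series of a square wave, scaled by 4/pi so the
    ideal result swings between -1 and +1."""
    total = 0.0
    for n in range(1, 2 * n_harmonics, 2):   # n = 1, 3, 5, ...
        total += math.sin(2 * math.pi * n * fundamental_hz * t) / n
    return 4 / math.pi * total

# a quarter of the way through one cycle the wave should sit at +1
print(round(square_from_harmonics(0.25, 1.0, 500), 2))   # → 1.0
```

With only a handful of harmonics the result visibly ripples; adding more harmonics flattens the tops towards the ideal square shape.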





                    In both the pure sawtooth and square waveforms there are an infinite
                    number of harmonics. In practice this is impossible to reproduce; the
                    high frequency harmonics will always be lost, no matter how small the
                    loss. Thus it is also impossible to create an absolutely perfect sawtooth
                    or square wave.




        The mathematics of Fourier transforms
                    The mathematical formula used for Fourier transforms looks hideous
                    but is actually quite simple. The basic Fourier transform expression is :-


                                 F(s) = ∫ f(t) exp(−j2πst) dt        (integral taken from t = −∞ to ∞)
                   Where F(s) is the Fourier transform, f(t) is the original signal. The “exp”
                   is a neat mathematical shorthand for a particular addition of a sine and
                   cosine, which goes like this :-

                                 exp(−j2πst) = cos(2πst) − j sin(2πst)

                   The reason for this is that although a signal is made up from a number of
                   pure sine waves, the amplitude and phase of each one may be different.
                   The expression above is a way of describing a sine wave at any
                   amplitude and phase by describing it in terms of a sine and cosine in the
                   real and imaginary plane.



                 The integral is simply an infinite sum of all the “f(t) exp” for every point in
                 time (t) from the beginning of time (-∞) to the end of time (∞).

       The pros and cons of Fourier transforms
                 Fourier transforms work well for continuous periodic signals, and are
                 therefore very useful for things like analysing music, signals from deep
                 space, or vibration analysis on racing cars or aircraft.
                 However they do not work well for digital signals where there are a
                 series of discrete samples in time. For this we need a discrete version of
                 the Fourier transform.


The Discrete Fourier Transform (DFT)
                 DFT is a special form of Fourier transform for periodic signals made up
                 from discrete values, i.e. digital signals.
                  With analogue signals we use normal Fourier transforms, which use
                  integrals. This is because analogue signals are continuous, and any
                  summing analysis like a Fourier transform needs to imagine the sum
                  being made up from an infinite number of infinitely small time periods all
                  along the analogue signal we are studying.
                  We are no longer interested in taking infinitely small points in time. The
                  samples now occur at specific points in time related to the sample rate,
                  or period. Therefore the integral in a normal Fourier transform can be
                  replaced by a simpler summing function ‘∑’.
                 A few other changes will take place. Rather than talking about a
                 continuous signal we are now looking at discrete samples. Thus let us
                 replace f(t) with f(k). Likewise the continuous Fourier function, F(s), will
                 be replaced by a discrete one, F(r).
                 The normal integral based Fourier transform changes in the case of the
                 DFT to :-
                                 F(r) = ∑ f(k) exp(−j2πrk)        (sum taken from k = −∞ to ∞)



        Taking a set number of samples
                 We have managed to get rid of the integral and replace it with something
                  more applicable to samples. However we are still stuck with this concept
                  of the signal stretching from the beginning of time to the end of time, i.e.
                  −∞ to ∞.
                 We can adjust the expression so that we can find the DFT of a set
                 number of samples. To do this we imagine the DFT will repeat the
                 samples we are interested in again and again, and take the sum across
                 just the samples we want. We do not want to decide how many at the
                 moment so we will give the quantity of samples a letter. N would be
                 good.
                 This means that the DFT will change to :-






                                 F(r) = (1/N) ∑ f(k) exp(−j2πrk/N)        (sum taken from k = 0 to N−1)




                   It is conventional that the first sample is “0” and the last is (N-1). There is
                   no particular reason for this other than it makes the equations a little
                   simpler, but it means that if, for instance, you have 8 samples in your
                   selection “k” will go from “0” to “7”.
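The N-sample DFT can be implemented directly in Python (a naive O(N²) sketch for illustration; real systems use the FFT). A pure cosine at the fundamental frequency of an 8-sample block produces a single pair of spikes, at r = 1 and its mirror r = 7:

```python
import cmath
import math

def dft(samples):
    """Naive DFT over N samples:
    F(r) = (1/N) * sum over k of f(k) * exp(-j * 2*pi * r * k / N)."""
    N = len(samples)
    return [sum(f * cmath.exp(-2j * cmath.pi * r * k / N)
                for k, f in enumerate(samples)) / N
            for r in range(N)]

N = 8
samples = [math.cos(2 * math.pi * k / N) for k in range(N)]
spectrum = dft(samples)

print([round(abs(c), 3) for c in spectrum])
# → [0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5]
```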

        The judder problem of DFT
                   There is a problem with DFT over a set number of samples. We are
                   imagining the samples repeating again and again. However there is no
                   guarantee that the last sample will be anywhere near the same value as
                   the first sample.
                   This will produce a judder in the signals as it repeats. This judder is
                   energy and thus has its own harmonics that are nothing to do with the
                   original samples.








Discrete Cosine Transform (DCT) solution to judder
                 One neat way of removing the sudden jump between the last sample
                 and the first sample is to take twice as many samples, with half of them
                 being a mirror image.

       Mirrored sines and cosines
                 An interesting thing happens to sines and cosines when they are
                 reflected about zero. The sines cancel out!
                 Put mathematically sin(-x) = -sin(x) and cos(-x) = cos(x)
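A tiny numerical check of this cancellation (the particular mix of frequencies here is an arbitrary example of ours):

```python
import math

def f(x):
    # An arbitrary mixture of one sine and one cosine component.
    return 3 * math.sin(2 * x) + 2 * math.cos(5 * x)

# Adding the mirror image f(-x) cancels the sine and doubles the cosine:
# f(x) + f(-x) = 4*cos(5x), whatever x is.
for x in (0.1, 0.7, 2.3):
    assert abs(f(x) + f(-x) - 4 * math.cos(5 * x)) < 1e-9
```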





                   This can help to reduce the complexity of the DFT. We need only
                   consider the cosine parts of the transform.
                   Thus by taking twice the number of samples and considering only the
                   cosines the original expression drops to :-


                        F(r) = √(2/N) ∑[k=0 to N−1] f(k) cos[(2k+1)rπ / 2N]
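A sketch of this expression in Python (we have assumed the √(2/N) scaling; other DCT scaling conventions exist). A flat set of samples contains no variation, so every coefficient other than F(0) should come out as zero:

```python
import math

def dct(samples):
    """1-D DCT as in the text: F(r) = sqrt(2/N) * sum of
    f(k) * cos((2k+1)*r*pi / 2N) over k = 0 to N-1."""
    N = len(samples)
    return [math.sqrt(2 / N) *
            sum(f * math.cos((2 * k + 1) * r * math.pi / (2 * N))
                for k, f in enumerate(samples))
            for r in range(N)]

# Eight identical samples: only the first coefficient survives.
F = dct([9.0] * 8)
```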
What does the result of DCT look like?
                   The original Fourier transform shows the frequencies that make up the
                   original signal. DCT does the same thing. The samples are replaced by
                   numbers that describe frequencies that make up the original sample
                   data. These numbers are called coefficients.
                   The samples are discrete, so the number of frequencies is also discrete.
                   In fact there are as many DCT coefficients as there are samples. If you
                   take more samples in your analysis, you get more coefficients.


                   The formula also places the lowest frequency coefficient first, in place of
                   the first sample. Remember that we are imagining that the set of
                   samples we are doing a DCT of is repeated again and again, because
                   DCT is based on Fourier transforms which only work on periodic signals
                   that go on for ever. The odd conclusion from this is that the lowest
                   frequency coefficient is actually the average level of the samples,
                   sometimes called the DC coefficient. It is the second coefficient that is
                   the same as the fundamental frequency we looked at with church organs
                   right at the beginning.


DCT in video
                   Imagine a video picture made up of individual pixels. The pixels are laid
                   out in rows and columns.
                   As we have seen DCT operates on a group of samples. It cannot
                   operate on just one sample, because it is analysing how the samples are
                   changing, and what frequency coefficients the samples are made from.


                 The same thing applies to DCT as it is used in video. We need to
                 analyse a group of pixels in both the horizontal (x) and vertical (y)
                 directions.

       Simple 2 by 2 DCT block
                 The smallest group of pixels DCT can work with is a 2 by 2 block. So let
                 us look at how this might work.
                 The whole picture is now split into 2 by 2 blocks. DCT will then remove
                 each block and replace it with 4 coefficients that describe how the
                 original pixels vary across the block.
                 The top left corner of the block will now hold the DC coefficient. The top
                 right pixel is replaced by a number describing how all 4 pixels are
                 changing in the horizontal direction. This number is called the horizontal
                 AC coefficient.
                 The bottom left pixel is replaced by a number describing how all 4 pixels
                 are changing in the vertical direction. The principles are the same as the
                 top right pixel, but for the vertical direction. This number is called the
                 vertical AC coefficient.




                 The bottom right pixel is replaced by a number describing how the 4
                 pixels are changing at a 45 degree angle from top left to bottom right.
                 This is called the diagonal AC coefficient.
                 Thus the 4 DCT coefficients describe the original pixels in 2 dimensions
                 in much the same way as a simple DCT operates on simple 1
                 dimensional samples. There is nothing lost in making this transform.
                 Describing the pixels in terms of their frequency coefficients, i.e how
                 they vary, is just as accurate a method as describing them as individual
                 pixels.
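Ignoring the exact scaling factors, the four coefficients of a 2 by 2 block boil down to sums and differences of the four pixels. A sketch (the function name and the particular scaling are our own):

```python
def dct_2x2(a, b, c, d):
    """Coefficients of a 2x2 pixel block [[a, b], [c, d]], up to scaling:
    DC, horizontal AC, vertical AC and diagonal AC."""
    dc         = (a + b + c + d) / 2   # average level of the block (times 2)
    horizontal = (a - b + c - d) / 2   # left half versus right half
    vertical   = (a + b - c - d) / 2   # top half versus bottom half
    diagonal   = (a - b - c + d) / 2   # top-left versus bottom-right
    return dc, horizontal, vertical, diagonal

# A flat 2x2 block has only a DC term...
assert dct_2x2(5, 5, 5, 5) == (10, 0, 0, 0)
# ...while a left/right split shows up purely in the horizontal coefficient.
assert dct_2x2(8, 2, 8, 2) == (10, 6, 0, 0)
```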




        A practical 8 by 8 DCT block
                   In virtually all practical compression schemes the simple 2 by 2 DCT
                   block is too small. There are benefits of selecting a larger block size.
                   The practical size most commonly used is an 8 by 8 DCT block,
                   transformed from 64 original pixels.
                   The basic principles are the same. The top left corner of the DCT block
                   is the DC coefficient, in the same basic way as it was in the simple 2 by
                   2 block.
                   The 8 coefficients along the top describe how the original 64 pixels vary
                   in the horizontal direction, in the same way as the top right coefficient did
                   in the simple 2 by 2 block. However there are now 8 possible
                   coefficients. Thus the left most of these coefficients describes the overall
                   low frequency horizontal variation of the 64 pixels. This is the
                   fundamental in the 1 dimensional DCT we looked at before.
                   The next coefficient to the right corresponds to the next highest
                   frequency change, and so on until the top right most coefficient
                   describes the highest frequency horizontal change in the original 64
                   pixels. These are the same as the harmonics in the 1 dimensional DCT.
                   The same principle applies to the coefficients down the left side for the
                   vertical direction, and likewise for the coefficients along the diagonal of
                   the block from top left to bottom right for the 45 degree diagonal
                   direction.
                   However, with 64 original pixels there is now opportunity to provide
                   coefficients that describe how the original 64 pixels are varying at other
                   angles between vertical, 45 degrees and horizontal.








                 Thus the 8 by 8 group of DCT coefficients now describes a wide range
                 of frequency changes and angles. In fact the DCT block exactly
                 describes the original pixels, but in a different way.


The mathematics of DCT as used for video
                 The mathematical description of DCT so far has been 1 dimensional.
                 The expression we eventually concluded is :-

                        F(r) = √(2/N) ∑[k=0 to N−1] f(k) cos[(2k+1)rπ / 2N]



                      Now we need to replace the one dimensional DCT F(r) part with a two
                      dimensional part, F(u,v). The f(k) will be replaced by a two dimensional
                      f(x,y).
                      The cos part of the expression now needs to be done in two dimensions
                      as well, once for the x direction and once for the y direction, to create
                      the u direction and v direction in the final DCT respectively. We therefore
                      have :-


        F(u,v) = (2/√(NM)) ∑[x=0 to N−1] ∑[y=0 to M−1] f(x,y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2M]
                      Where the original pixel block is N pixels wide by M pixels high.
                      This is not exactly correct. In practice we need a ‘fiddle factor’ for all the
                      coefficients across the top row or along the left column, i.e. when either
                      u=0 or v=0. This factor is 1/√2. Thus the two dimensional DCT as used in
                      video is :-

    F(u,v) = Cu Cv (2/√(NM)) ∑[x=0 to N−1] ∑[y=0 to M−1] f(x,y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2M]

                      Where :-

                               Cu = 1/√2 for u = 0, and Cu = 1 for u = 1 to N−1

                               Cv = 1/√2 for v = 0, and Cv = 1 for v = 1 to M−1

                      This expression looks pretty hideous, but you can see the various
                      elements in it, and where they come from. It is also worth bearing in
                      mind that it may look a lot simpler if you know more about the size of the
                      group of pixels you are looking at.
                      For instance, if the group is always square, we could say that the group
                      is N pixels by N pixels and eliminate the M and the annoying square
                      roots at the beginning. Thus :-

     F(u,v) = Cu Cv (2/N) ∑[x=0 to N−1] ∑[y=0 to N−1] f(x,y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

                      If we also say that the pixel block is going to be 8 pixels by 8 pixels,
                      which it is for JPEG and all MPEG compression schemes, the
                      expression becomes even simpler, thus :-

        F(u,v) = Cu Cv (1/4) ∑[x=0 to 7] ∑[y=0 to 7] f(x,y) cos[(2x+1)uπ / 16] cos[(2y+1)vπ / 16]
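The final 8 by 8 expression can be coded directly. A sketch in Python (function name ours); for a flat mid-grey block of 128s everything should land in the DC coefficient:

```python
import math

def dct_8x8(block):
    """Two dimensional DCT of an 8x8 pixel block, coded directly from
    F(u,v) = Cu*Cv*(1/4) * sum sum f(x,y)*cos((2x+1)u*pi/16)*cos((2y+1)v*pi/16)."""
    def c(i):
        # The 'fiddle factor': 1/sqrt(2) on the u=0 or v=0 coefficients.
        return 1 / math.sqrt(2) if i == 0 else 1.0
    F = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[y][x] *
                    math.cos((2 * x + 1) * u * math.pi / 16) *
                    math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            F[u][v] = c(u) * c(v) * s / 4
    return F

# A flat mid-grey block: all the energy ends up in the DC coefficient.
F = dct_8x8([[128] * 8 for _ in range(8)])
```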




DCT in audio
                 Right at the beginning we looked at how audio signals can be broken
                 down into a fundamental and a group of harmonics.
                 DCT is very appropriate to breaking down digital audio signals. Audio is
                 a one dimensional stream of data, and there is no need to use the kind
                 of two dimensional DCT we use for video signals.
                 MPEG uses DCT in compressing audio signals. Modern MP3 players are
                 based on MPEG audio coding, which in turn uses DCT.


Basis pictures
                 Machines often do not go through the tedium of calculating DCT values
                 from scratch. They use pre-calculated values in a kind of look-up table.
                 These look up tables are called basis pictures. There are as many basis
                 pictures as there are samples. And each basis picture contains as many
                 numbers as there are samples.
                 Thus for an 8 by 8 group of video pixels there are 4096 basis picture
                 numbers.
                 Machines use simple matrix multiplication to perform DCT using basis
                 pictures.
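That matrix idea can be sketched in pure Python (standing in for dedicated hardware): the 8 by 8 basis matrix is pre-computed once, and the two dimensional DCT then becomes two matrix multiplications, F = A · f · Aᵀ. The helper names are our own:

```python
import math

N = 8

# Basis matrix: row u holds the u-th cosine pattern, with the 1/sqrt(2)
# factor on the DC row.  Its 8x8 = 64 entries, one set per basis picture,
# account for the 4096 pre-computed numbers mentioned in the text.
A = [[(1 / math.sqrt(2) if u == 0 else 1.0) * math.sqrt(2 / N) *
      math.cos((2 * x + 1) * u * math.pi / (2 * N))
      for x in range(N)] for u in range(N)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def transpose(P):
    return [list(row) for row in zip(*P)]

def dct_by_matrix(f):
    # 2-D DCT as two matrix multiplications: F = A * f * A^T.
    return matmul(matmul(A, f), transpose(A))

F = dct_by_matrix([[128] * 8 for _ in range(8)])
```

For the flat block of 128s this gives the same result as evaluating the cosine formula directly: a DC coefficient of 1024 and zeros everywhere else.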


Why bother?
                 DCT is used to rearrange the pixels in a video picture into frequency
                 coefficients because most video pictures have their energy in the low
                 frequency and DC coefficients.





                   Therefore if you do a DCT of a complete video frame you will invariably
                   find all the big numbers in the top left corner. The bottom right corner
                   tends to be full of zeros.
                   Looking at it another way, DCT allows us to separate the video frame’s
                   entropy and redundancy, with all the entropy in the top left corner of
                   each DCT block and the redundancy in the bottom right corner.
                   This helps in the variable length coding part of a compression system to
                   reduce the number of bits required.
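This energy concentration is easy to demonstrate in one dimension. Taking the DCT of a smooth ramp of samples (here with the orthonormal scaling, i.e. including the 1/√2 factor on the DC term), nearly all of the energy lands in the first two coefficients:

```python
import math

def dct(samples):
    """1-D DCT with orthonormal scaling (1/sqrt(2) on the DC term)."""
    N = len(samples)
    return [(1 / math.sqrt(2) if r == 0 else 1.0) * math.sqrt(2 / N) *
            sum(f * math.cos((2 * k + 1) * r * math.pi / (2 * N))
                for k, f in enumerate(samples))
            for r in range(N)]

# A smooth ramp of sample values, typical of gently varying picture content.
F = dct([float(k) for k in range(8)])

# Compare the energy in the first two coefficients with the rest.
low = sum(x * x for x in F[:2])
high = sum(x * x for x in F[2:])
```

With orthonormal scaling the total energy is preserved, so `low` plus `high` equals the energy of the original samples; almost all of it sits in `low`.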


                   Huffman codes are a particular type of variable length code. Huffman
                   codes are particularly efficient at reducing the number of bits required to
                   describe data.
                   While the Huffman coding principle is almost certainly not the best
                   variable length coding system for every eventuality, and more efficient
                   coding systems may well appear in the future, it remains the most
                   common variable length coding system for data compression.


Huffman’s three step process
                   Huffman coding is performed in three steps. The first is to analyse the
                   probability of the original data. The second is to use this analysis to
                   generate a series of Huffman codes. The third is to use these codes to
                   reduce the original data.

        Step 1 - Analysis
                   In the analysis step, a probability is assigned to each possible number in
                   the original data, depending on the chance of it occurring.
                   Let us assume we want to reduce the text on the following page.
                   There are 7462 characters including spaces, and the quantity of letters,
                   numbers and symbols is as follows :-


                    Letter    Quantity     Letter    Quantity    Letter    Quantity
                         A      557          Q          5          7           1
                         B      114          R         377         8           0
                         C      167          S         367         9           1
                         D      266          T         520         0           4
                         E      653          U         144          (         11
                         F      150          V          54          )         11
                         G      142          W         125          .         43
                         H      287          X          3           ,         69
                         I      382          Y         124          ‘         44
                         J        3          Z          3           !          1




                    Letter    Quantity     Letter    Quantity    Letter    Quantity
                         K       67          1          0          ?          2
                         L      275          2          2          @          1
                         M      133          3          0          -         29
                         N      422          4          0        Space     1336
                         O      473          5          1          &          0
                         P       94          6          0          %          0





   F9or many who live in Argyll, occasional visits to Glasgow are a part of life, especially if you live in the nearer mainland parts of
   Argyll as I do. Unless you've had to head there in its rush hour, the city is less than an hour and a half from Strachur by the 'Rest
   and be Thankful' pass and Loch Lomond-side. (In passing, why Rest and be Thankful? The original military road is steepest of all
   near its very top and my guess is that your horse did the resting and you did the being thankful - that the poor animal hadn't
   collapsed with the effort and let your cart, carriage or whatever run back over the edge - but I'd be interested in other versions,
   especially the correct one if different from this. Doubly welcome if with sources).
   But back to Glasgow. Why do we go? is a question I've often asked myself when traipsing around wet streets and sticky shops. I
   suppose there are almost as many answers as visits, but apart, obviously, from the much greater number and variety of shops than
   Argyll (population 90 000 or so including all its towns) can offer, there are also all sorts of entertainments and cultural and
   commercial reasons for making the effort. Despite too much continuing poverty and some areas of awful housing, it's also a very
   attractive place that's had a long, hard time getting past the 'no mean city' image in the minds of outsiders, amongst whom I have
   to count myself. To-day we were mainly going to take Ewan out for a meal in the evening, but shops and a film were to be added
   in.
   At least as good as the shops and films, as far as I was concerned, would be the chance to explore another corner of the city on
   foot and, since we had Tess the dog with us, I had a good excuse. Despite forecast warnings of blizzards on northern hills, they
   certainly didn't apply here and now.
   I've sometimes heard Glasgow referred to, tongue-in-cheek, as the 'dear green place' and I'd be at least as keen to know the
   derivation of this one as of the 'Rest and be Thankful'. I do know that you can find yourself in several places within the city
   boundaries where that title just doesn't sound ridiculous at all and I was trying to find myself a new one of these to-day. I was
   heading for the north-west of the city, a little beyond Anniesland and between the road to Bearsden and Maryhill Road, home of
   Partick Thistle FC. In this little corner, with the aid of a partially-remembered scrutiny of the A to Z, I reckoned to put together a
   fairly peaceful 2 mile triangle from the end of 'Switchback Road': one side being the towpath of the Forth and Clyde Canal, a
   second the banks of the River Kelvin (a tiny part of the 'Kelvin Way') and the third a crossing of Dawsholm Park and so it turned
   out (the fact that the canal had been recently drained on this stretch was a bit of a surprise - but it's too early in the year for the
   mud to be smelly and, as it's being restored (hooray!), it'll be full again in time).
   I started near Lock 27, where the new-looking canalside pub of the same name had towpath tables and outdoor drinkers to go with
   them - not bad, pre-Easter. An AA roadsign nearby indicated a microbrewery, which might repay following up (memo to non-
   Brits, AA can be 'Automobile Association', as well as Alcoholics Anonymous). Having hardly started, I put aside thoughts of a
   pint of real ale and headed east for a couple of enormous gasometers that weren't yet industrial archaeology but, like the mud,
   were smell free. Just as well.
   A little further on, where a road crossed the canal, I came across the first sign of canal restoration work in the form of a brand new
   bridge bearing the carved legend 'Forth and Clyde Canal' and a relief of what I think must be the giant new wheel arrangement
   near Falkirk. When completed, this will lift boats from the Forth and Clyde to the Union Canal so that they can sail on to
   Edinburgh for the first time in decades. A man in the contractor's hut nearby reckoned that the work in this drained Glasgow
   section would be ready in a handful of months, but that the restoration of the whole canal would take a couple more years. Time
   enough yet to book your canalboat holiday across central Scotland.
   Continuing on, with only a few people and a pair of mute swans for company (having passed the temporary earth dam holding
   back the water from the canal's dry section) and a mile, now, from my starting point, I came all of a sudden on one of the canal's
   biggest engineering achievements - the aqueduct spanning the steep-sided den of the River Kelvin Beyond the aqueduct, a flight of
   locks rose to cross Maryhill Road. Between the locks seven pairs of brand-new, heavy wooden lock gates lay stacked around a
   small basin waiting to be installed. The flight was further than I wanted to go, but I paused briefly on the aqueduct to admire a pair
   of cormorants sitting somewhat grandly on top of an abandoned sandstone pier that once carried a railway across the Kelvin.
   Bereft of its railway track and all connection with either bank, the pier looked for all the world like a sea-stack lost in the middle
   of this most urban of rivers and the cormorants, therefore, seemed oddly 'at home'.
   'Most urban' isn't very fair. Descending to the far bank of the Kelvin took me down through some fine broadleaved woods to a
   quiet riverside and turning to go upstream soon revealed as fine a patch of primroses as I've seen this spring. I passed no-one at all
   by the river (5 ish on a Saturday afternoon), though beyond a road bridge there was soon more nearby housing. In fact it was all
   green and pleasant to the next railway bridge (West Highland line) and green beyond, past low sandstone cliffs, to a low-level road
   bridge where it's possible to cross back to the right bank again and search west for a way into Dawsholm Park.
   Crossing this bridge brought great news in the form of a man fishing. Added to the cormorants' presumed need for food it was
   becoming clear that the Kelvin may be urban, but certainly isn't dead. The fisherman said it was greatly improved, claiming trout
   and sea-trout for it, and even suggested Partick Mill (downstream) as a place to go between September and November to watch the
   salmon passing over the weir. I think I shall. I heard recently that, down in England, the River Tame, which rises in the
   ominously-named 'Black Country' before flowing through Birmingham and which I remember, from more than thirty years ago, as
   possibly the dirtiest and deadest in the whole of industrial Britain, now also sees anglers on its banks. Without wanting to be
   Pollyanna-ish, not all environmental news is black.
   Leaving the angler to his rod, and wondering idly about the less-contemplative youth suggested by the scars on each of his cheeks,
   I headed through some scrub by guesswork, crossed a group of all-weather football pitches and found the gates of Dawsholm
   Park, again by guesswork. By more guesswork I climbed a pine-wooded ridge to be all of a sudden re-oriented by the middle-
   distant gasworks and then by a grand view along the length of the industrial Clyde through Glasgow and out to Clydebank with its
   shipyard cranes. Sure of my position again, a balcony of a path took me along a drumlin-crest above a pasture containing four very
   hairy Highland Cattle (Glasgow is full of surprises) to a place where I could slip down easily to my car. Too late to seek out the
   microbrewery, but another day.
   It really is a dear green place if you look.
   John Fisher
   aboutargyll@compuserve.com





                 Now the letters are rearranged in order of the number of times they
                 occur in the text. The actual quantity is expressed as a probability with
                 regard to the total number of characters, 7462 :-


                   Letter     Prob.       Letter     Prob.       Letter      Prob.
                     1          0           Z      0.000402        G        0.01903
                     3          0           0      0.000536        U        0.01930
                     4          0           Q      0.000670        F        0.02010
                     6          0           (      0.001474        C        0.02238
                     8          0           )      0.001474        D        0.03565
                     &          0           -      0.003886        L        0.03685
                    %           0           .      0.005762        H        0.03846
                     5      0.000134        ‘      0.005896        S        0.04918
                     7      0.000134        V      0.007237        R        0.05052
                     9      0.000134        K      0.008979        I        0.05119
                     !      0.000134        ,      0.009247        N        0.05655
                    @       0.000134        P      0.012597        O        0.06339
                     2      0.000268        B      0.015277        T        0.06969
                     ?      0.000268        Y      0.016617        A        0.07464
                     J      0.000402       W       0.016752        E        0.08751
                     X      0.000402        M      0.017824      Space      0.17904


                 Now the two lowest probabilities are summed together and the letters
                 are lumped together.
                 A new table is made in the same way, with the lumped pair treated as a
                 single entry, and again the two lowest probabilities are summed and
                 lumped together.
                 This process is repeated until there are just two entries left.
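This repeated "merge the two least likely" process is exactly how a Huffman code tree is built. A sketch in Python on a toy four-letter alphabet (the letters and probabilities here are illustrative, not taken from the table above):

```python
import heapq
from itertools import count

def huffman_codes(probs):
    """Build Huffman codes by repeatedly merging the two least
    probable entries, as described in the text."""
    tiebreak = count()  # keeps heap entries comparable when probabilities tie
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)
        p2, _, codes2 = heapq.heappop(heap)
        # The two merged groups get a leading "0" / "1" respectively.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

# Toy alphabet: the likeliest symbol ends up with the shortest code.
codes = huffman_codes({"E": 0.5, "T": 0.25, "Q": 0.125, "Z": 0.125})
```

The most probable letter gets a 1-bit code and the rare letters get 3-bit codes, which is the whole point of the exercise.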







                   Variable length codes are used predominantly in digital compression
                   schemes, and are among the most effective methods of reducing the
                   amount of data.
                   Entropy codes is another term used for variable length codes. Huffman
                   codes are a particularly efficient type of variable length, or entropy,
                   code.


The principle behind variable length codes
                   The idea behind variable length codes is to know which numbers are
                   more likely to occur than others in your digital data, and replace these
                   with a special code that is smaller than the original number.
                   Numbers that are likely to occur a little less often are changed for slightly
                   larger codes. Numbers that are very unlikely to occur are replaced with
                   codes larger than the original number.
                   The hope is that the likely numbers really will occur more often than the
                   unlikely ones, so that the coded data contains a lot more small codes
                   than big codes.


The results of discrete cosine transforms
                   DCT (discrete cosine transform) replaces original video pixel data with
                   numbers corresponding to how the data is changing. These numbers are
                   called coefficients.
                   DCT operates on a matrix of pixels. MPEG uses a matrix of 8 by 8
                   pixels. Other matrix sizes are used, but 8 by 8 pixel matrices are the
                   most popular.
                   Conventionally the DC coefficient is placed in the top left corner of each
                   matrix, replacing the pixel data that originally occupied this position. This
                   DC coefficient has a flat statistical probability curve. This means that it is
                   just as likely to contain any number in its range.

        AC coefficient bell curves
                   The rest of the coefficients are AC coefficients. The AC coefficients all
                   have bell shaped statistical probability curves. That is to say that there is
                   a greater chance that the number is somewhere in the middle.
                   The high frequency coefficients have a sharper bell curve than the low
                   frequency coefficients. That is to say there is a stronger chance that the
                   high frequency coefficients will contain a number close to the middle
                   than there is for the low frequency AC coefficients.


Using bell curves for variable length coding
                   It is very useful that all the DCT AC coefficients have a bell curve
                   probability. It is possible to design variable length codes that take
                   advantage of this bell curve by having small codes for numbers at the
                   peak of the curve and large codes for numbers at the outer edges of the
                   curve.


       Numbering systems for the bell curves
                 The original video samples are all 10 bit values and have a range from 0
                 to 1023. The DCT AC coefficients use a signed numbering system and
                 therefore have a range from –512 to +511.
                 The peak of the bell curve is therefore centred about zero. In terms of
                 the original video the DCT AC coefficients are centred around the colour
                 grey.

       Using a simple variable length coding system
                 Imagine a simple variable length coding system based on the bell curves
                 for the DCT AC coefficients.
                     DCT AC coefficient                 Variable length code
                             -9                            11111111101
                             -8                             1111111101
                             -7                             111111101
                             -6                              11111101
                             -5                              1111101
                             -4                               111101
                             -3                                11101
                             -2                                1101
                             -1                                 101
                              0                                  0
                             +1                                 100
                             +2                                1100
                             +3                                11100
                             +4                               111100
                             +5                              1111100
                             +6                              11111100


                 As you can see the code for a zero is just one bit, a “0”. It is also easy to
                 see the pattern for negative and positive numbers. The number itself
                 indicates the number of “1”s with negative numbers ending in “01” and
                 positive numbers ending in “00”.
                 Let us consider a stream of 14 DCT AC coefficients thus :-
                 +1 –2 0 +4 –3 0 0 +6 –1 –3 +4 0 –1 +2
                 Converted into their simple variable length codes gives us :-
                 100110101111001110100111111001011110111110001011100
                 The original DCT AC coefficients were all 10 bit samples. Thus in this
                 stream there are 140 bits. The variable length codes for this stream have
                 51 bits.
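                  The coding rule above is simple enough to sketch in code. The following
                  Python fragment (function and variable names are my own illustration, not
                  from any standard) builds the 51 bit stream from the 14 coefficients:

```python
def encode(coeffs):
    """Encode DCT AC coefficients with the simple variable length
    code from the table: 0 -> "0", +n -> n ones followed by "00",
    -n -> n ones followed by "01"."""
    parts = []
    for c in coeffs:
        if c == 0:
            parts.append("0")
        elif c > 0:
            parts.append("1" * c + "00")
        else:
            parts.append("1" * (-c) + "01")
    return "".join(parts)

stream = [+1, -2, 0, +4, -3, 0, 0, +6, -1, -3, +4, 0, -1, +2]
bits = encode(stream)
print(bits)       # the 51 bit variable length stream
print(len(bits))  # 51, against 14 x 10 = 140 bits for the raw samples
```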



249                                                 Sony Broadcast & Professional Europe
Part 21 – Video compression

                   It is important to realise that the variable length codes are exactly the
                   same data as the original coefficients; they are just described in a more
                   efficient way.


Decoding variable length codes
                   If it is a little difficult to believe that variable length codes can be such an
                   efficient bit saver, decoding the original data from the apparently random
                   stream of variable length codes will seem a miracle.
                   However decoding variable length codes, while fiendishly clever, is
                   actually very simple.
                   Starting from the beginning you look to see if there is a code
                   corresponding to the first bit. If there is not then look to see if there is a
                   code for the first and second bit together. If not try the first, second and
                   third bits together.
                   In this case the first bit is a “1”, which is not a code. The first and second
                   bits “10” are also not a code. However the first, second and third bits give
                   a good code, “100”. The original coefficient for this code is “+1”.
                   Now discard these bits and start again, looking for the first time you see
                   a good code. The next good code is “1101”; nothing shorter than this is
                   a good code. “1101” is the code for “-2”.
                   Cut this off and start again. The next bit is a “0”. This is a good code in
                   itself and gives a “0” as the coefficient data.
                   This same method can be used all the way through the variable code
                   stream replacing the good codes as you find them with the original
                   coefficients.
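                   The decoding procedure described above can also be sketched in Python
                   (a sketch with my own function names, not a standard implementation):

```python
def decode(bits):
    """Decode the simple variable length codes: "0" on its own is the
    coefficient 0; otherwise the run of ones gives the magnitude, a
    "0" ends the run, and the final bit gives the sign ("1" = minus)."""
    coeffs = []
    i = 0
    while i < len(bits):
        if bits[i] == "0":            # the one-bit code for zero
            coeffs.append(0)
            i += 1
            continue
        n = 0
        while bits[i] == "1":         # count the run of ones
            n += 1
            i += 1
        i += 1                        # skip the "0" that ends the run
        coeffs.append(-n if bits[i] == "1" else n)
        i += 1                        # consume the sign bit
    return coeffs

print(decode("100110101111001110100111111001011110111110001011100"))
# [1, -2, 0, 4, -3, 0, 0, 6, -1, -3, 4, 0, -1, 2]
```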


Disadvantages of variable length codes
                   Variable length codes can suffer from errors in the same way as any
                   other signal, analogue or digital. They can also suffer when they are
                   read from the middle, rather than from the beginning.
                   However, where variable length codes really fail is when the original data
                   does not fall into the assumed statistical pattern.

        Errors in variable length codes
                   Let us imagine that there is an error in the variable length codes thus :-
                   100110101110001110100111111001011110111110001011100
                   The eighth “1” has been received as a “0”. Decoding this data starts OK
                   but goes wrong when we reach the mistake thus :-
                   +1 –2 0 +3 0 –3 0 0 +6 –1 –3 +4 0 –1 +2
                   Instead of “+4” we now have “+3” “0”. A mistake in the variable length
                   codes can ripple down the data giving rise to a few incorrect coefficients.
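                   The limited ripple can be demonstrated with a small decoder sketch (my
                   own illustration, not from the text); flipping the eighth “1” changes one
                   code boundary, but the stream soon falls back into step:

```python
def decode(bits):
    """Decode the simple variable length codes described in the text."""
    coeffs = []
    i = 0
    while i < len(bits):
        if bits[i] == "0":            # the one-bit code for zero
            coeffs.append(0)
            i += 1
            continue
        n = 0
        while bits[i] == "1":         # count the run of ones
            n += 1
            i += 1
        i += 1                        # skip the "0" that ends the run
        coeffs.append(-n if bits[i] == "1" else n)
        i += 1                        # consume the sign bit
    return coeffs

good = "100110101111001110100111111001011110111110001011100"
bad = good[:11] + "0" + good[12:]     # the eighth "1" received as a "0"
print(decode(bad))
# [1, -2, 0, 3, 0, -3, 0, 0, 6, -1, -3, 4, 0, -1, 2]
```

Only the “+4” is misread (as “+3” followed by “0”); everything after the damaged code decodes correctly.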





                  In practice the variable length codes are remarkably resilient; errors do
                  not ‘travel’ too far down the variable length stream and only a few
                  coefficients are affected. Error correction schemes can be used to
                  correct these minor mistakes and restore the original coefficients again.

       Decoding variable length codes from the middle
                 The second disadvantage of variable length codes is that you will get an
                 error if the variable length code stream is decoded from the middle.
                 To illustrate this assume that we start reading the original variable length
                 stream we made starting at the tenth bit.
                  Discard the first nine bits 100110101 and start reading here…
                 111001110100111111001011110111110001011100
                 The result of this is :-
                 +3 –3 0 0 +6 –1 –3 +4 0 –1 +2
                  So, although the first few coefficients are lost and the first decoded
                  coefficient is in error, the rest of the stream has been decoded correctly.
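                  This self-synchronising behaviour can be checked with a small decoder
                  sketch (my own illustration, not from the text):

```python
def decode(bits):
    """Decode the simple variable length codes described in the text."""
    coeffs = []
    i = 0
    while i < len(bits):
        if bits[i] == "0":            # the one-bit code for zero
            coeffs.append(0)
            i += 1
            continue
        n = 0
        while bits[i] == "1":         # count the run of ones
            n += 1
            i += 1
        i += 1                        # skip the "0" that ends the run
        coeffs.append(-n if bits[i] == "1" else n)
        i += 1                        # consume the sign bit
    return coeffs

stream51 = "100110101111001110100111111001011110111110001011100"
print(decode(stream51[9:]))           # start reading at the tenth bit
# [3, -3, 0, 0, 6, -1, -3, 4, 0, -1, 2]
```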

       Bad statistical patterns
                  Variable length codes can tolerate a few numbers that do not follow the
                  statistical assumptions.
                 Assume that there are a few large numbers, both negative and positive,
                 in the stream mentioned above, thus :-
                 +1 –2 0 -50 –3 0 0 +6 –10 –3 +4 0 –1 +2
                 One of the coefficients has been replaced by –50 and another by –10.
                 The new stream is now thus :-
                 100110101111111111111111111111111111111111111111111111111
                 1011110100111111001111111111011110111110001011100
                  While this appears to be a massive increase on the number of bits we
                  had before, it is still only 106 bits, and still a saving on the 140 bits we
                  had originally.
                 However, what if the original stream were replaced by something like
                 this :-
                 +23 –12 +56 +1 +8 –3 -112 –31 +90 –121 +53 +27 +78 –94
                 This is still 14 coefficients. Each one is 10 bits, giving 140 bits
                 altogether. However the variable length stream corresponding to these
                 coefficients is :-
                 111111111111111111111110011111111111101111111111111111111
                 111111111111111111111111111111111111110010011111111001110
                 111111111111111111111111111111111111111111111111111111111
                 111111111111111111111111111111111111111111111111111111110
                 111111111111111111111111111111110111111111111111111111111
                 111111111111111111111111111111111111111111111111111111111
                 111111111100111111111111111111111111111111111111111111111
                 111111111111111111111111111111111111111111111111111111111
                 111111111111111111101111111111111111111111111111111111111


                   111111111111111110011111111111111111111111111100111111111
                   111111111111111111111111111111111111111111111111111111111
                   111111111111001111111111111111111111111111111111111111111
                   11111111111111111111111111111111111111111111111111100
                    This is 737 bits! A huge increase on the original 140 bits. Thus it is very
                    important that the variable length codes are matched as closely as
                    possible to the statistical pattern of the original data. This is the reason
                    why variable length codes cannot be used on the DCT DC coefficient,
                    because there is no guarantee that it will be close to zero at all.
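                    The bit counts quoted in this section follow directly from the code
                    lengths: the code for 0 is one bit, and the code for ±n is |n| + 2 bits. A
                    quick check in Python (helper names are my own):

```python
def code_length(c):
    # one bit for zero; |c| ones plus a two-bit tail otherwise
    return 1 if c == 0 else abs(c) + 2

def stream_bits(coeffs):
    return sum(code_length(c) for c in coeffs)

well_behaved = [+1, -2, 0, +4, -3, 0, 0, +6, -1, -3, +4, 0, -1, +2]
few_outliers = [+1, -2, 0, -50, -3, 0, 0, +6, -10, -3, +4, 0, -1, +2]
bad_pattern = [+23, -12, +56, +1, +8, -3, -112, -31, +90, -121,
               +53, +27, +78, -94]

print(stream_bits(well_behaved))   # 51   (raw: 14 x 10 = 140)
print(stream_bits(few_outliers))   # 106
print(stream_bits(bad_pattern))    # 737
```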






Part 22                                           The television station
The studio
                  The studio is the area where programmes are made. It may range from
                  not much more than a small room to a large enclosure big enough
                  to fit a small group of houses.
                 Studios have sets and lighting. A set is a construction that provides a
                 background, or foreground, to the action taking place in the studio.
                 Lighting can be hung from the ceiling or placed on the floor. It not only
                 provides light to the set, but also adds mood and colour. Both set design
                 and lighting design require experience and skill to perfect.
                  Studios also contain video cameras. There are often fewer cameras in
                  news studios than in those for drama, pop music shows, etc. However,
                  cameras in news studios can be very complex computer controlled
                  cameras, able to move between a few very specific preset positions.
                 Studios can be used for news programmes, entertainment programmes
                 like pop music shows, chat shows and games shows, and for drama.
                  Drama, which includes everything from soaps to high profile period
                  drama productions, probably involves the most complex set and lighting
                  design in any studio. Indeed a soap will involve continuous use of a very
                  complex set design that operates like a well oiled machine.

       The gallery
                  Studios often have an associated gallery. A gallery is a control room
                  generally placed next to the studio. It is also often set at a higher level
                  than the studio, so that gallery staff can look down on the studio.
                 Studios with galleries are generally used for live, or near live,
                 programme making. The gallery crew can direct the programme making
                 process to either create a programme that goes directly out to air, or a
                 programme recording that requires very little subsequent editing.


The post production studio
                 The post production studio is not really a studio at all. In fact it is more
                 like a studio gallery. Post production studios do not have sets or lighting.
                 They are designed specifically to take the recorded material from a
                 studio and edit it into a final programme (or series of programmes).
                  Post production studios are not as hectic and busy as galleries. The
                  work done in post production is not done against the clock, and lacks the
                  time critical nature of the work done in the live studio and gallery.
                 Post production may take many days or weeks for a single programme.
                 Although the equipment used in post production is similar to that used in
                 the gallery, there are a few obvious differences. For instance, special
                 effects machines in the gallery tend to be expensive pieces of hardware.
                 They can produce a few effects at high speed. In post production the



253                                                  Sony Broadcast & Professional Europe
Part 22 – The television station

                   special effects machine will produce some very complex effects, and not
                   necessarily in real time.


The edit suite
                    The edit suite is really another name for the post production studio,
                    although there is probably the expectation that the post production
                    studio is more complex than the simple edit suite.

        The linear edit suite
                    The linear edit suite generally uses tape. All edits involve playing back
                    one or more tapes and recording to another tape. It is called linear
                    because edits have to be performed in a linear fashion, as they are
                    recorded to tape.

        The non-linear edit suite
                   Non-linear edit suites are a recent invention compared to the linear edit
                   suite. Non-linear edit suites used to have a reputation for lower quality
                   results compared to the linear edit suite. They generally use computer
                   technology to allow ‘drag & drop’ timeline style editing.
                   Non-linear edit suites offer a very much more flexible way of working
                   than linear edit suites.
                    Non-linear editing has slowly increased in quality as the power of
                    computers has increased. It is now possible to perform non-linear edits
                    with the same high quality as conventional linear edits.


The news studio
                   The news studio is not really a single room, although there will be a
                   studio in the basic sense. A news studio will contain maybe three rooms.
                   There will be a studio proper, where the news programme will be made.
                   There will be an associated gallery as well.
                   The third room will be the news room itself. An integral part of the studio
                   complex, the news room is used to bring all the news items together,
                   with their associated script, video or film footage, and audio material.
                    The news room is a little like the opposite of a post production suite: the
                    work done in it comes before the programme. There is some editing
                    equipment, although it is not as complex as that found in post production.
                    News editing is always simpler, and often done to very short and rigid
                    deadlines.


The outside broadcast vehicle
                    The outside broadcast vehicle, often called the outside broadcast truck,
                    or OB truck, is a small self contained transportable production facility. It
                    contains all the equipment to shoot, record, control, and edit a
                    programme and, when finished, transmit the result back to the television
                    station.




                  OB trucks are used for recording sports events, national celebration
                  events, important news events, etc., or any situation requiring production
                  facilities where there are none normally.
                  OB trucks are sometimes huge, with separate rooms for video editing
                  and production, audio and camera control. They can just as easily be
                  small, with one room inside for everything.
                  OB trucks are often measured by the number of cameras they have. A 1
                  camera OB truck is small; an 8 camera OB truck is large.




255                                                Sony Broadcast & Professional Europe
Part 23– CCTV, security & surveillance


Part 23                           CCTV, security & surveillance
What is CCTV?
                    CCTV stands for closed circuit television. It encompasses any television
                    system that is not connected to any kind of transmitter, and is generally
                    a single connection between the source and destination. There would
                    normally be just one or two television screens at the destination.
                   CCTV also includes the use of microwave or infra-red links, video over
                   IP, and other such technologies. With these technologies the signal is
                   directed to a specific destination, rather than being broadcast to anyone.
                   As such, it is still closed and can therefore be regarded as CCTV.
                   CCTV does not include terrestrial broadcast television, satellite or cable
                   network television. It does not even include subscriber television. Such
                   services are only available to the customers who buy the subscription
                   and who have the relevant technology to receive the signal. Although
                   they could be regarded as ‘closed’ they do not fall into the description
                   ‘CCTV’ because the number of receivers is relatively high.


CCTV privacy & evidence
                    CCTV has gained a reputation in many people’s minds as the instrument
                    of “big brother”, and a method for the authorities to spy on innocent
                    individuals. Sensible use of CCTV should never be an infringement of
                    privacy. In many cases CCTV is used for situations where personal
                    privacy is not an issue. Instrument monitoring, remote monitoring in
                    hazardous conditions, search and rescue, and many other applications
                    for CCTV do not involve personal privacy at all.

        Data Protection Act 1998
                   Laws around the world are designed to protect people from incorrect use
                   of CCTV technology. In Great Britain the Data Protection Act 1998
                   includes 62 legally enforceable points to ensure correct use of CCTV
                   technology, and 30 suggested good practice points to improve public
                   perception of the technology. Details of the Data Protection Act 1998
                   can be found at http://www.dataprotection.gov.uk .

        Continuity of evidence
                   CCTV equipment can be invaluable at collecting evidence as part of
                   legal proceedings. Recordings and images can all help build up a
                   convincing case. However evidence is useless if it has been tampered
                   with.
                   It is important that any images, video or sound material are not placed in
                   a position where they can easily be altered, or deleted, between the
                   point where they were recorded and where they are presented in court.
                   There may also be a necessity to guarantee that material is not
                   tampered with after it is presented in court, right up to the time it is
                   destroyed.




                 The process of ensuring that CCTV material is not altered between
                 recording, through its court appearance and its eventual destruction is
                 called continuity of evidence.
                 It is impossible to absolutely guarantee continuity of evidence. All that
                 can be done is to reduce the likelihood of tampering to such a level that
                 it is improbable.
                  Two methodologies can be used to ensure continuity of evidence:
                  trusted personnel and technology. Using both methods can provide very
                  convincing continuity of evidence.

            Trusted personnel
                 Recorded evidence can be placed in the hands of trusted personnel.
                 These people are trusted to prevent the material from being tampered
                 with either because it is their job or because their reputation depends on
                 it.
                  Trusted personnel include security guards, bonded store keepers etc.,
                  as well as notable professionals such as judges, doctors, police etc. All
                  trusted personnel may be corrupted, but it is unlikely. It is this
                  unlikelihood that provides continuity of evidence.

            Technology
                 Recorded CCTV material can be protected at every point from the
                 camera through the court room and eventually to destruction by using
                 technology.
                  Simple technology includes lock & key, safes, security doors etc. These
                  are the kind of technologies that security guards and bonded store
                  keepers would use to back up their trusted personnel status.
                 CCTV material can also employ signal scrambling techniques and
                 electronic watermarking to make tampering difficult.
                 Making multiple copies of recordings, at separate remote sites can
                 improve security, and prove tampering.


CCTV use
                  CCTV’s primary use is in security & surveillance, where it has gained a
                  ‘big brother’ reputation, with all the associated concerns about civil
                  liberties and privacy.
                  However CCTV is also important in increasing levels of safety in public
                  areas, offering better levels of monitoring and control for inspection,
                  machine operation, medical applications, and for work in remote or
                  hazardous environments.

       Examples of CCTV usage

            Town centre surveillance
                 Local councils and police are using CCTV increasingly to monitor city
                 and town centres. Cameras are mounted on buildings or posts at
                 strategic points. Many are fitted inside motorised environmental



                   housings. The signals from these cameras are fed back to a central
                   control office, where staff can monitor activity in shopping areas, public
                    amenities, car parks, etc. Although there are concerns about civil
                    liberties, these systems have been very successful in reducing
                    robberies, vandalism and mugging in town centres.

              On-the-spot views for sports events
                   CCTV, and especially the use of miniature camera technology, is being
                   used increasingly to allow people to experience the thrill of sports events
                   by mounting cameras to racing cars, motorcycles and jockeys. Signals
                   can be fed back to the production studio where they can be fed into the
                   broadcast chain.
                   As well as allowing people at home to see what the racing driver or rider
                   is seeing while racing down the track, it also provides valuable
                   information to pit crews and officials.

             Train crew assistance systems
                   Train platforms can be very long and often curved. It is often difficult for
                   drivers and guards to see the whole length of a platform as the train is
                   pulling out. CCTV is often used as a means of checking that doors are
                   closed and passengers are clear of the edge of the platform before
                   starting the train.
                    Cameras are mounted to the wall or ceiling at strategic points along the
                    length of the platform, and the signals are fed to two or three small
                    monitors mounted just outside the train window, so that the driver or
                    guard can easily see them without having to turn or stretch.

             Biometric identification
                    At the cutting edge of CCTV is personal identification using biometrics.
                    Whilst not generally regarded as CCTV by many people, biometrics uses
                    the same basic systems as all other CCTV systems. Entry systems to
                   high security areas can involve specialist cameras linked to digitizers
                   and computers. These can be used to perform facial scans, fingerprint
                   scans or retinal scans to help identify people.

             Search and rescue
                    CCTV is used extensively when searching for survivors in fires,
                    collapsed buildings, caves and pot-holes. Miniature cameras can be
                    pushed into places humans cannot get into. Small cameras can be fitted
                    to robot crawlers and sent into environments that are too dangerous for
                    humans.
                    Another important area of search and rescue takes advantage of the
                    fact that cameras can see parts of the electromagnetic spectrum
                    humans cannot see, i.e. infra-red. Helicopters can search for people or
                    animals in open country, or at sea, from their heat signature. Special
                    cameras that are sensitive to heat can make people or animals shine out
                    like beacons, even at night.




            Medical procedure monitoring
                  CCTV is now being used to monitor the progress of medical operations
                  and procedures. From endoscopes to remote control microsurgical
                  equipment, CCTV equipment is becoming increasingly important.
                  By linking CCTV equipment to shape recognition and 3D modelling
                  software, systems can be built to assist medical teams in diagnosis.
                  Another important area of development is in remote consultation. By
                  using CCTV with video over IP, or streaming technology, it is possible
                  for top surgical consultants to assist with difficult surgical procedures
                  anywhere in the world.

            Microchip production and inspection
                  Many modern production processes operate at dimensions far too small
                  for humans to see with the naked eye. As dimensions become smaller
                  and smaller, it is also impossible to see things using normal light. The
                  wavelength of light itself becomes a problem and other forms of radiation
                  must be used. CCTV systems sensitive to these wavelengths are used to
                  allow people to see these tiny dimensions.
                 Probably the most important of these is microchip production. CCTV is
                 used extensively in the production process, by coupling it to microscope
                 technology using ultra-violet and X-ray radiation.


CCTV terminology
                  CCTV technology has parallels with professional and broadcast video.
                  However some of the terms and jargon used in the CCTV industry are
                  entirely different to those used in broadcast video.

            Activity detection
                  The ability of a system to react to movement. A processing unit will
                  compare video frames to check for differences, i.e. movement. This can
                  be sent out as a signal to the system’s controller to switch to that camera.
                  Multiplexers can be adjusted to devote more, or all, time to the camera
                  that has sensed movement. It is also possible to send zoom, pan and tilt
                  control signals to the camera to make it zoom in closer on the detected
                  movement.
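                  As a rough illustration, frame comparison can be reduced to counting
                  pixels whose value changes by more than a threshold. The sketch below
                  uses made-up names and thresholds and plain Python lists for greyscale
                  frames; real activity detectors are considerably more sophisticated:

```python
def motion_detected(prev, curr, pixel_threshold=10, min_changed=5):
    """Compare two greyscale frames (lists of rows of 0-255 values)
    and report movement when enough pixels change significantly."""
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > pixel_threshold
    )
    return changed >= min_changed

still = [[100] * 8 for _ in range(8)]
moved = [row[:] for row in still]
for y in range(2, 5):                 # a small bright object enters
    for x in range(2, 5):
        moved[y][x] = 220

print(motion_detected(still, still))  # False
print(motion_detected(still, moved))  # True
```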

            Alphanumeric video generator (AVG)
                  The CCTV equivalent of a character generator. Quality is generally not
                  as good as a broadcast character generator, but the requirements for
                  character insertion in CCTV are not as stringent, and cost saving is an
                  issue.

            CCD iris
                 A term used in CCTV to describe auto iris.





             C mount
                    Cine mount. The original lens mounting system for CCTV cameras. 1”
                    (25.4mm) diameter with 32 threads/inch. Back flange to CCD distance
                    standardised at 0.69” (17.526mm).

             Conditional refresh
                    A system used to save transmission bandwidth by transmitting video
                    frames only when a change is detected.

             CS mount
                     Cine short mount. A more contemporary lens mount to the C mount
                     standard, offering cheaper, smaller lenses. CS mount has exactly the
                     same dimensions as C mount, but the back flange to CCD distance is
                     reduced to 12.5mm.

              Back light compensation (BLC)
                     A camera feature that automatically compensates for strong background
                     lighting to improve detail in darker areas of the image that would
                     otherwise appear as just a black shape.

              Dense wavelength division multiplexing (DWDM)
                    A technology that places a large number of video channels onto a fibre
                    optic cable.

             Dome camera
                     Any CCTV camera installed in a dome. Modern dome cameras are pre-
                     built as a complete assembly, often with pan, tilt and zoom capability,
                     and are sometimes referred to as PTZ cameras. Some dome cameras
                     have a network output rather than a conventional video output, so that
                     the video signal can be sent out on a computer network as a
                     compressed data stream.

             Duplex
                    Used in CCTV to describe a multiplexer that can perform more than one
                    function at one time, like displaying and recording multiple images.

             Dwell time
                    The time a multiplexer stays on one camera in its rotation.

             Hi-Z
                     A common term in the CCTV industry to denote an unterminated
                     analogue video cable connection. The line impedance of video cable is
                     75 ohms. Many coaxial analogue video connections have a switch that
                     can be set between Hi-Z and Lo-Z (75 ohms).
                    Equipment can be daisy chained on the same video connection. All
                    equipment except for the first and last are set to Hi-Z (high impedance –


                 terminator switched off) The first and last pieces of equipment are set to
                 75ohms impedance, either by switching the equipment’s terminator
                 switch to 75ohms or by fitting a BNC terminator to the cable end.
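                    The termination rule can be sketched as a small check (illustrative only;
                    the device settings are represented as hypothetical strings, and the
                    convention assumed is that only the far end of the line is terminated):

```python
def check_termination(chain):
    """chain: impedance settings of the daisy-chained devices, in cable
    order, each either "hi-z" or "75" (ohms). On a 75 ohm video line
    only the last device should terminate; the rest loop through."""
    *loop_through, last = chain
    problems = []
    if any(setting != "hi-z" for setting in loop_through):
        problems.append("mid-chain termination: signal will be attenuated")
    if last != "75":
        problems.append("unterminated line end: expect reflections")
    return problems

# A correctly set up three-device chain reports no problems:
print(check_termination(["hi-z", "hi-z", "75"]))  # []
```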

            Kangaroo lens
                 A lens with two fixed iris positions – fully open and partially closed.
                 Designed to be used with cameras that have electronic (sensor) auto-iris
                 capability as a way of providing 2 ‘gears’ for the auto-iris.

             Lambert radiator
                    A primary source of light that is a perfect diffuser, appearing equally
                    bright from every viewing direction.

            Lambert reflector
                 Like a Lambert radiator but for a secondary (reflected) light source.

            Minimum object distance (MOD)
                 The closest distance a particular CCTV lens can focus to, measured
                 from the front of the lens to the object.

            Multiplexer
                 A unit that multiplexes a number of video signals.
                    A multiplexer can be designed to show more than one video image on
                    one screen by down-converting the incoming video images and placing
                    these smaller images into one outgoing signal. This can be referred to as
                    spatial multiplexing. Each image has poorer resolution than the original
                    but runs in real time. Probably the most popular of these is the
                    quad.
                    A time division multiplexer divides each video frame, or series of frames,
                    between a number of inputs. The output effectively chops between the
                    inputs on a rotational basis. Each video input keeps its full resolution,
                    but its motion is not as smooth. The output of a time division multiplexer
                    is not easy to view because the picture flicks from one camera to
                    another. Time division multiplexers are generally used as a way of
                    recording multiple video signals to one tape. Timecode is also recorded,
                    and is used by the multiplexer during playback to demultiplex the
                    recorded signal.
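                    The rotation described above can be sketched in a few lines (a
                    simplified model; the camera names and frame counts are hypothetical):

```python
from itertools import cycle

def time_division_mux(inputs, frames_per_input=1, total_frames=8):
    """Round-robin output: each input contributes frames_per_input
    consecutive frames before the multiplexer moves to the next."""
    output = []
    rotation = cycle(inputs)
    while len(output) < total_frames:
        camera = next(rotation)
        for _ in range(frames_per_input):
            if len(output) == total_frames:
                break
            output.append(camera)
    return output

# Four cameras, one frame each per rotation:
print(time_division_mux(["cam1", "cam2", "cam3", "cam4"]))
```

                    With frames_per_input greater than one, each camera holds the output
                    for several consecutive frames, which is the dwell behaviour described
                    under "Dwell time" above.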

            Pan & tilt head (P/T head)
                 A motorised camera mount that allows the camera to be panned (move
                 round) and tilted (moved up and down) remotely. Pan & tilt heads are
                 often combined with a zoom camera to give a pan tilt zoom assembly
                 commonly called a PTZ camera. Many dome cameras are also PTZ
                 cameras




261                                                 Sony Broadcast & Professional Europe
Part 23 – CCTV, security & surveillance

             Pre-position lens
                    A CCTV lens which outputs signals for its zoom and focus positions, so
                    that they can be stored in the control station, allowing pre-set positions
                    to be called up quickly by the controller.

             PTZ camera
                   See Pan & tilt head.

             Quad
                    A unit that spatially multiplexes four video signals into one signal, to
                    show all four images on one monitor. The resolution of each image is a
                    quarter of the original, but each runs in real time.
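                    The down-convert-and-tile operation can be sketched as follows (a
                    sketch only; frames are modelled as nested lists of pixel values, and the
                    decimation is cruder than real hardware, which filters before
                    subsampling):

```python
def downconvert(frame, factor=2):
    """Reduce resolution by taking every factor-th pixel (simple decimation)."""
    return [row[::factor] for row in frame[::factor]]

def quad(frames):
    """Tile four quarter-size images into one full-size frame:
    inputs map to top-left, top-right, bottom-left, bottom-right."""
    a, b, c, d = (downconvert(f) for f in frames)
    top = [ra + rb for ra, rb in zip(a, b)]
    bottom = [rc + rd for rc, rd in zip(c, d)]
    return top + bottom

# Four 4x4 'frames', each filled with its camera number,
# become one 4x4 frame showing all four cameras at once:
cameras = [[[n] * 4 for _ in range(4)] for n in (1, 2, 3, 4)]
split_view = quad(cameras)
```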

             Repeater
                   A unit that can be placed part way along a very long transmission path to
                   amplify the signal back to full amplitude again. Cable repeaters can be
                   used to re-amplify video and audio signals. Microwave repeaters do the
                   same thing for video and audio signals on microwave links.
                    Analogue repeaters also amplify noise, so there is a limit to the number
                    of analogue repeaters that can be used before the noise level becomes
                    so great that it swamps the original video or audio signal.
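                    The degradation can be put in numbers with a simplified model
                    (assuming each hop contributes equal, uncorrelated noise; the figures
                    used are illustrative):

```python
import math

def cascaded_snr_db(single_hop_snr_db, hops):
    """Each analogue repeater restores amplitude but adds its own noise.
    With equal, uncorrelated noise per hop, noise powers add, so the
    signal-to-noise ratio falls by 10*log10(hops) versus a single hop."""
    return single_hop_snr_db - 10 * math.log10(hops)

# A link giving 50 dB over one hop degrades to 40 dB over ten hops:
print(cascaded_snr_db(50, 10))  # 40.0
```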

             Retained image
                   A term used in CCTV to describe an image that remains on the camera
                    sensor after the object has gone. Retained image is a temporary artifact
                    caused by a delay in the camera sensor, but the term is sometimes also
                    used to describe a burnt-in image, which can occur because a CCTV
                    camera looks at the same scene all the time.

             Vari-focus lens
                    A manual zoom lens. Other industries regard all lenses with a variable
                    focal length as zoom lenses, but the CCTV industry needs to
                    differentiate manual zoom lenses from motorised ones, because many
                    CCTV cameras are operated remotely. The focal length of a vari-focus
                    lens can be altered during installation, but must then be considered fixed
                    from the point of view of the operator in the control room. (See zoom lens.)

             Z


             Zoom lens
                   A lens with a remotely controlled motorised variable focal length. The
                   CCTV industry differentiates between manual and motorised zoom
                   lenses. The manual variety are called vari-focus lenses. (See vari-focus
                   lens.)





The typical CCTV chain
                 A typical CCTV chain consists of a number of devices communicating
                 with one another. Video and audio signals go in one direction, from the
                 camera or microphone to the monitor or speaker. Control signals go in
                 the opposite direction, from the control point to the camera or
                 microphone.

            The camera
                 All CCTV systems start with the camera. This is the main input to the
                 whole system. There may be just one camera, or many. Cameras may
                 be monochrome or colour, and can vary in size and quality. Some
                 CCTV cameras are fitted into environmental housings to protect them
                 from weather or hazardous conditions. Some are hidden behind
                 screens or domes to make them more discreet. Many cameras
                 have controls for zoom and iris, as well as motors for pan and tilt.

            The microphone
                 Many CCTV systems are video-only systems. Some have associated
                 audio. In many cases microphones are fitted to the cameras themselves,
                 but increasingly they are fitted separately from the camera. This
                 allows each microphone to be strategically sited to pick up the best sound.

            The transmission chain
                 The transmission chain relays video and sound back to the control or
                 processing station. In most CCTV systems the transmission system
                 consists of simple video cables routed directly from each camera to the
                 control station, and audio cables routed directly from the microphones to
                 the control station.
                 The transmission chain may also include switch gear, routers, signal
                 compressors and decompressors, IP packetisers, and microwave or
                 infra-red links.

            The control station
                 The control station consists of a control panel used either to switch
                 between incoming video and audio signals directly, or to send control
                 signals to a remote switcher or router that performs the switching
                 elsewhere.
                 Video signals are switched between the cameras and recorders and the
                 monitors, and audio signals between the microphones and recorders
                 and the speakers and headphones.
                 The control station can also send signals to cameras to perform zoom,
                 pan and tilt movements, as well as iris control and filters, depending on
                 the camera’s capability. Outside cameras may have wipers to clear
                 water and dust from the camera housing window and heaters inside the
                 housing to heat the camera if the temperature falls below freezing.
                 These could be automatic or controlled manually from the control
                 station.




             The processing station
                    In some CCTV systems the control station is replaced by a processing
                    station. This is popular in automated CCTV systems for applications like
                    automatic recognition. The images from the cameras are not
                    viewed by an operator but are processed by computer and either placed
                    in a database or compared to one.

             The speakers & headphones
                    Audio signals are routed from the control station’s panel into either
                    speakers or headphones, so that the operator can hear the sound. In
                    most cases this will be a mono, rather than a stereo, feed.

             The video recorder
                    Many CCTV systems have recording capability. This allows the
                    controller to review past events, and acts as a good source of evidence.
                    Video recorders can be simple VHS machines or more expensive digital
                    machines like those based on the DV tape format.
                    If the CCTV system has audio capability, this is recorded on the same
                    tape as the video signal.
                    In many cases the video recorder records one camera feed; in some
                    cases it is able to record more than one camera feed on the same tape.

             The monitor
                    The monitor is the final destination for the video signals, which can be
                    fed from any of the cameras or from one of the video recorders.




                                       Figure 111   Typical CCTV system
                 [Block diagram: cameras & microphones (motorised cameras in
                 housings, a camera & microphone pair, and a mini camera) feed a
                 routing matrix and switcher, either directly by cable, over a microwave
                 link, or via a compressor/packetiser to decompressor/depacketiser path
                 across the Internet; the control station routes the signals to video
                 recorders and to monitors & speakers.]



CCTV cameras
                   CCTV cameras are essentially the same as those used for broadcast,
                   although there tends to be less emphasis on quality.

        General purpose cameras
                   Most CCTV cameras are designed for general indoor and outdoor use.
                   They vary in size from about 10x3x3cm to 30x10x10cm. General
                   purpose cameras normally sell without a lens, and one needs to be
                   bought and fitted. Both the older C and newer CS lens mounts are
                   popular.
                   General purpose cameras normally have an analogue composite output.
                   Some have an analogue component output, and a few have digital video
                   outputs.
                   General purpose cameras can be fitted into weather housings. The
                   housing can be fitted with a heater, to ensure the camera still operates in
                   cold weather, and a wiper, to keep the housing window clear of rain
                   drops and dust.
                   General purpose cameras can also be fitted to pan and tilt mechanisms.
                   This allows them to be repositioned remotely.
                   Control for these cameras is through a separate RS-232, RS-422 or
                   RS-485 connection with proprietary control protocols.

        Net cameras and web cameras
                   This is a relatively new type of camera. It incorporates all the
                   functionality of a standard camera but with a single network connection.
                   This connection can be used for both the video signal from the camera
                   and the control signals to the camera. Software is installed on a
                   computer which allows
                   the camera to be
                   controlled.
                   Network connected
                   cameras are easy to
                   install. Images can be
                   sent to any location on


                 a company network, or out over the Internet. Control can, likewise, be
                 sent from anywhere on the network, or the Internet, with the appropriate
                 software.
                 Net cameras use some form of image compression to reduce the
                 amount of data sent on the network link, while keeping image quality as
                 high as possible. JPEG is a common compression format. Some use the
                 more complex MPEG compression to improve the compression/quality
                 ratio.
                 Some net and web cameras have pan & tilt as well as zoom capability
                 built in. Small motors in the camera assembly allow for this kind of
                 control, which is achieved by sending control signals over the network
                 link from a controlling computer.

       Dome cameras
                 This type of CCTV camera is becoming more popular. They are not
                 discreet, as most people recognise these domes for what they are.
                 Older dome cameras were really housings for a number of cameras,
                 with each one pointing in a different direction. Modern dome cameras
                 now have pan and tilt built in. Many also have zoom control as well.
                 These are commonly called PTZ cameras.
                 Domes are generally made from some kind of high impact plastic, to
                 give some protection from vandalism. They are generally tinted. This
                 hides the actual orientation of the camera inside, but reduces the
                 sensitivity of the camera.
                 Many dome cameras have standard analogue composite video outputs.
                 The control for pan and tilt for these cameras is through proprietary
                 RS-232, RS-422 or RS-485 connection, just like their general purpose
                 camera equivalent.
                 An increasing number of dome cameras have network connections.
                 These cameras offer the same advantages as network and web
                 cameras, but in a protective dome.

       Night vision cameras and dual condition cameras
                 Night vision cameras are designed to operate after dark. Two methods are




                      used. The first is through intensification, and the second through the use
                      of infra-red.
                      Intensification cameras cannot work in complete darkness. They use
                      techniques to intensify the sensor signal, allowing them to pick up
                      objects with just the slightest amount of light. Images tend to be lower
                      quality than normal because the noise is intensified as well.
                      Infra-red cameras have sensitivity extended beyond the visible
                      spectrum into the infra-red. They can pick up heat, and build
                      up an image from a combination of what little light there is and the
                      temperature of the objects in the scene. Infra-red night vision CCTV
                      cameras sometimes have infra-red lamps mounted in the same
                      construction as the camera itself. This provides a good picture through
                      the camera, while remaining completely dark to the human eye.
                      Night vision cameras are all monochrome because they are looking for
                      basic form and shape with whatever light or heat is available; their
                      colourimetry is meaningless.
                      Dual condition cameras are able to switch between normal daylight
                      colour operation and monochrome low light operation. This can either be
                      achieved by switching on and off the sensitivity to infra-red, or switching
                      on and off intensification.



              Wireless CCTV cameras
                      [Diagram: a camera with aerial and power feed transmits to a receiver
                      with its own aerial, power feed and video output.]
                      This kind of camera has no cable connection, other than power. It has
                      an in-built wireless transmitter operating in the GHz range of
                      frequencies, and can transmit short distances to a receiving station.
                      Wireless CCTV cameras are easy to install and reposition, and are
                      ideal for temporary installations, and for installations where cameras
                      may need to be moved on occasion.
                      However, wireless cameras can be easy to tap into. All that is needed
                      is the same type of receiver tuned to the same frequency.




       Pin-hole & bullet cameras
                 ‘Pin-hole camera’ used to refer to a basic camera consisting of a box
                 with a pin hole in the front. The name has been ‘stolen’ and now also
                 refers to a class of sub-miniature camera with a very small lens at the
                 front.
                 These types of CCTV camera remain at the fringe of mainstream
                 CCTV, with all the attendant concerns of privacy, spying, and discreet
                 surveillance.
                 The largest of these is the bullet camera, sometimes called the lipstick
                 camera. The processing electronics is designed into a separate unit,
                 and the camera head is nothing more than the lens and the sensor.
                 This makes it as small as possible, like a bullet or lipstick tube (hence
                 the name). The separate unit is often called the camera control unit
                 (CCU), although, in truth, it contains as much of the electronics as can
                 be removed from the camera head itself.
                 These cameras offer a good compromise between reasonable quality
                 and a discreet camera.
                 The smallest cameras are all of the single-unit, single CCD or CMOS
                 sensor type. The lens, sensor and electronics are all integrated onto the
                 same small circuit board. There is no electronic control, and very little
                 iris or focus control; what there is, is always manual.
                 This type of CCTV camera is popular for fitting into clock faces, wall
                 mounted electrical sockets, light fittings, etc. Image quality tends to
                 suffer because of the restricted space for electronics and the small lens.
                 True pin-hole CCTV cameras, sometimes called SWAT cameras, have a
                 thin shaft, or flexible tube, protruding from the front of the camera head
                 with a very small lens mounted at the front. The shaft itself is a light
                 guide. These cameras provide the smallest identifiable intrusion into a
                 room space and are the most discreet of all CCTV cameras.
                 A standard camera can be fitted with a pin-hole lens. This lens is often
                 long and thin, and comes to a point. The whole camera can be fitted
                 behind a wall with the only visible part being the tiny front element of the
                 lens.
                 Pin-hole cameras tend to be wide angle and the image is often greatly
                 distorted.




        Biometrics cameras
                    Biometric cameras are specifically designed to scan specific human
                    physical features, including hand geometry, face, iris, & retina.
                    Biometrics can also be used for signature recognition.
                    Biometric cameras can have either conventional composite or
                    component video outputs, or direct computer connections. Either way,
                    these outputs are connected directly into a computer.
                   Video connections need to be fed into a plug in board with an
                   appropriate video input. The board performs the image capture. The
                   computer then analyses the image.
                   Computer connections include RS-232 and RS-422 connections, USB
                   and 10-BaseT connections. This type of connection is becoming more
                   popular because it is easier to fit. The image capture is performed by the
                   camera into an internal frame store. The computer output is a digital
                   download of the frame store.
                   Some biometric cameras can now perform some of the image analysis,
                   internally in dedicated hardware, searching the image for relevant
                   information, discarding the rest, and even breaking the relevant data
                   down into a digital code. This greatly reduces the computer’s workload,
                   and speeds up transmission by only sending relevant data to the
                   computer rather than the whole image.
                   Once in the computer the image, or code, is compared to a database of
                   known patterns to recognise the person.
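                    That final comparison step can be illustrated in a few lines (a sketch
                    only; the bit-codes, names and match threshold are hypothetical):

```python
def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def identify(code, database, threshold=2):
    """Return the enrolled identity whose stored code is closest to the
    captured code, or None if nothing is within the match threshold."""
    best = min(database, key=lambda name: hamming(code, database[name]))
    return best if hamming(code, database[best]) <= threshold else None

# Hypothetical enrolled codes and a freshly captured code:
enrolled = {"alice": "101100", "bob": "010011"}
print(identify("101101", enrolled))  # alice (one bit differs)
```

                    Real biometric codes are far longer, but the principle is the same: a
                    nearest-match search against the database, with a threshold to reject
                    unknown people.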

             Instrumentation cameras
                    This is a loosely defined class of CCTV camera. Instrumentation
                    cameras are similar to other CCTV camera types in many respects, and
                    cameras designed for other purposes can be used as instrumentation
                    cameras.
                    Instrumentation cameras are specifically designed to be fitted to
                    precision machines and instruments. They are intended for monitoring
                    inaccessible areas, or for viewing very low light areas or special lighting
                    conditions, as in microscopes.
                   Many instrumentation cameras use the same design techniques as
                   some pin-hole cameras. Some have small camera heads linked via light
                   guides. Many have separate camera controllers to reduce weight and
                   size on the instrument itself.
                   Instrumentation cameras are often able to operate in very low light
                   levels. However this feature should not be confused with night vision
                   cameras. Night vision cameras often use sensors sensitive to infra red,
                   and illuminate the scene with an infra-red lamp. Intensification night


                 vision cameras are very sensitive, but are also designed to be
                 reasonably robust: either they employ only mild amounts of
                 intensification, or they are built so that exposure to bright light will not
                 damage them.
                 However, instrumentation cameras designed for very low light levels are
                 designed only for low light levels, not for a different kind of light, like
                 infra-red. They are not at all robust, and can often be damaged by
                 exposure to normal light levels. They require expert setting up,
                 care and maintenance.
                 Intensified CCD (ICCD) cameras use an intensifier in front of a CCD
                 sensor. This boosts the amount of signal produced by the light before
                 the sensor reads it.


Reading CCTV camera specifications
                 All CCTV camera manufacturers produce specifications, and make the
                 figures and details look as appealing as possible. It is therefore worth
                 investigating exactly how some of these specifications are arrived at,
                 and the things one should bear in mind when reading them.

       Camera format
                 CCTV cameras are designed in a variety of formats depending on the
                 size of their sensor. All sensors have a 4:3 aspect ratio, in common
                 with standard domestic television.
                 It is a common misconception that the camera format is the same as the
                 distance from one corner of the CCD sensor to the opposite corner, i.e.
                 a ½” sensor is ½” across its diagonal. This is not so. Sensor diagonals
                 are about 0.6 times the format size. The reason for this goes back to the
                 days of tube cameras where the sensitive area of the old 1” tube was
                 only about 0.6 of the overall tube diameter.





Figure 112                                   CCTV format sensor sizes
                 [Diagram: the traditional 1" camera tube, with a sensor diagonal of
                 approximately 60% of the tube diameter, alongside CCD sensors for
                 the 1", 2/3", 1/2", 1/3" and 1/4" formats; all dimensions in mm unless
                 specified.]

                              The table below shows sensor dimensions (in mm) for various camera
                              formats and the ratio differences between the format and the sensors.

                                        1”      2/3”    ½”      1/3”    ¼”
  Sensor horizontal                     12.8    8.8     6.4     4.8     3.2
  Sensor vertical                       9.6     6.6     4.8     3.6     2.4
  Sensor diagonal                       15.9    11      8       6       4
  Sensor ratio                          1:1.6   1:1.53  1:1.59  1:1.41  1:1.59


                              The camera format has an effect on the kind of lens that can be fitted,
                              and how it will behave. This is covered in more detail in the section on
                              CCTV lenses.
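The geometry above can be sketched in a few lines. For a 4:3 sensor the diagonal forms a 3-4-5 triangle with the sides, so width and height follow directly from the diagonal, and the diagonal itself is roughly 0.6 times the nominal format size. The function names here are illustrative, not from any real library.

```python
def sensor_sides(diagonal_mm: float) -> tuple[float, float]:
    """4:3 aspect ratio: the diagonal is the hypotenuse of a 3-4-5
    triangle, so width = 0.8 x diagonal and height = 0.6 x diagonal."""
    return 0.8 * diagonal_mm, 0.6 * diagonal_mm

def approx_diagonal_mm(format_inches: float) -> float:
    """Rule of thumb from the text: the sensor diagonal is about 0.6 of
    the nominal format size (a legacy of the old 1" tube cameras)."""
    return 0.6 * format_inches * 25.4

# A 1/2" format sensor has an 8 mm diagonal, giving 6.4 x 4.8 mm
print(sensor_sides(8.0))
```

Note that the 0.6 rule is only approximate; as the table shows, the actual format-to-diagonal ratio varies between about 1:1.4 and 1:1.6 across formats.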

            Resolution
                              Resolution is a measure of the resolving power of the camera.
                 All CCTV cameras, colour or monochrome, are of the single sensor type.
                              The sensor pixels in colour CCTV cameras are divided between the
                              three primary colours. Thus, for the same sensor density, there is a
                              difference in resolution between monochrome and colour CCTV
                              cameras. Monochrome CCTV cameras will therefore tend to have a
                              higher resolution than colour CCTV cameras.



                 Still cameras often use the number of pixels in the sensor as a measure
                 of resolution. However this is not a good method of defining resolution in
                 video cameras. Sensor resolution will give a basic figure for the sensor
                 itself, not of the eventual output signal. In many cases only a proportion
                 of the pixels are actually used in the picture. If specifications mention
                 ‘active pixels’ or ‘effective pixels’ rather than simply ‘pixels’, this will give
                 greater assurance that all these pixels are part of the picture.
                 The camera’s circuitry also affects the resolution actually delivered.
                 Badly built circuitry will have a poor bandwidth that reduces the
                 resolution provided by the sensor by the time the signal reaches the
                 output. Having a good sensor and bad circuitry is a waste. CCTV camera
                 resolution figures should always be related to the final output signal.
                 Resolution figures are sometimes given as vertical resolution. This is the
                 number of active lines in the picture. All PAL based CCTV cameras are
                 built around the PAL television system with 625 lines per frame. Of
                 these, 575 lines are active. All PAL based CCTV cameras should be able
                 to achieve a vertical resolution of 575 lines.
                 Most resolution figures normally define the horizontal resolution. This is
                 a measure of the number of individual pixels per line the camera is able
                 to resolve, and is measured in vertical lines. Horizontal resolution can
                 never be higher than the sensor’s horizontal resolution, and is often
                 lower, due to bandwidth limitations of the circuitry between the sensor
                 and the output.
                 Horizontal resolution and bandwidth are related by the equation:

                     Bandwidth = 1 / Period
                 Each horizontal line lasts about 50 µs (exactly 52 µs). The pixels, or
                 vertical lines, are divided up into this 50 µs. The period is one clock
                 cycle, producing two vertical lines, one black, one white.
                 Therefore:

                     Period = (50 × 10⁻⁶) / (Lines / 2)

                            = (1 × 10⁻⁴) / Lines
                 Therefore the bandwidth can be found by combining these two
                 equations:

                     Bandwidth = 1 / ((1 × 10⁻⁴) / Lines)

                               = Lines × 10000

                    These equations boil down to a very simple rule: if the number of lines
                    or pixels is measured in hundreds, and the bandwidth in MHz, the two
                    are equal, i.e. 400 vertical lines = 4 MHz bandwidth, 600 lines = 6 MHz
                    bandwidth. This is a rough approximation, as the exact PAL line
                    duration is 52 µs, not 50 µs.
                    Bandwidth, probably more than any other parameter, is the most
                    difficult figure to achieve. Bandwidth costs money and separates the
                    good cameras from the bad ones. For square pixels the horizontal
                    resolution would need to be 768 vertical lines, or pixels, which gives
                    almost 8 MHz bandwidth! No CCTV camera can achieve this. Cameras
                    achieving 600 vertical lines are considered good quality.
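The rule of thumb above can be sketched numerically. This is a minimal illustration (the function name is hypothetical), using the equations just derived:

```python
def bandwidth_hz(tv_lines: int, active_line_s: float = 50e-6) -> float:
    """Bandwidth needed to resolve the given number of vertical lines.
    One clock period resolves two lines (one black, one white), spread
    over the active line time: about 50 us, exactly 52 us for PAL."""
    period = active_line_s / (tv_lines / 2)
    return 1.0 / period

print(bandwidth_hz(400) / 1e6)          # about 4 MHz
print(bandwidth_hz(600) / 1e6)          # about 6 MHz
print(bandwidth_hz(768, 52e-6) / 1e6)   # about 7.4 MHz with the exact 52 us
```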

        SNR
                    A camera’s SNR is found by comparing the amount of video signal to the
                    amount of noise, in decibels, with the equation:

                        SNR = 20 log₁₀ (video / noise) dB
                   As a guide, an SNR of about 20dB is poor and is probably not viewable.
                   30dB will give a barely distinguishable image. 50dB is acceptable and
                   60dB good.
                   As a ratio of video signal to noise, 20dB is 10:1, and 60dB is 1000:1.
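The dB conversions above can be computed directly. A minimal sketch (function names are illustrative):

```python
import math

def snr_db(video: float, noise: float) -> float:
    """SNR in decibels from the video and noise amplitudes."""
    return 20 * math.log10(video / noise)

def ratio_from_db(db: float) -> float:
    """Convert an SNR in dB back to a plain video:noise ratio."""
    return 10 ** (db / 20)

print(snr_db(10, 1))      # 20 dB, i.e. a 10:1 ratio
print(ratio_from_db(60))  # 1000.0, i.e. 60 dB is 1000:1
```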

        Sensitivity
                   Sensitivity is a measurement of how much signal the camera produces
                   for a certain amount of light.
                   Sensitivity can be measured as the minimum amount of light that will
                   give a recognisable picture, and is sometimes called ‘minimum
                   illumination’. Figures of below 10 lux should be possible for standard
                    CCTV cameras. However, although this method provides an easy guide
                    to CCTV planners and installers, it is a highly subjective measurement.
                   What is a recognisable picture to one person may be unrecognisable to
                   another.
                   Professional and broadcast cameras use a different, more quantifiable
                   method for measuring sensitivity. The camera is pointed towards a
                    known light source. This is often a 2000 lux source at 3200 K colour
                    temperature. The iris is then closed until the output is exactly
                    700 mV.
                    Thus a reasonably sensitive camera may be f11 at 2000 lux, whereas a
                    less sensitive camera may be f8 at 2000 lux.
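Because the light passed by the iris is proportional to 1/f², two sensitivity ratings taken at the same light level can be compared in stops. A hedged sketch (the function name is hypothetical):

```python
import math

def stops_between(f_more_sensitive: float, f_less_sensitive: float) -> float:
    """Sensitivity advantage, in stops, between two cameras rated at the
    same light level. Light is proportional to 1/f^2, and one stop is a
    factor of sqrt(2) in f number, i.e. a factor of 2 in light."""
    return 2 * math.log2(f_more_sensitive / f_less_sensitive)

# f11 vs f8 at 2000 lux: about 0.9 stops, i.e. nearly twice as sensitive
print(stops_between(11, 8))
```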
                   CCTV camera specifications are often not so consistent. Different lux
                   levels are specified. In the case of low light and night cameras normal
                   colour temperatures are meaningless because the camera is not
                   designed to be lit with standard 3200K light! These cameras often



                    specify the minimum illumination sensitivity, and should quote figures
                    very much less than 1 lux.
                    Dome camera manufacturers specify sensitivity with the dome removed,
                    because the figure is better than with it fitted. Some give figures with
                    the dome fitted as well. The camera would normally be used with the
                    dome fitted, so this factor needs to be remembered. Dome cameras
                    need to be more sensitive than other cameras if they are to overcome
                    the losses through the dome itself.

        Cameras with AGC
                   CCTV cameras with automatic gain control (AGC) add another
                   complication to the specifications. Manufacturers will quote sensitivity
                   figures with AGC switched on. However they will generally quote SNR
                   figures with the AGC switched off. The reasons for this are obvious. It
                   makes the figures look better!

        Output formats
                   CCTV cameras use many different video output formats, from the simple
                   analogue composite output fitted to most cameras, through the analogue
                   Y-C output format, digital formats of one kind or another, and direct
                   computer network outputs used by some of the latest cameras.
                   Specifications always show the SNR, sensitivity, etc. from the best
                   output. The most common output connection people use is the analogue
                   composite output. Many cameras have it fitted and it is a simple
                    connection. However it is also the worst quality output. Some cameras
                    have a component output, the so-called Y-C output. This provides
                    higher quality but is more difficult to connect.
                    Some newer CCTV cameras have computer network connections. These
                    cameras convert the video into a compressed data stream which
                   can be sent down a network cable. Many use the JPEG format, some
                   use the MPEG format, which can be modified and set up to give good
                   quality at low data rates.

        Camera mounting
                   Most general purpose cameras have a screw hole underneath them to
                   secure them to a tripod or bracket. This is the same as is used by many
                    professional still cameras and is based on the ¼” Whitworth thread,
                    with 20 threads per inch.

        Enclosure types
                   Most CCTV enclosures quote conformance to the National Electrical
                   Manufacturers Association (NEMA) standards. These are American
                   standards but are often quoted in manufacturers specifications.
                    IEC Publication 60529 also specifies enclosure types, which are
                    sometimes used in specifications, referred to simply as IP numbers.
 Type   Purpose                      Usage                                               IP No
 1      Indoor. General.             Accidental contact prevention. Falling dust/dirt.   10
 2      Indoor. Light water          As Type 1 with protection from falling & light      11



         protection                    splashing non-corrosive liquid.
 3       Indoor/outdoor. Light water   As Type 2 with protection from sleet & ice. Dust &    54
         protection.                   rain tight.
 3R      Indoor/outdoor. Light water   As Type 3 with further protection from ice build-up   14
         & ice protection
 3S      Indoor/outdoor. Light water   As Type 3 but can still operate with heavy ice        54
         & heavy ice protection        build-up.
 4       Indoor/outdoor. Heavy         As Type 3 with protection from falling, splashing     56
         water protection.             and hose fed water, and condensation.
 4X      Indoor/outdoor. Heavy         As Type 4 with corrosion protection.                  56
         water protection.
         Corrosion resistant
 5       Indoor. Light industrial.     Protection from lint dust & dirt, light splashing,    52
                                       dripping, seepage & condensation of non
                                       corrosive liquids.
 6       Indoor/outdoor. Light         As 3R with protection from limited water              67
         submersion.                   submersion.
 6P      Indoor/outdoor. Prolonged     As 3R with protection from prolonged water            67
         submersion.                   submersion.
 7       Indoor. Hazardous             Protection from light explosions, hazardous dust,     -
         conditions.                   pressure differentials, acetylene, hydrogen,
                                       various hydro-carbons.
 8       Indoor/outdoor. Hazardous     Protection from light explosions, hazardous dust,     -
         conditions.                   pressure differentials, acetylene, hydrogen,
                                       various hydro-carbons.
 9       Indoor. Hazardous             Protection from light explosions, hazardous dust,     -
         conditions.                   pressure differentials, metal dust, carbon dust,
                                       grain dust, fibres.
 10      Indoor/outdoor. Hazardous     Mine safety. Health administration. Protection        -
         conditions.                   from methane and coal dust.
 12      Indoor. Light industrial.     As 5 but with oil protection & no knock-outs.         52
 12K     Indoor. Light industrial.     As 5 but with oil protection & no knock-outs.         52
 13      Indoor. Heavy industrial      As 12 with heavier protection.                        54


                      NEMA Type 4 is a popular enclosure type for many outdoor CCTV
                      cameras. Some attain NEMA Type 6 or 6P.







CCTV lenses
                 CCTV lenses are highly functional, and very much built to purpose. They
                 are simpler than professional and broadcast video and still camera
                 lenses, and are of a quality comparable with domestic and consumer
                 cameras and camcorders.
                 General purpose CCTV cameras were traditionally based on the 1”
                 video tube. Lenses were mounted to the camera with a C mount. (Cine
                 mount. 1” (25.4mm) diameter, 32 threads/inch. Back flange to sensor :
                 0.69” (17.526mm).)
                 Later general purpose CCTV cameras have the more compact CS lens
                 mount. This mount is exactly the same as the C mount but with the back
                 flange to sensor distance reduced to 12.5mm.
                 Many new CCTV cameras are now supplied with fixed lenses. This
                 reduces cost, is simpler for the system designer to work with and install,
                 and allows the camera manufacturer to integrate the lens and camera
                 more closely, adding features to the complete camera that can only be
                 added if the lens and camera are fixed together.
                 Some smaller CCTV cameras are just too small to allow the lens to be
                 removed. Pin-hole cameras and web cameras often only have a simple
                 single convex lens.

       Choosing CCTV lenses
                 Assume you have a general purpose CCTV camera that has been
                 delivered without a lens. What lens do you fit to it? What criteria
                 should you bear in mind when choosing a lens?

       Focal length
                 CCTV lenses are available as either fixed focal length lenses or lenses
                 where the focal length can be varied. Fixed focal length lenses,
                 sometimes called prime lenses, are available in a number of different
                 focal lengths from fish eye and wide angle, through normal, to telephoto
                 and super telephoto.

            Wide angle lenses
                 Wide angle lenses see a wider angle of view than the human eye.
                 Although they cover a greater area, it is difficult to make out detail. The
                 image will also look distorted, with near objects appearing to be very
                 near and far objects very far. Fish eye lenses are super wide angle. The
                 angle of view may be over 180 degrees, and image distortion is so great
                 that the image becomes circular.

            Normal lenses
                 A normal, or standard, lens is a lens that will produce about the same
                 scene on the monitor as the human eye. Geometry and angle of view
                 are similar to the human eye, although the human eye actually has a
                 very odd view of the world that the brain sorts out for us. The normal




                   focal length is about the same as the diagonal distance across the
                   sensor, so will vary from one camera format to another.

              Telephoto lenses
                   Telephoto lenses have a narrower angle of view than the human eye.
                   They magnify the scene. They also shorten depth, with near objects and
                   far objects appearing closer to each other.
                   Super telephoto lenses allow you to see detail from a long distance. The
                   angle of view is very small, and they are difficult to set up and maintain
                   in the correct position. The slightest movement can push them off target.
                    The table below shows the focal lengths (in mm) and angles of view
                    (in degrees) for various lenses, for all 5 common CCTV formats.

                          Angle (°)    ¼”        1/3”      ½”        2/3”      1”
  Fish eye                 > 100       < 1.5     < 2       < 2.5     < 3       < 5
  Wide angle              40 - 100     1.5 - 5   2 - 7     2.5 - 9   3 - 12    5 - 17
  Normal or standard       ≈ 40        ≈ 5       ≈ 7       ≈ 9       ≈ 12      ≈ 17
  Telephoto                8 - 40      5 - 24    7 - 34    9 - 45    12 - 62   17 - 90
  Super telephoto          < 8         > 24      > 34      > 45      > 62      > 90
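The angles in the table can be approximated from the focal length and the sensor width with the standard thin-lens relation, angle = 2·atan(w / 2f). This sketch (names illustrative) uses the horizontal sensor widths from the format table earlier:

```python
import math

# Horizontal sensor widths in mm for each CCTV format (from the format table)
SENSOR_WIDTH_MM = {'1"': 12.8, '2/3"': 8.8, '1/2"': 6.4, '1/3"': 4.8, '1/4"': 3.2}

def angle_of_view_deg(focal_mm: float, fmt: str) -> float:
    """Horizontal angle of view: angle = 2 * atan(width / (2 * focal))."""
    w = SENSOR_WIDTH_MM[fmt]
    return math.degrees(2 * math.atan(w / (2 * focal_mm)))

# A "normal" 8 mm lens on a 1/2" camera gives roughly the 40 degree
# view listed in the table
print(round(angle_of_view_deg(8, '1/2"'), 1))
```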



              Zoom and vari-focus lenses
                    Variable focal length lenses fall into two groups. The motorised lenses
                    are commonly referred to as zoom lenses, just as all variable focal
                    length lenses are in still photography and domestic and professional
                    camcorders.
                   Manual variable focal length CCTV lenses are commonly referred to as
                   vari-focal lenses, to differentiate them from the motorised ones. The
                   focal length can be set up during installation. Once set up they
                   effectively become fixed focal length lenses to the operator, because
                   they cannot be altered remotely.

        Format
                   The next thing to consider is the lens and camera format. Older CCTV
                   cameras were based on the 1” tube. All lenses were therefore matched
                   to these sensors. As tubes were replaced by CCD sensors, so camera
                   designs became smaller. These first sensors copied the 1” tubes,
                    allowing the same lenses to be used. Later, more compact, CCTV
                    cameras were designed with smaller sensors, which required new
                    matching lenses. We now have 5 different CCTV lens and camera
                    formats. It is important to match each camera to the correct lens.








[Figure: a 1" lens on a 1" camera, and a 1/3" lens on a 1/3" camera, each
focusing the image to exactly fill the sensor.]
Figure 113                                                             Good camera and lens combinations

                  Each sensor should have a matched lens fitted. If a 1/3” lens is used on
                  a 1” camera the lens will try to focus the whole image onto a small part
                  of the 1” sensor. The rest of the sensor will pick up nothing. Whether you
                  see a rectangular or circular image in the centre of the picture will
                  depend on if the lens itself has an internal rectangular mask.




[Figure: a 1/3" lens on a 1" camera, and a 1" lens on a 1/3" camera, with
the image either underfilling or overfilling the sensor.]
Figure 114                                                             Poor camera and lens combinations

                  However, if a 1” format lens is fitted to a 1/3” format camera it will still
                  work, although the optics will be incorrect. The lens will be trying to



                             project the image onto a 1” sensor. Although the image will be in focus,
                             the sensor will only pick up the central portion of the image. It will
                             appear that you have fitted a lens with a longer focal length. Put
                             another way, it will be as if you had fitted a telephoto lens onto the
                             camera.

        Lens mounts
                            An important parameter to consider is the lens mount. Most CCTV
                            cameras with removable lenses have a C or CS mount. C mount (cine
                            mount) was the original mount for CCTV cameras. CS (cine small) is a
                            newer mount intended for more compact design.
                             Both the C and CS mounts are a 1” diameter, 32 threads per inch
                             screw. The only difference is that the distance from the back of the
                             lens to the sensor, the back flange to sensor distance, is 17.526mm on
                             the C mount and 12.5mm on the CS mount, a difference of about 5mm.


[Figure: good mounts (in focus): a CS lens on a CS camera (12.5mm), a C
lens on a C camera (17.526mm), and a C lens with a 5mm adaptor on a CS
camera. Bad mounts (out of focus): a CS lens on a C camera and a C lens
on a CS camera.]

Figure 115                                                                C & CS lens mount combinations

                            If the camera is a C mount you can only fit C mount lenses. If it is a CS
                            mount, you can fit either a C or CS lens. However if you fit a C lens to a
                            CS mount camera you must remember to fit a C-CS adaptor. This is



                 really nothing more than a threaded ring that pushes the C lens away
                 from the camera by 5mm, so that the back flange to sensor distance is
                 the same as that of a C mount.
                 It is important to remember that some lenses protrude behind the
                 back flange. It is therefore possible to damage either the lens or the
                 camera if you try to screw a C mount lens onto a CS mount camera
                 without an adaptor.
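The back-focus arithmetic behind these combinations can be sketched as a small compatibility check. The names and the 0.1 mm tolerance are illustrative assumptions, not real specifications:

```python
# Back flange to sensor distances from the text
FLANGE_MM = {"C": 17.526, "CS": 12.5}
ADAPTOR_MM = 5.0  # the C-CS adaptor ring is about 5 mm thick

def fit(lens: str, camera: str, adaptor: bool = False) -> str:
    """Report whether a lens/camera mount combination can focus."""
    distance = FLANGE_MM[camera] + (ADAPTOR_MM if adaptor else 0.0)
    required = FLANGE_MM[lens]
    if abs(distance - required) < 0.1:   # illustrative tolerance
        return "in focus"
    if distance > required:
        return "out of focus: lens too far from the sensor"
    return "out of focus: lens too close; risk of fouling the camera"

print(fit("C", "C"))                  # in focus
print(fit("CS", "C"))                 # out of focus (CS lens on C camera)
print(fit("C", "CS"))                 # out of focus without an adaptor
print(fit("C", "CS", adaptor=True))   # in focus: 12.5 + 5 = 17.5 mm
```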

            Ultra miniature lens mounts
                 Mounts used on smaller CCTV cameras include the M10.5 × 0.5mm and
                 M15.5 × 0.5mm threaded lens mounts. These offer a very small mount
                 and are popular in instrumentation CCTV and “lipstick” cameras.

       Iris and aperture control

            What is the iris?
                 The iris is a mechanical device that varies the size of a hole somewhere
                 inside the lens. The hole itself is called the aperture. The iris allows you
                 to control the amount of light through the lens.
                 The reason for an iris is that camera sensors have a certain range of
                 sensitivity. Too little light and detail in the shadows and gloomy areas is
                 lost. This whole area of the image will turn black. Too much light and
                 detail in light areas of the image tend to be burnt out. This whole area
                 will turn white and may spread into other areas of the image.
                 It is important to adjust the iris so that there is reasonable detail across
                 the whole image. (Of course, if you can also control the lighting this will
                 help.)

            Aperture and depth of field
                 The iris also adjusts the depth of field. This is an important and often
                 forgotten aspect of the iris. Installers often open the iris as far as
                 possible, to create as bright an image as possible, and wonder why it is
                 so difficult to focus. A wide aperture gives a bright image and a narrow
                 depth of field. Focussing is more difficult. A narrow aperture gives a dark
                 image and a wide depth of field. Focussing is easier.

            f stop numbers
                 The aperture is defined as an f number. The lower the number the larger
                 the hole in the lens and the more light gets through. The numbers are
                 standardised as “stop” numbers. The standard numbers start at 1 and
                 each stop is 1.4 (√2) times the last: 1.0, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16,
                 22, 32, 45 and so on.
                 Each f stop lets twice the amount of light through as the next f stop up
                 the scale. The difference between f1, f1.4 and f2 is much greater than it
                 is for high f numbers. Therefore ½ lens stops are often used close to f1,
                 like f1.2 and f1.8.
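The stop series is simply successive powers of √2 (the marked values 5.6, 11 and 22 being conventional roundings of 5.66, 11.3 and 22.6). A short sketch, with an illustrative function name:

```python
import math

def f_stops(n: int) -> list[float]:
    """First n full stops: successive powers of sqrt(2), starting at 1."""
    return [round(math.sqrt(2) ** i, 1) for i in range(n)]

# Note 5.7 and 11.3 are conventionally marked on lenses as 5.6 and 11
print(f_stops(9))   # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0]
```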





                   All lenses are defined by the lowest f number possible. The higher the
                   quality, the lower this number is. Hence a 9mm f1.2 lens is a better, and
                   probably more expensive, lens than a 9mm f1.8 lens. The f1.2 lens will
                   have larger lens elements in it to allow more light to get through when
                   the iris is fully opened.
                   The f number is also mathematically a function of the focal length of the
                   lens. Hence longer focal length lenses have higher f number ranges than
                   shorter focal lengths. Put another way, it is more difficult to get a lot of
                   light through telephoto and super telephoto lenses.
                    In broadcast and still cameras irises close to apertures of about f32.
                    This represents a hole only about 2-3mm in diameter. CCTV cameras
                    often specify minimum apertures far smaller than this; f360 or even
                    higher than f1000 may be used. These lenses are necessary for ultra
                    sensitive night vision cameras if they are also used for daylight
                    operation.

             CCTV lens iris control
                   A CCTV lens iris may be manual or automatic. Manual irises are set
                   during installation and cannot be altered by the operator afterwards.
                   They are good for indoor use where there is little change in lighting
                   conditions during the day (or night).
                    Automatic irises are motorised. There are two types of automatic iris
                    control: video servo and DC servo. With video servo iris control the
                    video signal is sent to the lens. Circuitry in the lens measures the video
                    signal and adjusts the iris so that the video signal is the standard
                    1 volt. With DC servo iris control the video signal is measured in the
                    camera, and a simple DC signal is sent to the lens to control the iris.
                    This type of control is sometimes called galvo control and the lenses
                    galvanometric lenses.
                    Video servo makes the lens a little more expensive than DC servo, and
                    vice versa for the camera, although, in practice, most cameras have
                    both video and DC servo outputs.
                   There has been some confusion in the past about the iris control
                   connection between the camera and lens. For a while there were many
                   different connector types. Camera manufacturers would often provide a
                   plug that installers could fit onto the end of the lens cable before fitting it
                   to the camera. Some camera manufacturers opted to fit simple screw
                   terminals to make it as easy as possible to fit any lens.
                   However, an increasing number of camera and lens manufacturers are
                   opting for a standard 4 pin square plug for iris control. The connector is
                   often called the Panasonic connector, or Hirose connector after the
                   Japanese Hirose Electric company. Its pinout is:

                   Pin    DC servo          Video servo
                   1      Control -         +9V power
                   2      Control +         -
                   3      Drive +           Video
                   4      Drive - (Gnd)     Gnd




Sony Training Services                                                                                              282
Broadcast Fundamentals

            Camera sensor auto-iris
                 Some cameras have sensors that can assist the lens iris. The sensor
                 has an electronic shutter which can be used just like an iris. This is
                 explained in more detail on page 113.

       Lens filters
                 CCTV lenses are sometimes fitted with a filter. These are used to correct
                 for colour imbalance, or protect the camera from possible damage. They
                 include neutral density, and neutral density spot filters, coloured filters
                 and polarising filters.
                 Special effect filters are not used in CCTV cameras. These would
                 detract from the clarity of the image and almost certainly defeat the
                 purpose of the camera.
                 Filters are covered on page 91.


CCTV switchers and control stations
                 A CCTV system may be designed with just one camera and one
                 monitor. This is popular for instrumentation, machine control and remote
                 monitoring in hazardous environments.
                 However many CCTV systems are devised for security and surveillance.
                 In these scenarios there are likely to be many cameras involved. The
                 central control room could have one monitor fitted for each camera.
                 Some systems are designed this way. It allows an operator to view every
                 camera all the time.
                 However if there are a lot of cameras, a single operator will find it more
                 difficult to keep an eye on all the monitors at once. If the site being
                 surveyed has little activity, there may be little need to have a monitor on
                 all the time for every camera. There is also the cost, which becomes a
                 problem if many monitors need to be purchased and maintained.
                 The solution is to view many cameras on just one monitor. This can be
                 done in two ways. Either you can switch the monitor to show a different
                 camera, or you can squash the output from many cameras and fit them
                 all onto one monitor screen as a mosaic.

       Camera switching systems
                 By far the most common way of sharing one monitor amongst many
                 cameras is by using a series of switches to select the camera you want
                 to look at.

            Switching monitor
                 The simplest way of doing this is to have a simple switch in the monitor
                 itself. The monitor has a row of push button switches on its front panel.
                 Pressing one of these buttons connects that camera to the monitor
                 screen. Switching monitors can be used for up to about 8 cameras.




283                                                 Sony Broadcast & Professional Europe
Part 23– CCTV, security & surveillance

             Simple video switch
                   This system can be improved by putting the switch in a separate box.
                   The switch box has many inputs, one for each camera, and a single
                   output for the monitor. Operation is similar to the switching monitor, but
                   this modular arrangement allows for a system to be built with more
                   cameras, and lends itself better to improvement and expansion later on.

             Remote video control
                   Increasing the complexity a little more, systems are available that
                   separate the video signals from the control box. Video signals do not go
                   through the controller's switches themselves, or indeed through the
                   controller's push button panel at all. The push button panel simply sends
                   a control signal to a separate box that has all the video connections and
                   the video switchgear. This method removes the video from the
                   controller's panel, making it easier to route cables and making the
                   controller's area much tidier. Quality tends to be better because the
                   video signals are screened better.

             Computer controlled switching
                   Extending this idea to its logical conclusion, the control link between the
                   controller's panel and the video switching box becomes a computer
                   network link. Both the panel and the box have network addresses. The
                   control panel itself is replaced by a computer. The push buttons become
                   virtual buttons on the computer monitor.
                   This approach provides the ultimate flexibility. Now that the control is
                   over a standard computer network, the control computer can be a very
                   long distance from the video switching box. The system can also be
                   designed with many control computers, and each computer may be
                   given different levels of access to the system.

        Camera control systems
                   CCTV systems often have cameras with remote zoom, pan and tilt
                   capability. Control signals must be sent from the control station to all
                   those cameras that need them.
                   Zoom, pan and tilt are normally controlled by a joystick on the control
                   panel. The controller could have one joystick for every camera.
                   However, if there are many cameras with zoom, pan and tilt capability,
                   this could result in a control panel covered in joysticks. So it is logical to
                   provide one joystick that controls all the cameras. Control signals are
                   not sent to all the cameras at the same time: it is logical to only control
                   the camera that the operator has selected to look through, so camera
                   motion control selection is linked to camera viewing selection.

        Switcher characteristics
                   CCTV switchers have a number of important characteristics that define
                   their quality and suitability.



                        Bandwidth
                             The quality of CCTV equipment is often defined by its bandwidth. 'Width'
                             suggests a difference between a lower frequency and an upper
                             frequency. However, video equipment is easily able to operate down to
                             very low frequencies, so we assume that the lower frequency is in fact
                             zero. The only interesting frequency is the upper limiting frequency: the
                             upper frequency limit is the same as the bandwidth.
                             All pieces of video equipment are able to transmit a certain range of
                             frequencies. Relatively low frequencies up to about 1MHz are easy to
                             process and transmit. Indeed it should be possible for video equipment
                             to handle frequencies of several MHz.
                             However, above about 5MHz, the higher the frequency the greater the
                             losses. There is no sudden loss with increasing frequency; the signal
                             power gradually drops as the frequency increases. So at what point do
                             you decide enough is enough?
                             A CCTV switcher's bandwidth is defined as the frequency at which the
                             signal power has dropped to half its level. This is the same as a 3dB
                             drop (or a -3dB gain) in power. Some specifications quote the
                             bandwidth or frequency response as the 3dB point.
Figure 116                                                                                       Bandwidth
                             (Signal power plotted against frequency: the bandwidth is the frequency
                             at which the power has fallen 3dB below its low-frequency level.)

                             Some specifications quote specific drops in power at specific
                             frequencies. This is not an ideal method as it makes it more difficult to
                             compare with other specifications. Indeed this may be done specifically
                             to hide a poor bandwidth.
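As a sketch of how the 3dB point might be located from measured data, the following Python fragment scans a hypothetical frequency response for the first point at which the power has fallen to half its low-frequency value (the measurement values are invented for illustration):

```python
def bandwidth_3db(freqs_mhz, powers):
    """Return the first frequency at which power has dropped to half
    (-3dB) of its low-frequency reference value, or None if it never does."""
    ref = powers[0]                # assume a flat response at low frequency
    for f, p in zip(freqs_mhz, powers):
        if p <= ref / 2:           # -3dB is the half-power point
            return f
    return None

# Hypothetical measured response of a CCTV switcher
freqs  = [0.1, 1, 2, 3, 4, 5, 6, 7]               # MHz
powers = [1.0, 1.0, 0.98, 0.95, 0.9, 0.8, 0.6, 0.45]
print(bandwidth_3db(freqs, powers))               # 7 (MHz)
```

A real measurement would interpolate between samples, but the principle is the same: the quoted bandwidth is simply where the curve crosses half power.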

                        Signal to noise ratio (SNR)
                             This is another important characteristic of any CCTV processing
                             equipment. There is always a certain amount of noise contained within
                             video signals.





                   The SNR is defined as the ratio of the video signal power to the noise
                   power. If the video signal is properly set up at 0.7V, it is easy to find out
                   what the actual noise level is.


CCTV over IP


Character and shape recognition






Part 21                                            Numbers & equations
Decibels
                 A measure of relative power. First used to measure audio power as
                 Bels. The Bel is now seldom used; the decibel is far more common:
                 1B = 10dB. It is now used to measure signal power in many application
                 areas.
                 The Bel is a logarithmic ratio defined as:

                                    B = log10 (P1 / P2)

                 where P1 and P2 are the two power levels being compared.
                 Therefore decibels can be found by the equation:

                                    dB = 10 log10 (P1 / P2)
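The decibel equation translates directly into a short calculation. A minimal Python sketch:

```python
import math

def power_db(p1: float, p2: float) -> float:
    """Decibels between two power levels: dB = 10 * log10(P1 / P2)."""
    return 10 * math.log10(p1 / p2)

print(round(power_db(100, 1), 1))   # 20.0  (a power ratio of 100 is 20dB)
print(round(power_db(2, 1), 2))     # 3.01  (doubling power is about 3dB)
print(round(power_db(1, 2), 2))     # -3.01 (halving power is about -3dB)
```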

       Decibels & absolute sound levels
                 Decibels are used as a measure of absolute sound levels. However, the
                 decibel is a ratio, not an absolute quantity: it needs a reference level.
                 The threshold of hearing, the lowest level of sound that the human ear
                 can hear, is used as P2 in the equation above.
                 The table below shows audio levels in dB.
                     dB                                   Sound level
                   150-160    Eardrum perforation. Space shuttle taking off.
                   140-150    Jet fighter taking off
                   130-140    Threshold of pain.
                   120-130    Sheet metal rivet gun.
                   110-120    Rock concert, on stage. Close thunder clap.
                   100-110    Busy motorway underpass.
                   90-100     Middle of orchestra playing 1812 overture.
                    80-90     Busy street traffic or motorway hard shoulder. Vacuum cleaner
                    70-80     Average factory.
                    60-70     Department store. Normal close conversation.
                    50-60     Average office.
                    40-50     Quiet street. Average household. Mosquito.
                    30-40     Soft music. Average fridge.
                    20-30     Country garden. Babbling brook.
                    10-20     Rustling leaves. Quiet whisper.
                    0-10      Rustling leaves.
                         0    Threshold of hearing. Perceived silence.




                Decibel as a direct ratio
                                           It is sometimes easy to forget exactly what signal ratio gives a specific
                                           decibel level. The diagram below shows direct power ratios compared to
                                           decibel quantities.

Figure 117                                                                              Decibel to direct ratio relationship
                                           (The graph plots direct power ratio against decibels: a ratio of 0.5 is
                                           -3.01dB, 1 is 0dB, 10 is 10dB, 100 is 20dB, 1000 is 30dB and
                                           10000 is 40dB.)

                                           You can see that a ratio of 1 is exactly 0dB, as one would expect. A
                                           ratio of 10 is 10dB, and a ratio of ½ is about -3dB. Above 10dB the
                                           ratio to dB relationship climbs at a dramatic rate. At 20dB there is a
                                           direct ratio of 100, and at 40dB there is a direct ratio of 10,000.
                                           CD has a 96dB signal to noise ratio. As a direct ratio this is about
                                           4,000,000,000!
                                           Decibels are used because the human ear responds logarithmically,
                                           i.e. we are very sensitive to the slightest sound, but can still handle
                                           relatively loud sounds without damaging our ears.
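Going in the other direction, a decibel value converts back to a direct power ratio as 10^(dB/10). The sketch below also verifies the CD figure quoted above:

```python
def db_to_ratio(db: float) -> float:
    """Direct power ratio corresponding to a decibel value."""
    return 10 ** (db / 10)

print(db_to_ratio(0))            # 1.0
print(db_to_ratio(20))           # 100.0
print(db_to_ratio(96))           # ~3.98e9, i.e. about 4,000,000,000
```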

                Signal to noise ratio
                                           The decibel can be used as a measure of signal to noise ratio. Using
                                           the equation above, P1 is the signal power and P2 is the noise power,
                                           thus:

                                                       dB = 10 log10 (Signal / Noise)
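Because power is proportional to voltage squared, the same ratio expressed from measured voltages uses a factor of 20 instead of 10. A sketch, assuming a hypothetical 1mV of measured noise on a correctly set up 0.7V video signal:

```python
import math

def snr_db_from_voltages(v_signal: float, v_noise: float) -> float:
    """SNR in dB from voltages: power is proportional to V squared,
    so 10*log10((Vs/Vn)**2) = 20*log10(Vs/Vn)."""
    return 20 * math.log10(v_signal / v_noise)

# 0.7 V signal with 1 mV of noise (illustrative value)
print(round(snr_db_from_voltages(0.7, 0.001), 1))   # 56.9 (dB)
```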






Part 22                                                         Things to do
                 Find depth of field picture
                 Finish the filter table
                 Rewrite the Dichroic block chapter
                 Sort out the CCD sensor chapter
                 Sort out the VTR chapter
                 Sort out timecode chapter




289                                                Sony Broadcast & Professional Europe

Broadcast fundamentals

  • 1.
    Broadcast Fundamentals Sony Training Services Version 4 14 February 2006
  • 3.
    Broadcast Fundamentals Part 1 Table of Contents Part 1 Table of Contents i Part 2 The history of television 1 Multimedia timeline 1 Part 3 Image perception & colour 29 The human eye 29 The concept of primary colours 35 Secondary and tertiary colours 36 Hue saturation and luminosity 37 The CIE space 38 Part 4 The basic television signal 40 The problem of getting a picture from A to B 40 Interlaced raster scanning 41 Half lines 42 Synchronisation 44 The oscilloscope 45 Part 5 The monochrome NTSC signal 46 The 405 line system 46 The 525 line monochrome system 46 Frame rate and structure 46 Line rate and structure 47 Bandwidth considerations 47 Part 6 Colour and television 50 Using additive primary colours 50 Ensuring compatability 50 Adding colour 51 Combining R-Y & B-Y 54 Video signal spectra 55 Combining monochrome and colour 56 Using composite video 57 Part 7 Colour NTSC television 58 Similarity to monochrome 58 Choice of subcarrier frequency. 58 Adding colour 59 The vectorscope 61 The gamut 61 The gamut detector 61 Vertical interval structure 62 Part 8 PAL television 64 i Sony Broadcast & Professional Europe
  • 4.
    Part 1 –Tableof contents What is PAL? 64 The PAL signal 65 The PAL chroma signal 65 Choice of subcarrier frequency 67 Bruch blanking 68 The disadvantages of PAL 71 Part 9 SECAM television 72 The video camera 73 Types of video camera 73 System cameras 73 Parts of a video camera 74 Video camera specifications 76 Lenses 79 Refraction 79 The block of glass 80 The prism 80 The convex lens 81 The concave lens 83 Chromatic aberration 84 Spherical aberration 86 Properties of the lens 86 The concave and convex mirrors 88 Lens types 89 Extenders and adaptors 90 Filters 91 Part 10 Early image sensors 92 Selenium detectors 92 The Ionoscope 92 The Orthicon tube 93 The Image Orthicon tube 93 The Vidicon tube 94 Variations on the Vidicon design 95 Part 11 Dichroic blocks 96 The purpose of a dichroic block 96 Mirrors and filters 96 Optical requirements of a dichroic block 98 Variation on a theme 98 Using dichroic blocks in projectors 98 Part 12 CCD sensors 100 Advantages of CCD image sensors 100 Sony Training Services ii
  • 5.
    Broadcast Fundamentals The basics of a CCD 101 Using the CCD as a delay line 102 Using CCD’s as image sensors 106 Back lit sensors 109 Problems with CCD image sensors 110 CCD image sensors with stores 111 HAD technology 115 HyperHAD 117 SuperHAD sensors 117 PowerHAD sensors 117 PowerHAD EX (Eagle) sensors 118 EX View HAD sensors 119 Single chip CCD designs 119 Noise reduction 123 The future of CCD sensors 127 Part 13 The video tape recorder 128 A short history 128 The present day 130 Magnetic recording principles 133 The essentials of helical scan 135 Modern video recorder mechadeck design 140 Variation in tape path designs 147 Definition of a good tape path 148 The servo system 149 Analogue video tape recorder signal processing 150 Popular analogue video recording formats 154 Digital video tape recorders 157 Popular digital video tape formats 159 Part 14 Betacam and varieties 163 Part 15 The video disk recorder 170 History 170 Present day 170 RAID technology 172 Realising RAID systems 179 Part 16 Television receivers & monitors 187 The basic principle 187 Input signals 187 Part 17 Timecode 189 A short history 189 Timecode 190 Timecode’s basic structure 190 iii Sony Broadcast & Professional Europe
  • 6.
    Part 1 –Tableof contents Longitudinal timecode 194 Bi-phase mark coding 199 Adjusting the LTC head 199 Vertical Interval Timecode 202 Drop frame timecode 207 Which timecode am I using ? 207 Timecode use in video recorders 208 Typical VTR timecode controls 208 The future 210 Part 18 SDI (serial digital interface) 211 Parallel digital television 211 Serial digital television 220 Serial digital audio 221 SDI 226 Video index 226 Part 19 Video compression 227 Traditional analogue signals 227 Analogue to digital conversion 227 Compressing digital signals 227 Digital errors in transmission 228 Compensating for digital errors 228 The advantage of digital compression 228 Entropy and redundancy 228 The purpose of any compression scheme 230 Lossless and lossy compression 230 Inter-frame and Intra-frame 231 What is DCT? 232 The church organ 232 The Fourier transform 233 The Discrete Fourier Transform (DFT) 235 Discrete Cosine Transform (DCT) solution to judder 237 What does the result of DCT look like? 238 DCT in video 238 The mathematics of DCT as used for video 241 DCT in audio 243 Basis pictures 243 Why bother? 243 Huffman’s three step process 244 The principle behind variable length codes 248 The results of discrete cosine transforms 248 Using bell curves for variable length coding 248 Decoding variable length codes 250 Sony Training Services iv
  • 7.
    Broadcast Fundamentals Disadvantages of variable length codes 250 The television station 253 The studio 253 The post production studio 253 The edit suite 254 The news studio 254 The outside broadcast vehicle 254 Part 20 CCTV, security & surveillance 256 What is CCTV? 256 CCTV privacy & evidence 256 CCTV use 257 CCTV terminology 259 The typical CCTV chain 263 CCTV cameras 266 Reading CCTV camera specifications 271 CCTV lenses 277 CCTV switchers and control stations 283 CCTV over IP 286 Character and shape recognition 286 Part 21 Numbers & equations 287 Decibels 287 Part 22 Things to do 289 v Sony Broadcast & Professional Europe
  • 9.
    Broadcast Fundamentals Part 2 The history of television Multimedia timeline Prehistoric BC 45,000 Neanderthal carvings on Wooly Mammoth tooth, discovered near Tata, Hungary. 30,000 Ivory horse, oldest known animal carving, from mammoth ivory, discovered near Vogelherd, Germany. 28,000 Cro-Magnon notation, possibly of phases of the moon, carved onto bone, discovered at Blanchard, France. @ 10,000 Engraved antler baton, with seal, salmon and plants portrayed, discovered at Montgaudier, France. 8,000 - 3100 In Mesopotamia, tokens used for accounting and record- keeping 3500 In Sumer, pictographs (cuneiforms) of accounts written on clay tablets. 3400 - 3100 Inscription on Mesopotamian tokens overlap with pictography 2600 Scribes employed in Egypt. 2400 In India, engraved seals identify the writer. 2200 Date of oldest existing document written on papyrus. 1500 Phoenician alphabet. 1400 Oldest record of writing in China, on bones. 1270 Syrian scholar compiles an encyclopedia. 900 China has an organized postal service for government use. 775 Greeks develop a phonetic alphabet, written from left to right. 530 In Greece, a library. 500 Greek telegraph: trumpets, drums, shouting, beacon fires, smoke signals, mirrors. 500 Persia has a form of pony express. 500 Chinese scholars write on bamboo with reeds dipped in pigment. 400 Chinese write on silk as well as wood, bamboo. @ 300 Alexandria library founded by Ptolomy. At its peak the library at Alexandria had about 700000 manuscripts and books and was a magnet for scolary from all over the world. 200 Books written on parchment and vellum. 200 Tipao gazettes are circulated to Chinese officials. 1 Sony Broadcast & Professional Europe
  • 10.
    Part 2 –The history of television 59 Julius Caesar orders postings of Acta Diurna. 48 Alexandria library burnt during Julius Caeser’s siege of Alexandria. AD 100 Roman couriers carry government mail across the empire. 105 T'sai Lun invents paper. 175 Chinese classics are carved in stone which will later be used for rubbings. 180 In China, an elementary zoetrope. 250 Paper use spreads to central Asia. 350 In Egypt, parchment book of Psalms bound in wood covers. 450 Ink on seals is stamped on paper in China. This is true printing. 600 Books printed in China. 700 Sizing agents are used to improve paper quality. 751 Paper manufactured outside of China, in Samarkand by Chinese captured in war. 765 Picture books printed in Japan. 868 The Diamond Sutra, a block-printed book in China. 875 Amazed travelers to China see toilet paper. 950 Paper use spreads west to Spain. 950 Folded books appear in China in place of rolls. 950 Bored women in a Chinese harem invent playing cards. 1000-1499 1000 Mayas in Yucatan, Mexico, make writing paper from tree bark. 1035 Japanese use waste paper to make new paper. 1049 Pi Sheng fabricates movable type, using clay. 1116 Chinese sew pages to make stitched books. 1140 In Egypt, cloth is stripped from mummies to make paper. 1147 Crusader taken prisoner returns with papermaking art, according to a legend. 1200 European monasteries communicate by letter system. 1200 University of Paris starts messenger service. 1241 In Korea, metal type. 1282 In Italy, watermarks are added to paper. 1298 Marco Polo describes use of paper money in China. 1300 Wooden type found in central Asia. Sony Training Services 2
  • 11.
    Broadcast Fundamentals 1305 Taxis family begins private postal service in Europe. 1309 Paper is used in England. 1392 Koreans have a type foundry to produce bronze characters. 1423 Europeans begin Chinese method of block printing. 1450 A few newsletters begin circulating in Europe. 1451 Johnannes Gutenberg uses a press to print an old German poem. 1452 Metal plates are used in printing. 1453 Gutenberg prints the 42-line Bible. 1464 King of France establishes postal system. 1490 Printing of books on paper becomes more common in Europe. 1495 A paper mill is established in England. 1500 – 1599 1500 Arithmetic + and - symbols are used in Europe. 1510 By now approximately 35,000 books have been printed, some 10 million copies. 1520 Spectacles balance on the noses of Europe's educated. 1533 A postmaster in England. 1545 Garamond designs his typeface. 1550 Wallpaper brought to Europe from China by traders. 1560 In Italy, the portable camera obscura allows precise tracing of an image. 1560 Legalized, regulated private postal systems grow in Europe. 1556 The pencil. 1600 – 1699 1609 First regularly published newspaper appears in Germany. 1627 France introduces registered mail. 1631 A French newspaper carries classified ads. 1639 In Boston, someone is appointed to deal with foreign mail. 1639 First printing press in the American colonies. 1640 Kirchner, a German Jesuit, builds a magic lantern. 1650 Leipzig has a daily newspaper. 1653 Parisians can put their postage-paid letters in mail boxes. 1659 Londoners get the penny post. 1661 Postal service within the colony of Virginia. 1673 Mail is delivered on a route between New York and Boston. 3 Sony Broadcast & Professional Europe
  • 12.
    Part 2 –The history of television 1689 Newspapers are printed, at first as unfolded "broadsides." 1696 By now England has 100 paper mills. 1698 Public library opens in Charleston, S.C. 1700 - 1799 1704 A newspaper in Boston prints advertising. 1710 German engraver Le Blon develops three-color printing. 1714 Henry Mill receives patent in England for a typewriter. 1719 Reaumur proposes using wood to make paper. 1725 Scottish printer develops stereotyping system. 1727 Schulze begins science of photochemistry. 1732 In Philadelphia, Ben Franklin starts a circulating library. 1755 Regular mail ship runs between England and the colonies. 1770 The eraser. 1774 Swedish chemist invents a future paper whitener. 1775 Continental Congress authorizes Post Office; Ben Franklin first Postmaster General. 1780 Steel pen points begin to replace quill feathers. 1784 French book is made without rags, from vegetation. 1785 Stagecoaches carry the mail between towns in U.S. 1790 In England the hydraulic press is invented. 1792 Mechanical semaphore signaler built in France. 1792 In Britain, postal money orders. 1792 Postal Act gives mail regularity throughout U.S. 1794 First letter carriers appear on American city streets. 1794 Panorama, forerunner of movie theaters, opens. 1794 Signaling system connects Paris and Lille. 1798 Senefelder in Germany invents lithography. 1799 Robert in France invents a paper-making machine. 1800 - 1899 1800 Paper can be made from vegetable fibers instead of rags. 1800 Letter takes 20 days to reach Savannah from Portland, Maine. 1801 Semaphore system built along the coast of France. 1801 Joseph-Marie Jacquard invents a loom using punch cards. 1803 Fourdrinier continuous web paper-making machine. 1804 In Germany, lithography is invented. 1806 Carbon paper. Sony Training Services 4
  • 13.
    Broadcast Fundamentals 1807 Camera lucida improves image tracing. 1808 Turri of Italy builds a typewriter for a blind contessa. 1817 Jons Berzelius discovered selenium, an element shown in later years to have photo-voltaic effects. The material was a bi-products of chemical processes carried out in a Swedish factory. At first he though the material was tellurium “earth”, but later found it to be a new element and named it selenium from the Greek word “selene” meaning “moon”. 1831 Michael Faraday in Britain and Joseph Henry in the United States experiment with electromagnetism, providing the basis for research into electrical communication. 1844 Samuel Morse publicly demonstrates the telegraph for the first time. 1862 Italian physicist, Abbe Giovanni Caselli, is the first to send fixed images over a long distance, using a system he calls the "pantelegraph". 1873 Two English telegraph engineers, May and Smith, experiment with selenium and light, giving inventors a way of transforming images into electrical signals. 1880 George Carey builds a rudimentary system using dozens of tiny light-sensitive selenium cells. 1884 In Germany, Paul Nipkow patents the first mechanical television scanning system, consisting of a disc with a spiral of holes. As the disc spins, the eye blurs all the points together to re-create the full picture. 1895 Italian physicist Guglielmo Marconi develops radio telegraphy and transmits Morse code by wireless for the first time. 1897 Karl Ferdinand Braun, a German physicist, invents the first cathode-ray tube, the basis of all modern television cameras and receivers. 1900 – 1909 1900 Kodak Brownie makes photography cheaper and simpler. Pupin's loading coil reduces telephone voice distortion. 1901 Sale of phonograph disc made of hard resinous shellac First electric typewriter, the Blickensderfer. Marconi sends a radio signal across the Atlantic. 1902 Germany's Zeiss invents the four-element Tessar camera lens. 
Etched zinc engravings start to replace hand-cut wood blocks. 5 Sony Broadcast & Professional Europe
  • 14.
    Part 2 –The history of television U.S. Navy installs radio telephones aboard ships. Photoelectric scanning can send and receive a picture. Trans-Pacific telephone cable connects Canada and Australia. 1903 Technical improvements in radio, telegraph, phonograph, movies and printing. London Daily Mirror illustrates only with photographs. A telephone answering machine is invented. Fleming invents the diode to improve radio communication. Offset lithography becomes a commercial reality. A photograph is transmitted by wire in Germany. Hine photographs America's underclass. The Great Train Robbery creates demand for fiction movies. The comic book. The double-sided phonograph disc. 1905 In Pittsburgh the first nickelodeon opens. Photography, printing, and post combine in the year's craze, picture postcards. In France, Pathe colors black and white films by machine. In New Zealand, the postage meter is introduced. The Yellow Pages. The juke box; 24 choices. 1906 A program of voice and music is broadcast in the U.S. Lee de Forest invents the three-element vacuum tube. Dunwoody and Pickard build a crystal-and-cat's-whisker radio. An animated cartoon film is produced. Fessenden plays violin for startled ship wireless operators. An experimental sound-on-film motion picture. Strowger invents automatic dial telephone switching. 1907 Bell and Howell develop a film projection system. Lumiere brothers invent still color photography process. Sony Training Services 6
DeForest begins regular radio music broadcasts. In Russia, Boris Rosing develops a theory of television and transmits black-and-white silhouettes of simple shapes, using a mechanical mirror-drum apparatus as a camera and a cathode-ray tube as a receiver.
1908 Campbell-Swinton, a Scottish electrical engineer, publishes proposals for an all-electronic television system that uses a cathode-ray tube for both receiver and camera. In the U.S., Smith introduces true color motion pictures.
1909 Radio distress signal saves 1,700 lives after ships collide. First broadcast talk; the subject: women's suffrage.

1910 – 1919

1910 Sweden's Ekstrom invents the "flying spot" camera light beam.
1911 Efforts are made to bring sound to motion pictures. Rotogravure aids magazine production of photos. "Postal savings system" inaugurated.
1912 U.S. passes law to control radio stations. Motorized movie cameras replace hand cranks. Feedback and heterodyne systems usher in modern radio. First mail carried by airplane.
1913 The portable phonograph is manufactured. Type composing machines roll out of the factory.
1914 A better triode vacuum tube improves radio reception. Radio message is sent to an airplane. In Germany, the 35mm still camera, a Leica. In the U.S., Goddard begins rocket experiments.
First transcontinental telephone call.
1915 Wireless radio service connects U.S. and Japan. Radio-telephone carries speech across the Atlantic. Birth of a Nation sets new movie standards. The electric loudspeaker.
1916 David Sarnoff envisions radio as "a household utility." Cameras get optical rangefinders. Radios get tuners.
1917 Photocomposition begins. Frank Conrad builds a radio station, later KDKA. Condenser microphone aids broadcasting, recording.
1918 First regular airmail service: Washington, D.C. to New York.
1919 The Radio Corporation of America (RCA) is formed. People can now dial telephone numbers themselves. Shortwave radio is invented. Flip-flop circuit invented; will help computers to count.

1920 – 1929

1920 The first broadcasting stations are opened. First cross-country airmail flight in the U.S. Sound recording is done electrically. Post Office accepts the postage meter. KDKA in Pittsburgh broadcasts first scheduled programs.
1921 Quartz crystals keep radio signals from wandering. The word "robot" enters the language. Western Union begins wirephoto service.
1922 A commercial is broadcast, $100 for ten minutes. Technicolor introduces a two-color process for movies. Germany's UFA produces a film with an optical sound track. First 3-D movie; requires spectacles with one red and one green lens. Singers desert phonograph horn mouths for acoustic studios. Nanook of the North, the first documentary.
1923 Vladimir Zworykin patents the "Iconoscope", an electronic camera tube. By the end of the year he has also produced a picture display tube, the "Kinescope". People on one ship can talk to people on another. The ribbon microphone becomes the studio standard. A picture, broken into dots, is sent by wire. 16 mm nonflammable film makes its debut. Kodak introduces home movie equipment. Neon advertising signs. The A.C. Nielsen Company is founded; Nielsen's market research is soon being used by companies deciding where to advertise on radio.
1924 John Logie Baird is the first to transmit a moving silhouette image, using a mechanical system based on Paul Nipkow's model. Low-tech achievement: notebooks get spiral bindings. The Eveready Hour is the first sponsored radio program. At KDKA, Conrad sets up a short-wave radio transmitter. Daily coast-to-coast air mail service. Two and a half million radio sets in the U.S.
1925 John Logie Baird obtains the first actual television picture. Vladimir Zworykin takes out the first patent for colour television. The Leica 35 mm camera sets a new standard. Commercial picture facsimile radio service across the U.S. All-electric phonograph is built. A moving image, the blades of a model windmill, is telecast. From France, a wide-screen film.
1926 John Logie Baird gives the first successful public demonstration of mechanical television at his laboratory in London. The National Broadcasting Company (NBC) is formed by Westinghouse, General Electric and RCA. Commercial picture facsimile radio service across the Atlantic. Some radios get automatic volume control, a mixed blessing. The Book-of-the-Month Club. In the U.S., the first 16mm movie is shot. Goddard launches a liquid-fuel rocket. Permanent radio network, NBC, is formed. Bell Telephone Labs transmit film by television.
1927 The British Broadcasting Corporation is founded. Columbia Phonographic Broadcasting System, later CBS, is formed. Pictures of Herbert Hoover, U.S. Secretary of Commerce, are transmitted 200 miles from Washington D.C. to New York, in the world's first televised speech and first long-distance television transmission. NBC begins two radio networks. Farnsworth assembles a complete electronic TV system. Jolson's "The Jazz Singer" is the first popular "talkie." Movietone offers newsreels in sound. U.S. Radio Act declares public ownership of the airwaves. Technicolor. Negative feedback makes hi-fi possible.
1928 Station W2XBS, RCA's first television station, is established in New York City, creating television's first star, Felix the Cat, the original model of which is featured in Watching TV. Later in the year, the world's first television drama, The Queen's Messenger, is broadcast, using mechanical scanning. John Logie Baird transmits images of London to New York via shortwave. The teletype machine makes its debut. Television sets are put in three homes; programming begins. Baird invents a video disc to record television. In an experiment, television crosses the Atlantic.
In Schenectady, N.Y., the first scheduled television broadcasts. Steamboat Willie introduces Mickey Mouse. A motion picture is shown in color. Times Square gets moving headlines in electric lights. IBM adopts the 80-column punched card.
1929 In London, John Logie Baird opens the world's first television studio, but is still able to produce only crude and jerky images. However, because Baird's television pictures carry so little visual information, it is possible to broadcast them from ordinary medium-wave radio transmitters. Experiments begin on electronic color television. Telegraph ticker sends 500 characters per minute. Ship passengers can phone relatives ashore. Brokers watch stock prices on an automated electric board. Something else new: the car radio. In Germany, magnetic sound recording on plastic tape. Air mail flown from Miami to South America. Bell Labs transmits stills in color by mechanical scanning. Zworykin demonstrates the cathode-ray tube "Kinescope" receiver, with 60 scan lines.

1930 – 1939

1930 The first commercial is televised by Charles Jenkins, who is fined by the U.S. Federal Radio Commission. The BBC begins regular television transmissions. Photo flashbulbs replace dangerous flash powder. "Golden Age" of radio begins in the U.S. Lowell Thomas begins the first regular network newscast. TVs based on the British mechanical system roll off the factory line. Bush's differential analyzer introduces the computer. AT&T tries the picture telephone.
1931 Owned jointly by CKAC and La Presse, Canada's first television station, VE9EC, starts broadcasting in Montreal. Ted Rogers, Sr. receives a licence to broadcast experimental television from his Toronto radio station. Also this year, RCA begins experimental electronic transmissions from the Empire State Building.
Commercial teletype service. Electronic TV broadcasts in Los Angeles and Moscow. Exposure meters go on sale to photographers. NBC experimentally doubles transmission to a 120-line screen.
1932 Parliament creates the Canadian Radio Broadcasting Commission, superseded by the CBC in 1936. Disney adopts a three-color Technicolor process for cartoons. Kodak introduces 8 mm film for home movies. The "Times" of London uses its new Times Roman typeface. Stereophonic sound in a motion picture, "Napoleon." The zoom lens is invented, but a practical model is 21 years off. The light meter. NBC and CBS allow prices to be mentioned in commercials.
1933 Western Television Limited's mechanical television system is toured and demonstrated at Eaton's stores in Toronto, Montreal and Winnipeg. Armstrong invents FM, but its real future is 20 years off. Multiple-flash sports photography. Singing telegrams. Phonograph records go stereo.
1934 Drive-in movie theater opens in New Jersey. Associated Press starts wirephoto service. In Germany, a mobile television truck roams the streets. In Scotland, teletypesetting sets type by phone line. Three-color Technicolor used in a live-action film. Communications Act of 1934 creates the FCC. Half of the homes in the U.S. have radios. Mutual Radio Network begins operations.
1935 William Hoyt Peck of Peck Television of Canada uses a transmitter in Montreal during five weeks of experimental mechanical broadcasts. Germany opens the world's first three-day-a-week filmed television service. France begins broadcasting its first regular transmissions from the top of the Eiffel Tower.
German single-lens reflex roll film camera synchronized for flash bulbs. Also in Germany, audio tape recorders go on sale. IBM's electric typewriter comes off the assembly line. The Penguin paperback book sells for the price of 10 cigarettes. All-electronic VHF television comes out of the lab. Eastman-Kodak develops Kodachrome color film. Nielsen's Audimeter tracks radio audiences.
1936 There are about 2,000 television sets in use around the world. The BBC starts the world's first public high-definition electronic television service in London. Berlin Olympics are televised closed circuit. Bell Labs invents a voice recognition machine. Kodachrome film sharpens color photography. Co-axial cable connects New York to Philadelphia. Alan Turing's "On Computable Numbers" describes a general-purpose computer.
1937 Stibitz of Bell Labs invents the electrical digital calculator. Pulse Code Modulation points the way to digital transmission. NBC sends a mobile TV truck onto New York streets. A recording, the Hindenburg crash, is broadcast coast to coast. Carlson invents the photocopier. Snow White is the first feature-length cartoon.
1938 Allen B. DuMont forms the DuMont television network to compete with RCA. Also this year, DuMont manufactures the first all-electronic television set for sale to the North American public; one of these early DuMont sets is featured in Watching TV. Strobe lighting. Baird demonstrates live TV in color. Broadcasts can be taped and edited. Two brothers named Biro invent the ballpoint pen in Argentina. CBS "World News Roundup" ushers in modern newscasting. DuMont markets an electronic television receiver for the home. Radio drama "War of the Worlds" causes national panic.
1939 Because of the outbreak of war, the BBC abruptly stops broadcasting in the middle of a Mickey Mouse cartoon on September 1, resuming at that same point when peace returns in 1945. The first major display of electronic television in Canada takes place at the Canadian National Exhibition in Toronto. Baseball is televised for the first time. The mechanical scanning system is abandoned. New York World's Fair shows television to the public. Regular TV broadcasts begin in the USA. Air mail service across the Atlantic. Many firsts: sports coverage, variety show, feature film, etc.

1940 – 1949

1940 Dr. Peter Goldmark of CBS introduces a 343-line colour television system for daily transmission, using a disc of three filters (red, green and blue) rotated in front of the camera tube. Fantasia introduces stereo sound to the American public.
1941 North America's current 525-line/30-pictures-a-second standard, known as the NTSC (National Television System Committee) standard, is adopted. Stereo is installed in a Moscow movie theater. FCC sets U.S. TV standards. CBS and NBC start commercial transmission; WW II intervenes. Goldmark at CBS experiments with electronic color TV. Microwave transmission. Zuse's Z3 is the first computer controlled by software.
1942 Atanasoff and Berry build the first electronic digital computer. Kodacolor process produces the color print.
1943 Repeaters on phone lines quiet long-distance call noise.
1944 Harvard's Mark I, the first digital computer, is put in service. IBM offers a typewriter with proportional spacing. NBC presents the first U.S. network newscast, a curiosity.
1945 The BBC resumes regular television transmission, at the same time of day and at exactly the same point in the programme at which it stopped in 1939. Clarke envisions geo-synchronous communication satellites. It is estimated that 14,000 products are made from paper.
1946 NBC and CBS demonstrate rival colour systems. The world's first television broadcast via coaxial cable is transmitted from New York to Washington D.C. Jukeboxes go into mass production. Pennsylvania's ENIAC heralds the modern electronic computer. Automobile radio telephones connect to the telephone network. French engineers build a phototypesetting machine.
1947 A permanent network linking four eastern U.S. stations is established by NBC. On June 3, Canadian General Electric engineers in Windsor receive the first official electronic television broadcast in Canada, transmitted from the new U.S. station WWDT in Detroit. This year also sees the development of the transistor, on which solid-state electronics are based. A Hungarian engineer in England invents holography. The transistor is invented and will replace vacuum tubes. The zoom lens covers baseball's World Series for TV.
1948 Television manufacturing begins in Canada. The television audience increases by 4,000 percent this year, due to a jump in the number of cities with television stations and to the fact that one million homes in the U.S. now have television sets. The U.S. Federal Communications Commission puts a freeze on new television channel allocations until the problem of station-to-station interference is resolved. The LP record arrives on a vinyl disc. Shannon and Weaver of Bell Labs propound information theory. Land's Polaroid camera prints pictures in a minute. Hollywood switches to nonflammable film. Public clamor for television begins; FCC freezes new licenses. Airplane re-broadcasts TV signal across nine states.
1949 The first Emmy Awards are presented, and the Canadian government establishes an interim policy for television, announcing loans for CBC television development. An RCA research team in the U.S. develops the shadow-mask picture tube, permitting a fully electronic colour display. Network TV in the U.S. RCA offers the 45 rpm record. Community Antenna Television, forerunner to cable. Whirlwind at MIT is the first real-time computer. Magnetic core computer memory is invented.

1950 – 1959

1950 Cable TV begins in the U.S., and warnings begin to be issued on the impact of violent programming on children. European broadcasters fix a common picture standard of 625 lines. (By the 1970s, virtually all nations in the world used 625-line service, except for the U.S., Japan, and some others which adopted the 525-line U.S. standard.) Over 100 television stations are in operation in the U.S. Regular USA color television transmission. Vidicon camera tube improves the television picture. Changeable typewriter typefaces in use. A.C. Nielsen's Audimeters track viewer watching.
1951 The first colour television transmissions begin in the U.S. this year. Unfortunately, for technical reasons, the several million existing black-and-white receivers in America cannot pick up the colour programmes, even in black-and-white, and colour sets go blank during television's many hours of black-and-white broadcasting. The experiment is a failure and colour transmissions are stopped. The U.S. sees its first coast-to-coast transmission in a broadcast of the Japanese Peace Conference in San Francisco. One and a half million TV sets in the U.S., a tenfold jump in one year. Cinerama will briefly dazzle with a wide, curved screen and three projectors. Computers are sold commercially. Still cameras get built-in flash units. Coaxial cable reaches coast to coast.
1952 Cable TV systems begin in Canada. On September 6, CBC Television broadcasts from its Montreal station; on September 8, CBC broadcasts from the Toronto station. The first political ads appear on U.S. television networks, when Democrats buy a half-hour slot for Adlai Stevenson. Stevenson is bombarded with hate mail for interfering with a broadcast of I Love Lucy. Eisenhower, Stevenson's political opponent, buys only 20-second commercial spots, and wins the election. 3-D movies offer thrills to the audience. Bing Crosby's company, Crosby Enterprises, tests video recording. Wide-screen Cinerama appears; other systems soon follow. Sony offers a miniature transistor radio. EDVAC takes computer technology a giant leap forward. Univac projects the winner of the presidential election on CBS. Telephone area codes. Zenith proposes a pay-TV system using punched cards.
1953 A microwave network connects CBC television stations in Montreal, Ottawa and Toronto. The first private television stations begin operation in Sudbury and London. Queen Elizabeth's coronation is televised; CBC beats U.S. competitors to the punch by flying footage across the Atlantic. In the USA, TV Guide is launched. The NTSC colour standard is adopted and the USA begins colour transmission again, this time successfully. Japanese television goes on the air for the first time. CATV system uses microwave to bring in distant signals.
1954 Magazines now routinely offer the homemaker tips on arranging living-room furniture for optimal television-viewing pleasure. Radio sets in the world now outnumber newspapers printed daily. Regular colour TV broadcasts established. Sporting events are broadcast live in colour.
Transistor radios are sold.
1955 Tests begin to communicate via fiber optics. Music is recorded on tape in stereo.
1956 Ampex Corporation demonstrates videotape recording, initially used only by television stations. Henri de France develops the SECAM (sequential colour with memory) system. It is adopted in France, and the first SECAM colour transmission between Paris and London takes place in 1960. Several Louisiana congressmen promote a bill to ban all television programmes that portray blacks and whites together in a sympathetic light. Bell tests the picture phone. First transatlantic telephone calls by cable.
1957 The Soviet Union launches the world's first Earth satellite, Sputnik, which sends signals from space. FORTRAN becomes the first high-level language. A surgical operation is televised. The first book to be entirely phototypeset is offset printed.
1958 The CBC's microwave network is extended from Victoria, B.C. to Halifax and Sydney, Nova Scotia, to become the longest television network in the world. Pope Pius XII declares Saint Clare of Assisi the patron saint of television; her placement on the television set is said to guarantee good reception. Videotape delivers colour pictures. Stereo recording is introduced. Data moves over regular phone circuits. Broadcast bounced off a rocket: pre-satellite communication. The laser is introduced. Cable carries FM radio stations.
1959 CBC Radio-Canada Montreal producers go on strike. Bonanza debuts, starring Canadian actor Lorne Greene. Local announcements, weather data and local ads go on cable. The microchip is invented. Xerox manufactures a plain-paper copier. Bell Labs experiments with artificial intelligence. French SECAM and German PAL systems introduced.

1960 – 1969

1960 The Nixon-Kennedy debates are televised, marking the first network use of the split screen. Kennedy performs better on television than Nixon, and it is believed that television helps Kennedy win the election. Sony develops the first all-transistor television receiver, making televisions lighter and more portable. Ninety percent of American homes now own television sets, and America becomes the world's first "television society". There are now about 100 million television sets in operation worldwide. Echo I, a U.S. balloon in orbit, reflects radio signals to Earth. In Rhode Island, an electronic, automated post office. A movie gets Smell-O-Vision, but the public just sniffs. Zenith tests subscription TV; unsuccessful.
1961 The Canadian Television Network (CTV), a privately owned network, begins operations. The beginning of the Dodd hearings in the U.S., which examine the television industry's "rampant and opportunistic use of violence". Boxing match test shows the potential of pay-TV. FCC approves FM stereo broadcasting, spurring FM development. Bell Labs tests communication by light waves. IBM introduces the "golf ball" typewriter. Letraset makes headlines simple. The time-sharing computer is developed.
1962 The Telstar television satellite is launched by the U.S., and starts relaying transatlantic television shortly after its launch. The first programme shows scenes of Paris.
A survey indicates that 90 percent of American households have television sets; 13 percent have more than one. Cable companies import distant signals. FCC requires UHF tuners on television sets. The minicomputer arrives. Comsat is created to launch and operate a global satellite system.
1963 From Holland comes the audio cassette. Zip codes introduced. CBS and NBC TV newscasts expand to 30 minutes in color. PDP-8 becomes the first popular minicomputer. Polaroid instant photography adds color. A communications satellite is placed in geo-synchronous orbit. On November 22, regular television programming is suspended following news of the Kennedy assassination. On November 24, live on television, Jack Ruby murders Lee Harvey Oswald, Kennedy's suspected assassin. Kennedy's funeral is televised the following day. 96 per cent of all American television sets are on for an average of 31 hours out of 72 during this period: watching, many say, simply to share in the crisis.
1964 The Beatles appear for the first time on the Ed Sullivan Show. Procter and Gamble, the largest American advertiser, refuses to advertise on any show that gives "offense, either directly or by inference, to any organized minority group, lodge or other organizations, institutions, residents of any State or section of the country or a commercial organization." Olympic Games in Tokyo telecast live globally by satellite. Touch-Tone telephones and Picturephone service. From Japan, the videotape recorder for home use. Russian scientists bounce a signal off Jupiter. Intelsat, the international satellite organization, is formed.
1965 The Vietnam War becomes the first war to be televised, coinciding with CBS's first colour transmissions and the first Asia-America satellite link. Protesters against the war adopt the television-age slogan, "The whole world is watching".
Sony introduces a small home video recorder. Electronic phone exchange gives customers extra services. Satellites begin domestic TV distribution in the Soviet Union. Computer time-sharing becomes popular. Color news film. Communications satellite Early Bird (Intelsat I) orbits above the Atlantic. Kodak offers Super 8 film for home movies. Cartridge audio tapes go on sale for a few years. Most television broadcasts in the USA are in colour. FCC rules bring structure to cable television. Solid-state equipment spreads through the cable industry.
1966 Colour television signals are transmitted by Canadian stations for the first time. Linotron can produce 1,000 characters per second. Fiber optic cable multiplies communication channels. Xerox sells the Telecopier, a fax machine.
1967 Sony introduces the first lightweight, portable and cheap video recorder, known as the "portapak". The portapak is almost as easy to operate as a tape recorder and leads to an explosion in "do-it-yourself" television, revolutionizing the medium. Also this year, the FCC orders that cigarette ads on television, on radio and in print carry warnings about the health dangers of smoking. Dolby introduces a system that eliminates audio hiss. Computers get the light pen. Pre-recorded movies on videotape are sold for home TV sets. Cordless telephones get some calls. Approximately 200 million telephones in the world, half of them in the U.S.
1968 Sony develops the Trinitron tube, revolutionizing the picture quality of colour television. World television ownership nears 200 million, with 78 million sets in the U.S. alone. The U.S. television industry now has annual revenues of about $2 billion and derives heavy support from tobacco advertisers. FCC approves non-Bell equipment attached to the phone system. The RAM microchip reaches the market.
1969 On July 20, the first television transmission from the moon is viewed by 600 million television viewers around the world. Sesame Street debuts on American public television, and begins to revolutionize adult attitudes about what children are capable of learning. Astronauts send live photographs from the moon.

1970 – 1979

1970 Postal Reform Bill makes the U.S. Postal Service a government corporation. In Germany, a videodisc is demonstrated. U.S. Post Office and Western Union offer Mailgrams. The computer floppy disc is an instant success.
1971 Canada's Anik I, the first domestic geo-synchronous communications satellite, is launched, capable of relaying 12 television programmes simultaneously. India has a single television station, in New Delhi, able to reach only 20 miles outside the city. South Africa has no television at all. Intel builds the microprocessor, "a computer on a chip." Wang 1200 is the first word processor.
1972 The Munich Olympics are broadcast live, drawing an estimated 450 million viewers worldwide. When Israeli athletes are kidnapped by Palestinian terrorists during the games, coverage cuts back and forth between shots of the terrorists and footage of Olympic events. The American-conceived Intelsat system is launched this year, becoming a network and controlling body for the world's communications satellite system. HBO starts pay-TV service for cable. Sony introduces the 3/4-inch "U-matic" cassette VCR. New FCC rules lead to community access channels. Polaroid camera can focus by itself. Digital television comes out of the lab. The BBC offers "Ceefax", a broadcast teletext information system. "Open Skies": any U.S. firm can have communication satellites. Landsat I, "eye-in-the-sky" satellite, is launched.
"Pong" starts the video game craze.
1973 Ninety-six countries now have regular television service. Watergate unfolds on the air in the U.S. and ends the following year with Nixon's resignation. U.S. producers sell nearly $200 million worth of programmes overseas, more than the rest of the world combined. The microcomputer is born in France. IBM's Selectric typewriter is now "self-correcting." The term Electronic News Gathering, or ENG, is introduced. "Teacher-in-the-Sky" satellite begins educational mission.
1975 A study indicates that the average American child during this decade will have spent 10,800 hours in school by the time he or she is 18, but will have seen an average of 20,000 hours of television. Studies also estimate that, by the time he or she is 75, the average American male will have spent nine entire years of his life watching television; the average British male will have spent eight years watching. The microcomputer, in kit form, reaches the U.S. home market. Sony's Betamax and JVC's VHS battle for public acceptance. "Thrilla in Manila"; substantial original cable programming.
1976 The Olympics, broadcast from Montreal, draw an estimated 1 billion viewers worldwide. Apple I computer introduced. Ted Turner delivers programming nationwide by satellite. Still cameras are controlled by microprocessors.
1977 South Africans see television for the first time on May 10, as test transmissions begin from the state-backed South African Broadcasting Corporation. The Pretoria government has yielded to public pressure after years of banning television as morally corrupting. Half the broadcasts are in English, half in Afrikaans. Columbus, Ohio, residents try a two-way cable experiment, QUBE.
1978 Ninety-eight percent of American households have television sets, up from nine percent in 1950. Seventy-eight percent have colour televisions, up from 3.1 percent in 1964.
From Konica, the point-and-shoot camera. PBS goes to satellite for delivery, abandoning telephone lines. Electronic typewriters go on sale.
1979 There are now 300 million television sets in operation worldwide. Flat-screen pocket televisions, with liquid crystal display screens, are patented by the Japanese firm Matsushita. The pocket television is no bigger than a paperback book. A speech recognition machine has a vocabulary of 1,000 words. From Holland comes the digital videodisc read by laser. In Japan, the first cellular phone network. Computerized laser printing is a boon to Chinese printers.

1980 – 1989

1980 During the 1980s, in the U.S. and Germany, laws and policies are enacted to preserve a person's right to television in the event of financial setback. Later in the year, the U.S. Cable News Network (CNN), a 24-hour news channel, goes on the air. India launches its national television network. Sony Walkman tape player starts a fad. In France, a holographic film shows a gull flying. Phototypesetting can be done by laser. Intelsat V relays 12,000 phone calls and 2 color TV channels. Public international electronic fax service, Intelpost, begins. Atlanta gets the first fiber optics system. Addressable converters pinpoint individual homes.
1981 450,000 transistors fit on a silicon chip 1/4-inch square. Hologram technology improves, now in video games. The IBM PC. The laptop computer is introduced. The first mouse pointing device.
1982 From Japan, a camera with electronic picture storage, no film.
USA Today type is set in regional plants by satellite command. Kodak camera uses film on a disc cassette.
1983 Cellular phone network starts in the U.S. Lasers and plastics improve newspaper production. Computer chip holds 288,000 bits of memory. Time names the computer as "Man of the Year." ZIP + 4, the expanded 9-digit ZIP code, is introduced. AT&T is forced to break up; 7 "Baby Bells" are born. American videotext service starts; fails in three years.
1984 Trucks used for SNG (satellite news gathering) transmission. Experimental machine can translate Japanese into English. Portable compact disc player arrives. National Geographic puts a hologram on its cover. A television set can be worn on the wrist. Japanese introduce high-quality facsimile. Camera and tape deck combine in the camcorder. Apple Macintosh, IBM PC AT. The 32-bit microprocessor. The one-megabyte memory chip. Conus relays news feeds for stations on Ku-band satellites.
1985 Digital image processing for editing stills bit by bit. CD-ROM can put 270,000 pages of text on a CD record. Cellular telephones go into cars. Synthetic text-to-speech computer pronounces 20,000 words. A picture, broken into dots, can be transmitted and recreated. USA TV networks begin satellite distribution to affiliates. At Expo, a Sony TV screen measures 40 x 25 metres. Sony builds a radio the size of a credit card. In Japan, 3-D television; no spectacles needed. Pay-per-view channels open for business.
1986
HBO scrambles its signals. Cable shopping networks.

1987
Half of all U.S. homes with TV are on cable. American government deregulates cable industry.

1988
Government brochure mailed to 107 million addresses.

1989
Tiananmen Square demonstrates power of media to inform the world. Pacific Link fiber optic cable opens, can carry 40,000 phone calls.

1990-2000

1990
Flyaway SNG aids foreign reportage. IBM sells Selectric, a sign of the typewriter's passing. Most 2-inch videotape machines are also gone. Videodisc returns in a new laser form.

1991
During the Gulf War, CNN coverage of the conflict is so extensive and wide-ranging that it is commonly remarked, only half in jest, that Saddam Hussein is watching CNN for his military intelligence, instead of relying on his own information-gathering methods. Beauty and the Beast, a cartoon, Oscar nominee as best picture. Denver viewers can order movies at home from list of more than 1,000 titles. Moviegoers astonished by computer morphing in Terminator 2. Baby Bells get government permission to offer information services. Collapse of Soviet anti-Gorbachev plot aided by global system called the Internet. More than 4 billion cassette tape rentals in U.S. alone. 3 out of 4 U.S. homes own VCRs; fastest selling domestic appliance in history.
1992
Cable TV revenues reach $22 billion. At least 50 U.S. cities have competing cable services. After President Bush speaks, 25 million viewers try to phone in their opinions.

1993
A TV Guide poll finds that one in four Americans would not give up television even for a million dollars. Dinosaurs roam the earth in Jurassic Park. Unfounded rumors fly that cellphones cause brain cancer. Demand begins for "V-chip" to block out violent television programs. 1 in 3 Americans does some work at home instead of driving to work.

1994
After 25 years, U.S. government privatizes Internet management. Rolling Stones concert goes to 200 workstations worldwide on Internet "MBone." To reduce Western influence, a dozen nations ban or restrict satellite dishes. Prodigy bulletin board fields 12,000 messages in one day after L.A. quake.

1995
CD-ROM disk can carry a full-length feature film (CD-Video). Sony demonstrates flat TV set. DBS feeds are offered nationwide. Denmark announces plan to put much of the nation on-line within 5 years. Major U.S. dailies create national on-line newspaper network. Lamar Alexander chooses the Internet to announce presidential candidacy. There are over a billion television sets in operation around the world.

2002
Bibliotheca Alexandrina is due to open on April 23. This is intended as the modern equivalent of the ancient Alexandria Library, which burnt down about 1600 years ago with great loss of information and human understanding.
Part 3 Image perception & colour

The human eye

Evolutionary advantage
The human eye is a marvel of evolution. Mapping the evolutionary history of the eye is difficult, but it almost certainly started with some ancient creature that possessed a group of especially light-sensitive cells on the surface of its skin. The ability to sense a possible attack, the presence of food or of a mate, must have conferred a very big advantage, so the eye must have evolved quickly from one generation of creature to another. It is perhaps easy to see how the light-sensitive cells became better, and how the ability to see colour, and then a wide spectrum of colours, must have given creatures a clear advantage over those that could not. Exactly how the lens evolved is less clear. However the lens started its evolution, it obviously gave those creatures that possessed one the ability to see with greater clarity. It is also not clear why certain evolutionary paths favoured the multi-lens compound eye, and why others favoured the single-lens design. Evolution has not been entirely favourable, especially to humans. The human eye is not perfect; it has a few drawbacks, most of which we have adapted to. Some of these shortcomings actually make it easier to design television, as we will see later.

What is the eye?
Most of us have two working eyes. Sight in humans is more important than any of our other senses; if either or both of our eyes fail to work, it is one of the most disabling conditions a human can have. The eye grows from rudimentary skin cells before we are born. Neural connections are made directly to the brain early on in development, and what results is one of the most complex and wonderful structures in the human body.

The eye's structure
As far as broadcast video is concerned most of the complexity of the human eye is irrelevant. However there are a few features and facts about the eye that are interesting. The human eye approximates to a sphere; in fact, for somebody with perfect sight the back of the eye is very close to a perfect sphere. The eye is filled with a jelly-like fluid called the vitreous humor. This fluid keeps the eyeball in shape, and the fact that it is clear means that light can pass through it from front to back.
The front of the eye is covered with a clear protective film called the conjunctiva. Behind this is another protective layer called the cornea. Just behind this is the iris, a muscular ring that allows the amount of light entering the eye to be regulated; in bright light the iris closes. The iris is tinted. There appears to be no functional reason why this is so, but it is what gives the eye its 'colour'. Between the cornea and the iris is a watery fluid called the aqueous humor, which keeps the front of the eye in shape. Behind the iris is the lens. A marvel of evolution, this organic structure focuses light onto the back of the eyeball. The amazing thing about this lens is that its shape can be altered to change the focal length. The ciliary muscle, a small muscle surrounding the lens, squashes it and allows the eye to focus on closer objects. When the muscle relaxes the eye focuses to infinity.

Figure 1 The human eye

(Lens optics is discussed in a later chapter.)

As mentioned, the back of the eye is almost spherical. It comprises a large structure called the retina.

The retina
The retina is a structure that senses light and colour, and sends this information to the brain. It is between 200 and 250 microns thick and comprises various layers. The outermost layer is a pigment layer. This acts as the outer wall of the retina and as a light stop. Inside this are the receptor cells. There are two types of receptor cell: one type is rod shaped, the other is fatter and cone like. For this reason they are commonly referred to as rods and cones. Light hitting these cells starts an electro-chemical reaction in a protein called rhodopsin. This reaction quickly passes along the length of the cell's axon. The end of the axon is connected to the axon of a nerve cell, called a bipolar cell, via a structure called a synapse. A synapse is not actually a connection, but a small gap across which an electro-chemical transfer takes place.

Figure 2 The human retina

Once the transfer has taken place another electro-chemical reaction travels the length of the bipolar cell's axon to its body, and then out along another axon to another synapse. This second synapse connects to another nerve cell called the ganglion cell. The signal passes down the ganglion cell's axon using the same reaction mechanism. The ganglion cells' axons pass across the inner surface of the eyeball and out through the nerve bundle, out of the eye. The bundle passes back into the head and directly to the brain. Light therefore has to pass through the whole thickness of the retina before hitting the rods and cones.
Rods
Rod receptor cells have a broad sensitivity range. They are most sensitive to green, which is near the centre of the visible electromagnetic spectrum.

Figure 3 Rod and cone cells

Rod cells measure the brightness of the image, or put another way, the black and white parts of the image.

Cones
Cone receptor cells have a narrow sensitivity range. There are three types of cone cell. The first is sensitive to light of about 440 nm wavelength (blue), the second to about 530 nm (green), and the third to about 560 nm (red). Cone cells are therefore responsible for seeing colour. Every colour we see is a mix of blue, green and red.

Receptor density across the retina
There are about 120,000,000 rod cells in the retina and about 7,000,000 cone cells. About 64% of the cone cells are sensitive to red light, about 32% to green light and just 2% to blue light. Most of the retina is the same, with an even concentration of rod and cone cells. However there are two areas of the retina where this even concentration is different: the fovea and the blind spot.
The fovea
The lens focuses the centre of the image to a point on the retina called the fovea. This area of the retina has a very dense concentration of receptor cells. Furthermore, all these cells are cones; there are no rod cells in the fovea. The fovea allows the eye to study the centre of an image or scene in great colour detail.

The blind spot
Because all the ganglion cell axons are on the inside of the retina they need to pass out of the eyeball at some point. It stands to reason that wherever this point is there can be no receptor cells at all. This area is therefore known as the blind spot.

Interesting facts about the eye

The eye is far from perfect
Although the eye is a marvel of biological engineering it has a number of design flaws. The cornea, lens and vitreous humor are not absolutely clear. They all reduce the amount of light hitting the retina and colour it slightly.

The eye's image is bent out of shape and upside down
The image falling on the retina is reasonably well proportioned near to the fovea. However, the nearer you get to the outer edge the more compressed and distorted the image becomes. The lens also focuses the image upside down and back to front on the retina.

The brain corrects for imperfections
The brain corrects the image to remove colour casting from the cornea, lens and vitreous humor. It also corrects edge distortion, giving us the impression of a flat, correctly proportioned image.

Having two eyes allows us to measure distance
When focusing on a close object, not only do the lenses squash to focus, but the eyeballs also turn towards each other. This can be used by the brain to measure how far away an object is. You can see this happening by getting a friend to hold their finger up at arm's length and focus on it, then asking them to keep focusing on the finger while slowly moving it closer to their face.

The eye gets bored easily
The eye is very good at seeing change. If you stare at something long enough it will disappear; the brain eventually cancels the image out altogether. Thus the eye works best if it continually moves, scanning across edges and shapes, continually updating what the brain receives.
Images can 'burn in' to the retina
Linked to the last interesting fact: if you stare at something long enough it will appear to disappear, but the image is 'burnt in'. If you then look at something else the original image will appear in negative for a while.

The eye remembers
The electro-chemical reactions in the eye's cells that sense light and pass the signals back to the brain take a certain time to start and stop. A flash is therefore 'stretched', so that the eye effectively sees it for longer than it actually occurs. This effect is known as persistence of vision. Film and television rely heavily on persistence of vision to turn what is actually many still images flashing one after another into what appears to be a constantly changing image.

The eye is good at seeing patterns
The eye can pick out patterns very well. This is a problem for television and digital imagery because lines and pixels tend to stand out. For instance, a digital photograph can appear worse than a conventional photograph of exactly the same resolution, because the digital photograph's pixels form a pattern while the conventional photograph's grains are random.

The eye is very sensitive to green
A third of the cone cells are sensitive to green. The rod cells, although intended for seeing the overall brightness of an image, are also most sensitive to green. This makes the eye particularly sensitive to green, and to changes in the green part of the spectrum. This has an important impact on the design of colour television.

The fovea is not good for dark vision
Rod receptor cells are more sensitive than cone receptor cells. Thus in dark conditions things appear to turn black and white. It is best not to look at something directly in low light, but to look just to the side or above it. This will put the object on a part of the retina where there are plenty of rod cells and you will be able to see it. (Incidentally, it may not be a good idea to look just below an object in dark conditions, as you may put it into the blind spot, where you can't see it at all.)
The concept of primary colours
Any colour can be described as a combination of 3 primary colours. Children are often taught that the three primary colours are Red, Yellow and Blue. This is a perfectly reasonable assumption when learning painting and art; mixing these colours allows children to make almost any colour they want.

Figure 4 Children's primary colours

Subtractive colour mixing
This concept is called subtractive colour mixing, because the overall colour gets darker the more paint you add to the mix.

Figure 5 Subtractive primary colours

In reality Red, Yellow and Blue are not the correct primary colours for subtractive colour mixing. The reason is that mixing Red, Blue and Yellow does not give Black, it makes Brown. True subtractive primaries should remove all colour and brightness when mixed together, i.e. give Black.
The true subtractive primaries are Magenta, Yellow and Cyan. While these three colours might appear close to Red, Yellow and Blue as far as children are concerned, they are sufficiently different to go to Black when mixed together in equal proportions.

Additive colour mixing
The opposite of subtractive colour mixing is additive colour mixing. Additive primary colours are relevant to light: if three additive primary coloured lights are mixed in equal proportions the result is White light. The three additive primary colours are Red, Blue and Green.

Figure 6 Additive primary colours

Secondary and tertiary colours
Each set of primary colours has a set of secondary colours. If you mix any two of the primary colours in equal proportions you will get a secondary colour. In fact the three subtractive primary colours are the secondary colours of the additive primary colours, and vice versa. A tertiary colour is found by mixing equal proportions of all three primary colours. There are only two tertiary colours: White and Black.
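The additive mixes described above can be sketched in a few lines of Python. This is a simple illustration using 8-bit RGB triples; the names and the helper function are ours, not part of any broadcast standard.

```python
# Additive mixing of the three primaries, sketched with 8-bit RGB triples.
# Mixing any two primaries in equal proportions gives a secondary colour;
# all three together give the tertiary colour, White.

RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

def add_mix(*colours):
    """Additive mix: each channel sums and saturates at the 8-bit maximum."""
    return tuple(min(255, sum(c[i] for c in colours)) for i in range(3))

print(add_mix(RED, GREEN))        # (255, 255, 0)   -> Yellow
print(add_mix(RED, BLUE))         # (255, 0, 255)   -> Magenta
print(add_mix(GREEN, BLUE))       # (0, 255, 255)   -> Cyan
print(add_mix(RED, GREEN, BLUE))  # (255, 255, 255) -> White
```

Note how the three secondary colours produced here are exactly the subtractive primaries named above.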
Hue, saturation and luminosity
Colour can be described as a 3-dimensional shape. At the top is White. Half way down is a circle of all the colours at their full intensity; you can see all six primaries, both additive and subtractive, around the edge of the circle. At the bottom is Black.

Figure 7 Colour 3D shape (front & back)

Because this is a 3-dimensional space, it is possible to pick any point and have any colour you want. The line running down the centre runs from White to Black through Grey.

Figure 8 Colour 3D shape (top & bottom)
If you look at the shape from the top you will see a circle with all the colours around the edge and White in the middle. Look at the shape from the bottom and you will see Black in the middle.

Figure 9 Hue, saturation & luminosity

Hue
Hue is the colour itself. You can change the hue by rotating around the centre of the circle.

Saturation
Saturation can be called colour intensity. It is a measure of how far from the centre you are. Zero saturation is White, Grey or Black; full saturation is somewhere on the edge of the circle.

Luminosity
Luminosity is how far up or down the shape you are. If you take any colour and force its luminosity up it will tend towards White, and vice versa down to Black.

Figure 10 Hue, saturation & luminosity

The CIE space
The common method for describing colour is the CIE colour space. This 2-dimensional representation is used for additive colour systems to define the ability of a video system to capture and display colour. As you can see, the NTSC and PAL gamuts are well within the total range of natural colours. The corners of the gamut triangles for NTSC and PAL specify the primary colours, which are different for each standard. Television cameras and displays have a long way to go before they are able to capture and display every colour available in nature.

Figure 11 CIE colour space
Part 4 The basic television signal

The problem of getting a picture from A to B
A picture is a 2-dimensional object: it has height and width. A moving picture adds a third dimension, time, to the other two. If we are to send a moving image from one place to another we need to change the image content into a serial signal.

Film frames
Film conveys a moving image as a series of frames. These are like 2-dimensional chunks of data appearing at once, one after another, so rapidly that the motion appears to be smooth.

The raster scan
A raster scan scans an image and turns it into a serial stream of data. By combining film's method of conveying frames with the raster scan method we can convey a moving image as a serial signal.

The basic raster frame
The normal raster scan, and the method used by all broadcast television standards, scans each line from left to right, and each successive line from top to bottom. This is called a frame. The definition of the signal itself is simple: the brighter the image is at that point on the line, the higher the signal's voltage.

Figure 12 The raster scan
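As a rough sketch, the raster scan amounts to reading a 2-dimensional grid of brightness values out as one serial stream, left to right and top to bottom. The tiny frame and the 0.7 V peak-white scaling below are illustrative choices of ours, not taken from any standard.

```python
# A sketch of the basic raster scan: a 2-D grid of brightness values
# (0 = black, 1 = peak white) is read out left to right, top to bottom,
# producing one serial stream of "voltages" (0.7 V peak white assumed).

frame = [
    [0.0, 0.5, 1.0],
    [0.2, 0.8, 0.1],
]

def raster_scan(frame, white_level=0.7):
    signal = []
    for line in frame:            # top to bottom
        for brightness in line:   # left to right
            signal.append(brightness * white_level)
    return signal

print(raster_scan(frame))   # six samples: 2 lines of 3, brightest -> 0.7
```

Reversing the process at the receiver (cutting the stream back into lines) is what the synchronisation pulses described later make possible.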
Lines and frame rate
We need to decide on a frame rate and the number of lines. We want the highest quality possible, so it would be better to have as many lines as possible and as many frames per second as possible. We would also want to ensure that each line had the highest quality (bandwidth) possible. However, the overall bandwidth of the signal is strictly limited by broadcast standards authorities, so we have to find a reasonable compromise between the number of frames per second, the number of lines, and the quality of each line. An increase in any of these will increase the signal's overall bandwidth.

The blanking intervals

Horizontal blanking
Each raster line is normally referred to as the active line. This is where the line is traced out on the image. There is a short interval between one active line and the next; the scanning system uses this time to fly back to the beginning of the next line. The signal is 'cut' for this period of time to prevent the flyback appearing on the television set. In the raster scanning system this interval is referred to as the horizontal flyback. The interval is also called the line blanking interval or horizontal blanking interval. Thus every video line consists of the active line period and the horizontal blanking interval, which is used as a flyback period.

Vertical blanking
There is a longer interval between one entire scan and the next. During this time the scanning system moves back from the bottom right corner to the top left corner. Just as with the horizontal flyback interval the signal is 'cut', to prevent it appearing on the television screen. This is referred to as vertical flyback, and the interval is normally called the vertical blanking interval.

Interlaced raster scanning
If a frame is raster scanned and the frame rate is the same as that of film, i.e. 24 frames per second, there is a severe amount of picture flicker. This is because every point in the image will have faded before the scanning mechanism can go back around to refresh it.
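The bandwidth compromise can be illustrated with some back-of-envelope arithmetic. The formula and assumptions below are ours, purely for illustration: every line fully active (no blanking), horizontal detail matched to vertical detail at a 4:3 aspect ratio, and one cycle of video conveying two adjacent picture elements.

```python
# A rough, back-of-envelope sketch of the bandwidth trade-off.
# Assumptions (illustrative, not from any broadcast standard):
#   - every line is fully active (no blanking)
#   - horizontal detail matches vertical detail at a 4:3 aspect ratio
#   - one cycle of video conveys two adjacent picture elements

def approx_bandwidth_hz(lines, frames_per_sec, aspect=4 / 3):
    elements_per_line = lines * aspect                      # matching horizontal detail
    elements_per_sec = lines * frames_per_sec * elements_per_line
    return elements_per_sec / 2                             # two elements per cycle

print(approx_bandwidth_hz(525, 30) / 1e6)    # roughly 5.5 MHz
print(approx_bandwidth_hz(1050, 30) / 1e6)   # roughly 22 MHz
```

Note that doubling the line count quadruples the bandwidth: there are more lines, and each line also needs more detail. This is why the compromise between lines, frame rate and line quality is so tight.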
Figure 13 The interlaced raster scan

Televisions could be designed to reduce flicker by increasing the persistence of the screen. However, this would mean any rapid movement on the screen would be seen as blurring and streaking. You could increase the frame rate, but this would increase bandwidth. The solution is to interlace the raster scan. Interlaced scans scan the odd numbered lines first, from top to bottom. Then the raster scan starts from the top again and scans the even lines from top to bottom. This method of scanning reduces flicker by effectively refreshing the image at twice the frame rate. Each of the scans is called a field, and two interlaced fields make up a frame.

Half lines
Modern video standards also take into account that each line in the raster scan is not exactly horizontal. In fact the raster scan is progressing slowly from the top of the image to the bottom at a constant rate, so the left side of each line is actually slightly lower than the right side. Therefore video standards have an odd number of lines per frame. The first field of each frame begins with a whole line and ends with a half line; the second field begins with a half line and ends with a whole line. This system gives a more rectangular raster scanned image.
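The interlaced line ordering can be sketched as follows. The sequential numbering here is purely illustrative (real standards number lines differently, as Part 5 shows), but it makes the half-line point visible: an odd line total cannot split evenly between the two fields.

```python
# Interlace sketch: one set of alternate lines forms field 1, the other
# set forms field 2, so the screen is refreshed twice per frame.

def interlaced_fields(total_lines):
    field1 = list(range(1, total_lines + 1, 2))   # 1, 3, 5, ...
    field2 = list(range(2, total_lines + 1, 2))   # 2, 4, 6, ...
    return field1, field2

f1, f2 = interlaced_fields(525)
print(len(f1), len(f2))   # 263 262 -- an odd total splits unevenly,
                          # hence the half line at each field boundary
print(f1[:4], f2[:4])     # [1, 3, 5, 7] [2, 4, 6, 8]
```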
Figure 14 Basic horizontal and vertical detail (active video, horizontal and vertical blanking, horizontal syncs, equalising pulses, line syncs and broad pulses)
Synchronisation

The basic principle
Synchronisation is the principle of making sure that two pieces of equipment, each with some kind of regular clock or rhythm, run at the same rate. The two pieces of equipment are said to be 'locked' together. Synchronisation is often done with some form of synchronisation signal, generally simply called a sync signal.

How does television sync?
All television equipment contains some form of clock or oscillator. This will have a natural frequency which is close to the correct frequency. Somewhere in the television transmission station will be a master sync pulse generator containing a precision master oscillator. Its frequency is correctly set to within 1 cycle in several million. All the equipment in the transmission station is locked to this master sync pulse generator. This is easy because each piece of equipment's own clock is running at about the same rate; the sync signal 'pulls' the equipment's own oscillator to exactly the correct frequency. The transmission station sends out a television signal that contains sync pulses. All equipment from the transmission station to the television at home contains similar oscillators, which are 'pulled' to exactly the correct frequency by the sync pulses.

Line, or horizontal, sync pulses
Line sync pulses are parts of the video signal that define the beginning of each video line. They occur at a certain time during the horizontal blanking interval. Line sync pulses are short intervals of time where the video signal drops below the voltage specified for black (the blanking level). Line sync pulses have a particular shape, because they are bandwidth limited: the beginning and end of each pulse are sloped. The beginning of the video line is specified as the mid-point of the slope at the beginning of the sync pulse. These pulses are placed at a set time during the horizontal blanking interval. Their position relative to the beginning of the active line is set and known, so once the position of the pulse is found the beginning of the active line is known.

Vertical sync pulses
The vertical blanking interval is more complex, and is relatively longer, than the horizontal blanking interval; its duration is the same as that of many video lines. It contains a complex series of pulses that define the beginning of each field and each frame.
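A toy sketch of line sync detection: look for intervals where the signal drops below the blanking level. Real receivers time from the mid-point of the sloped pulse edge; this simple threshold version, with made-up signal levels (blanking at 0.0, sync tip at -0.3, peak white at 0.7), only shows the idea.

```python
# Toy line-sync detector: a sync pulse is an interval where the signal
# drops below the blanking level. Levels are invented for illustration.

def find_sync_starts(samples, threshold=-0.15):
    """Return the sample indices where the signal crosses below the threshold."""
    starts = []
    below = False
    for i, v in enumerate(samples):
        if v < threshold and not below:
            starts.append(i)
            below = True
        elif v >= threshold:
            below = False
    return starts

# blanking, sync pulse, blanking, active video, blanking, sync pulse, blanking
signal = [0.0, 0.0, -0.3, -0.3, 0.0, 0.4, 0.7, 0.2, 0.0, -0.3, -0.3, 0.0]
print(find_sync_starts(signal))   # [2, 9]
```

Because the pulse positions relative to the active line are fixed and known, finding the pulses is enough to locate the start of every active line.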
Blanked vertical lines
The vertical interval starts and finishes with a few blanked video lines. These are simply video lines with their respective horizontal sync pulses, but with the active line period blanked as well.

Equalising pulses
The vertical interval contains a number of equalising pulses near to the end of one field and the start of the next field. Equalising pulses are shorter than line sync pulses and occur every half line. The reason for this is so that there is the same pattern of equalising pulses for every field, even though the transition between the first field and the second is half way through one line.

Broad pulses
Broad pulses are placed in between the two groups of equalising pulses. These are very wide pulses, in fact so wide that only a small portion of time is spent not in a broad pulse.

Definition of the start of the field
The definition of the start of each field is the beginning of the first broad pulse.

The oscilloscope
An oscilloscope is an instrument that allows engineers to view video signals, not as a picture, but as a constantly changing signal. It shows the kind of signals shown in Figure 14 as a bright trace across the display. All oscilloscopes have more than one input, so that various signals can be compared to one another. They also often allow for complex triggering so that complex or intermittent signals can be caught and studied. Most oscilloscopes use tubes, similar to monochrome televisions; the most modern ones use digital flat-screen colour technology. Engineers use the oscilloscope to check the levels and timings of video signals.
Part 5 The monochrome NTSC signal

The 405 line system
The first important monochrome video signal was the 405 line monochrome system adopted by many countries around the world. Although an important video standard in its time, the 405 line standard is now obsolete. Furthermore, it is different from any of the modern video standards. Therefore we will look at the 525 line monochrome standard as the first important and relevant standard.

The 525 line monochrome system
The 525 line monochrome standard was proposed by the American NTSC (National Television Standards Committee) and quickly became popular. This standard formed a strong basis for the existing 525 line colour system used by many countries around the world, and so it seems sensible to study it first. The 525 line monochrome system has 525 lines per video frame, with 262.5 lines per field.

Frame rate and structure
The chosen frame rate for NTSC was 30 frames per second, or 60 fields per second. It is commonly thought that this was so that NTSC televisions could be locked to the mains power. This is only half true. The mains alternating frequency is not accurate enough to provide a reliable synchronisation signal for television receivers, and television equipment does not use the mains as a locking signal. However, if television equipment is not somehow linked to the mains, the resulting beating and aliasing frequencies can cause undesirable effects on the screen. Making the frame rate the same as the mains power frequency at least makes these undesirable effects stand still.

Field 1 starts on line 1. There are 6 equalisation pulses, then 6 broad pulses, then 6 more equalisation pulses. Normal horizontal syncs start on line 10. The first active video line is line 22, and the last is line 262. Half of line 263 is active. Field 2 starts half way through line 263 with 6 equalisation pulses, 6 broad pulses and 6 more equalisation pulses. Normal horizontal syncs start at the beginning of line 273. The first active video line is line 285 and the last is line 525.
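The monochrome NTSC timing numbers follow directly from the line and frame counts:

```python
# Monochrome NTSC timing, derived from the line and frame counts.
# (Colour NTSC later lowered the frame rate very slightly; the
# monochrome system described here uses exactly 30 frames per second.)

LINES_PER_FRAME = 525
FRAMES_PER_SEC = 30

fields_per_sec = FRAMES_PER_SEC * 2                   # two interlaced fields per frame
line_rate_hz = LINES_PER_FRAME * FRAMES_PER_SEC       # lines per second
line_duration_us = 1e6 / line_rate_hz                 # duration of one line

print(fields_per_sec)               # 60
print(line_rate_hz)                 # 15750
print(round(line_duration_us, 2))   # 63.49
```

This 15,750 Hz figure is the 15.75 kHz line rate quoted in the next section.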
Field start displacement
The trigger point for the start of a field is normally the first broad pulse; this is the point television receivers use to start the next field. However, the line numbering of each field begins before this point, which gives a discrepancy between the technical start of each field and the line numbers: the first broad pulse is at the start of line 4 in field 1, and half way through line 266 in field 2.

Line rate and structure
The line rate is simply the frame rate multiplied by 525, i.e. 15.75 kHz. Disregarding the vertical interval, all NTSC lines have the same basic structure.

Bandwidth considerations
The video signal can have energy that stretches to 10 MHz and beyond. However, because of the highly repetitive nature of video, with each video line similar to the one before and after, and each video field and frame similar to the one before and after, most of the energy is centred around harmonics of the line, field and frame rates. This makes the spectrum look like a series of spikes, with very little between them.

Figure 15 Video signal bandwidth

The television signal is modulated onto a radio frequency carrier before being sent to the transmitter mast and out to the home. The harmonics may spread out either side of the carrier to 10 MHz or more, giving a possible total bandwidth of over 20 MHz. These spreads either side of the carrier are called the upper and lower sidebands. The regulatory authorities assigned a 6 MHz bandwidth to each television channel. The designers of the original television standard therefore had to devise a scheme for restricting the video signal to within this 6 MHz limit.

Figure 16 Video channel bandwidth

Filters are used to cut off as much of the lower sideband as possible. It is not possible to cut everything off, so the filter restrains the lower sideband to just 1.25 MHz. What is left is commonly called the vestigial sideband.
The upper sideband is filtered and restricted to about 4.2 MHz. Filters cannot create a sharp clean cut-off at 4.2 MHz, but rather a smooth roll-off that disappears to zero just below 4.5 MHz. A simple audio carrier is placed at 4.5 MHz, clear of the video signal. Its sidebands do not extend very far and there is nothing left at 4.75 MHz. Thus the total bandwidth, including the video and audio signals, is constrained to 6 MHz.

Quality considerations
The low frequency detail of the video signal is centred around the carrier frequency and the low order sidebands. Fine detail is centred around the high frequency sidebands above 4 MHz. It is worth remembering that random noise also tends to be centred around the high frequency sidebands. Most home television receivers cannot show much detail above 4 MHz, so it is pointless trying to transmit this level of detail to the home.

Radio spectrum and television channels
The regulatory authorities specified a series of analogue television channels 6 MHz apart. Each video carrier is 1.25 MHz from the bottom of the channel, and each audio carrier is 5.75 MHz from the bottom of the channel (4.5 MHz above the video carrier). Television companies have the responsibility to ensure that each channel they transmit has carriers at exactly the correct allocated frequencies, and that the bandwidth is properly filtered to constrain it to within the 6 MHz limit.
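The channel layout arithmetic above can be sketched in a few lines of Python. This is a minimal illustration only; the constant names are my own, and the values come from the text.

```python
# Positions within one 6 MHz NTSC channel, measured in MHz from the
# bottom edge of the channel (values as given in the text).
CHANNEL_WIDTH = 6.0    # total channel allocation
VIDEO_CARRIER = 1.25   # video carrier sits 1.25 MHz above the channel bottom
AUDIO_OFFSET = 4.5     # audio carrier sits 4.5 MHz above the video carrier

audio_carrier = VIDEO_CARRIER + AUDIO_OFFSET   # 5.75 MHz from channel bottom
upper_sideband_top = VIDEO_CARRIER + 4.2       # video rolls off to zero by 4.5
```

Both carriers, and the filtered sidebands, fit inside the 6 MHz allocation with a small guard space at the top of the channel.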
Part 6 Colour and television

Using additive primary colours
The additive primary colour principle has particular relevance to television because television uses light both to detect the image in the camera and to display it in the television set at the other end. The colour television camera splits the image into three separate images, one for the Red part of the image, one for Green and one for Blue. The colour television set has three sets of phosphor dots: one type that shines Red, one that shines Green and one that shines Blue. The television set also has three electron guns, each one targeting one set of phosphor dots.

Original plans
The original idea was to use Red, Green and Blue throughout the whole transmission system, from camera to television set. Both RCA and CBS (amongst others) developed systems that sequentially sent the Red, Green and Blue parts of the original image over a normal monochrome transmission system. RCA chose to send 'dots' of colour, Red, Green, Blue in a rotating sequence. CBS chose to filter successive video fields through a rotating Red/Green/Blue filter wheel. Neither system worked well. What was needed was a system that added a colour element to the existing monochrome signal, allowing those with monochrome television sets to watch television with the same quality as before, while allowing those with new colour televisions to see that same basic quality of image in colour.

Ensuring compatibility
The original ideas for a colour television system were not popular because they were not compatible with the existing monochrome standard.

A compatible monochrome signal
The colour video camera produces three separate Red, Green and Blue images. In theory, simply mixing these three in equal proportions should give a perfect White. However, the human eye is more sensitive to Green than to Red or Blue.
Therefore any miscalculation in generating the Green primary colour in the television set would be more obvious than for Red or Blue. The proportions of Red, Green and Blue were therefore adjusted to match human eye characteristics and standard luminosity curves, and to make up for the non-linear light/voltage characteristic of a
standard video camera and the voltage/light characteristic of the standard television set. The equation for White (Y) is therefore :-

Y = 0.299R + 0.587G + 0.114B

This provided a perfectly balanced signal that could be used to generate a standard monochrome signal compatible with existing monochrome television sets.

Maintaining compatible channel bandwidth
The regulatory authorities are constantly being pressured to provide space on the limited radio spectrum for all kinds of radio services, including commercial radio and television stations, airline radio communications, ambulance, police and other emergency services, radio control model enthusiasts, citizen band radio and HAM radio. They were therefore not prepared to allocate more of the precious radio bandwidth to television companies wanting to switch from monochrome to colour. Designers therefore had to somehow fit the colour television signal into the existing 6 MHz allocated to them for monochrome television.

Adding colour

Colour difference signals
There are three theoretical colour difference signals, each one being the difference of a primary colour from the White (Y) signal. The colour difference signals are therefore (R-Y), (G-Y) and (B-Y). It is possible to generate any hue, saturation or luminance using any three of the four signals Y, (R-Y), (G-Y) and (B-Y). It is also possible to generate the Y signal, or any of the three primary colours, from any three of these four signals. The Y signal was an essential requirement of any compatible colour system; as already mentioned, it is the same as a standard monochrome signal. A decision had to be made as to which two of the three available colour difference signals would be used. The Green signal is a much higher proportion of Y than either Blue or Red, so any miscalculation in Green would be less obvious than it would be for either Red or Blue.
It was therefore decided to use the Red and Blue colour difference signals (R-Y) and (B-Y).

Generating the (R-Y) and (B-Y) colour difference signals
Generating the colour difference signals is a simple piece of mathematics. The relationship between the Y signal and the three primary colour signals has already been established. Thus the (R-Y) colour difference signal is simply :-

R-Y = R - (0.299R + 0.587G + 0.114B)
= R - 0.299R – 0.587G – 0.114B
= 0.701R – 0.587G – 0.114B

Likewise the (B-Y) signal can be derived in the same way.

B-Y = B - (0.299R + 0.587G + 0.114B)
= –0.299R – 0.587G + B – 0.114B
= –0.299R – 0.587G + 0.886B

Component colour video signals
Component colour video signals can either be in the original R, G, B form or, more commonly, are defined as the Y, (R-Y) and (B-Y) signals. Their relationship to the original primary colour signals is as previously mentioned, i.e. :-

Y = 0.299R + 0.587G + 0.114B
(R-Y) = 0.701R – 0.587G – 0.114B
(B-Y) = –0.299R – 0.587G + 0.886B
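The matrix relationships above are easy to verify numerically. A minimal Python sketch (function names are illustrative; the coefficients are from the text):

```python
# Luminance from gamma-corrected R, G, B (coefficients from the text).
def y_from_rgb(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

# The two transmitted colour difference signals.
def colour_difference(r, g, b):
    y = y_from_rgb(r, g, b)
    return r - y, b - y   # (R-Y), (B-Y)

# Pure Red reproduces the leading coefficients derived above:
ry, by = colour_difference(1, 0, 0)   # 0.701 and -0.299
# Equal R, G and B gives pure White: Y = 1 with zero colour difference.
```

Feeding in pure Green or pure Blue recovers the remaining columns of the matrix in the same way.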
Using component video
The advantage of component video is that the three signals are kept apart. This ensures that the quality is kept as high as possible. Analogue component video equipment has completely separate electronics for the three signals. Connection between different pieces of equipment is always done using three separate cables and connectors, or using one cable with three cable cores. Having three separate sets of electronics and three separate connections is obviously more expensive than the single channel needed for monochrome television, so analogue component video is normally reserved for broadcast and professional use.

Component connections
Analogue component is normally connected between different pieces of equipment using three 75 ohm coaxial cables with three BNC connectors at each end. The cables should be about the same length, although this only becomes a problem where the difference between the cables is greater than several metres. BNC cables are often cut to the same length, tied together as a triple, and made up with colour coded BNC connectors at both ends. Conventionally red, green and blue colour coded connectors are used. If R,G,B component video is being used the connectors are used as is. Conventionally, if there is a sync signal provided it is carried on the green signal, hence the phrase "Sync on Green". With Y,(R-Y),(B-Y) component video the red colour coded connectors are conventionally used for the (R-Y) connection, the blue connectors for the (B-Y) connection, and the green ones for the Y connection.
Figure 17 Basic component video signal

Combining R-Y & B-Y
Colour television requires that there be just one colour signal. This must therefore be a combination of the R-Y and B-Y contributions. The designers of the first popular colour television standard decided to combine the two colour difference signals onto a special carrier called a subcarrier, so called because its frequency is under the main RF carrier used to transmit the video signal.

Quadrature amplitude modulation
The designers devised an ingenious way of modulating the two colour signals onto one carrier by using quadrature amplitude modulation. Amplitude modulation is simple to achieve and to understand: the amplitude of the carrier simply follows the level of the signal being modulated. What results is a steady frequency sine wave with varying amplitude.
Figure 18 Quadrature vector representation (subcarrier and quadrature carrier vectors)

This sine wave can be thought of as a rotating vector. The vector rotates in an anticlockwise direction and its length defines the amplitude of the signal. It becomes easy to see how two signals can be modulated onto one subcarrier when you consider them as vectors. One of the signals can be modulated onto a carrier that is delayed by 1/4 cycle (90 degrees). When the two modulated signals are combined they will not interfere with each other because they are 90 degrees apart. The subcarrier is sent with the video signal. This makes decoding the two colour difference signals easy: you simply look at the amplitude of the signal in phase with the subcarrier, and the amplitude of the signal 90 degrees out of phase.

Video signal spectra

The monochrome signal
As explained earlier, the monochrome video signal is highly repetitive, and the signal's spectrum is not a smooth spread of energy from DC to high frequency. There are very definite energy peaks at harmonics of line rate, field rate and frame rate, with very little energy between these peaks. When the monochrome signal is modulated onto the radio carrier, sidebands would normally extend above and below the carrier with energy peaks at harmonics of line, field and frame rate. However the monochrome signal is filtered: the lower sidebands are cut, and the upper sidebands are filtered to about 4.2 MHz.
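The quadrature modulation and synchronous detection described above can be sketched numerically. This is a plain Python illustration; sampling at four times subcarrier is my own choice for the sketch, not part of the standard.

```python
import math

# Two colour difference values modulated onto one subcarrier using
# quadrature amplitude modulation, then recovered by synchronous
# detection.
FSC = 3_579_545.0   # NTSC colour subcarrier, Hz
FS = 4 * FSC        # sample rate: one subcarrier cycle = 4 samples
W = 2 * math.pi * FSC / FS

def modulate(i_val, q_val, n_samples):
    # One signal rides on the cosine carrier, the other on a carrier
    # delayed by 90 degrees (the sine); the two do not interfere.
    return [i_val * math.cos(W * n) + q_val * math.sin(W * n)
            for n in range(n_samples)]

def demodulate(chroma):
    # Multiply by each carrier in turn and average over whole cycles:
    # the in-phase component survives, the quadrature one averages out.
    n = len(chroma)
    i_rec = sum(2 * s * math.cos(W * k) for k, s in enumerate(chroma)) / n
    q_rec = sum(2 * s * math.sin(W * k) for k, s in enumerate(chroma)) / n
    return i_rec, q_rec

chroma = modulate(0.3, -0.2, 8)     # two whole subcarrier cycles
i_rec, q_rec = demodulate(chroma)   # recovers 0.3 and -0.2
```

The averaging step stands in for the low-pass filter a real decoder would use; the key point is that multiplying by the in-phase carrier rejects the 90 degree component entirely.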
The colour signal
The basic colour signal has the same bandwidth as the monochrome signal. When modulated onto the subcarrier it has the same basic bandwidth extending either side of the subcarrier frequency, and its energy is centred around harmonics of line rate, field rate and frame rate. If this signal were added to the monochrome signal it would swamp it. The upper sidebands would also extend way beyond the total 6 MHz channel bandwidth allowed by the regulatory authorities. The colour signal is therefore attenuated so that its total bandwidth is very much smaller. This prevents it from interfering with the monochrome signal, and constrains it to within the total channel bandwidth.

Combining monochrome and colour

Figure 19 Mixing colour and monochrome together (colour sidebands around the colour subcarrier, interleaved with the Y signal harmonics of line rate, 15,734.25 Hz, and frame rate, 30 Hz)
The subcarrier is not only a useful way of combining the two colour difference signals into one signal, by using quadrature modulation. It is also a neat way of combining the colour signal with the monochrome signal. Both the monochrome and colour signals have spectra with energy centred around the harmonics of line rate, with gaps between each harmonic. If the frequency of the subcarrier is carefully chosen, the colour harmonics can be made to sit exactly between the monochrome harmonics. The subcarrier is also chosen to be as high as possible, while ensuring that the whole colour signal stays within the 4.5 MHz overall bandwidth of the complete video signal. Placing the subcarrier as high as possible also ensures that, if there is any interference between the monochrome and colour parts of the signal, it will only affect the high frequency fine detail of the picture. The final video signal, with Y, (R-Y) and (B-Y) mixed together, is called composite video.

Using composite video
Composite video is a very popular method of connecting analogue video between pieces of video equipment because it only requires a single cable. However, although analogue component video requires three cables to connect pieces of video equipment together, it gives higher quality than analogue composite video.

(Diagram: composite video BNC connections – pin assignment, plug, socket, panel and cable connectors, 'T' piece and terminator)
Part 7 Colour NTSC television

Similarity to monochrome
The colour NTSC television signal is based on the monochrome NTSC signal. It has exactly the same number of lines per frame and per field. The active video region is the same, as is the structure of the vertical blanking region, the vertical sync pulses, the horizontal blanking region and the horizontal sync pulses.

Choice of subcarrier frequency
The subcarrier frequency was originally chosen to be between the 227th and the 228th harmonics of line rate. This would make it 3,583,125 Hz.

Figure 20 Colour NTSC bandwidth (I and Q signals around the subcarrier; Y signal harmonics of line rate, 15,734.25 Hz, and frame rate, 30 Hz; audio carrier above)
However this frequency produces interference with the audio carrier signal in the final television signal. The frame rate was therefore altered slightly, from 30 frames per second to 29.97 frames per second, and the subcarrier was moved to 3,579,545 Hz. In hindsight this was probably not a good idea; it may have been better to have moved the audio carrier slightly instead. The fact that NTSC now has a non-integer number of frames per second causes many problems with standardisation and editing. (See page 207)

Adding colour
As mentioned, the colour difference signals must be filtered before they are modulated, to restrict their bandwidth compared to the monochrome signal and prevent them from interfering with it too much. What is more, with a subcarrier frequency of about 3.58 MHz, it would appear that the colour bandwidth capacity above this frequency is only about 0.5 MHz before the colour signal starts to interfere with the audio carrier itself. The solution is not to modulate the R-Y and B-Y signals directly but two other signals called I and Q. It was found that the human eye is more sensitive to colours around the orange/cyan axis, compared to white, than to colours 90 degrees from this around the magenta/green axis. Thus two signals were generated: one called the I (in-phase) signal, which was modulated with the subcarrier on the orange vector, and the other called the Q (quadrature) signal, which was modulated with a subcarrier signal phase shifted by a delay of 90 degrees.

The I signal
The I signal is found by the equation :-

I = 0.877(R-Y) cos 33° – 0.493(B-Y) sin 33°
= 0.74(R-Y) – 0.27(B-Y)

In terms of the original R, G and B signals, I can be described as :-

I = 0.60R – 0.28G – 0.32B

The I signal is asymmetrically filtered to a bandwidth of +0.5 MHz and –1.5 MHz. This allows relatively high definition for the I signal.
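The two subcarrier figures quoted above are linked by the frame rate change. A short sketch reproduces both; expressing the 30 to 29.97 fps change as a factor of 1000/1001 is the standard formulation, though the text does not state it explicitly.

```python
# Original NTSC colour subcarrier proposal: midway between the 227th
# and 228th harmonics of the monochrome line rate.
mono_line_rate = 30 * 525               # 15_750 Hz
fsc_original = 227.5 * mono_line_rate   # 3_583_125 Hz, as quoted above

# Lowering the frame rate by 1000/1001 (30 -> 29.97 fps) drags the
# line rate, and the subcarrier with it, down by the same factor.
colour_line_rate = mono_line_rate * 1000 / 1001   # ~15_734.27 Hz
fsc_revised = 227.5 * colour_line_rate            # ~3_579_545.45 Hz
```

The revised value rounds to the 3,579,545 Hz given in the text, and the revised line rate matches the 15,734.25 Hz shown in the bandwidth figures.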
The Q signal
The Q signal is found by the equation :-

Q = 0.877(R-Y) sin 33° + 0.493(B-Y) cos 33°
= 0.48(R-Y) + 0.41(B-Y)

In terms of the original R, G and B signals, Q can be described as :-

Q = 0.21R – 0.52G + 0.31B
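The rounded coefficients in the I and Q equations above follow directly from the 33 degree rotation. A small Python check (the function name is illustrative; 0.877 and 0.493 are the scaling factors applied to (R-Y) and (B-Y) in the equations):

```python
import math

# I and Q from the colour difference signals: the scaled
# (R-Y)/(B-Y) pair rotated by 33 degrees, per the equations above.
def iq_from_colour_difference(ry, by):
    a = math.radians(33)
    i = 0.877 * ry * math.cos(a) - 0.493 * by * math.sin(a)
    q = 0.877 * ry * math.sin(a) + 0.493 * by * math.cos(a)
    return i, q

i, q = iq_from_colour_difference(1, 0)
# i ~ 0.74 and q ~ 0.48, matching the rounded coefficients above
```

Evaluating at (R-Y) = 0, (B-Y) = 1 recovers the other pair of coefficients, –0.27 and 0.41, in the same way.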
The Q signal is symmetrically filtered to a bandwidth of 0.5 MHz. This allows relatively low definition for the Q signal.

Burst
Between 8 and 10 cycles of subcarrier are sent during the horizontal blanking interval of every line, except during part of the vertical blanking region. The television receiver uses this to lock its own internal oscillator to the correct frequency and phase so that it can decode colour correctly. For NTSC colour video, burst is defined as having a phase of 180 degrees.

The colour NTSC vector display

Figure 21 Colour NTSC vector space (Red 103°, 88 IRE; Magenta 61°, 82 IRE; Yellow 167°, 62 IRE; Green 241°, 82 IRE; Cyan 283°, 88 IRE; Blue 347°, 62 IRE; burst at 180°, 40 IRE; I component at 33°, Q component at 303°)

A good way of showing the colour elements of an NTSC signal is to show the signal on a vector display, also sometimes called a polar display. The vector display shows the amplitude and phase of the chroma (colour) signal. The display is circular. The centre represents zero amplitude; an increase in amplitude is represented by a move towards
the outer edge of the display. The position around the display represents the phase of the signal with respect to the subcarrier. Thus no colour will be seen as a bright dot in the centre of the display. Fully saturated colour will be seen as a bright dot near the edge of the display, and the position of the dot will indicate the colour, or hue. Burst appears as a bright dot to the left of the centre of the display (at 180 degrees).

The vectorscope
A vectorscope is a special kind of oscilloscope for measuring the colour content of a composite video signal. It shows a vector display of the signal and has a graticule with markings for the vertical and horizontal axes, concentric circles for colour saturation, and target boxes for all the important colours and for burst. Engineers use a vectorscope to check that composite video has the correct colour phase and saturation and that there is no distortion on the colour signal.

The gamut
The gamut is the limit that the colour content of a composite signal can attain on a vector display. The NTSC gamut is not circular but shaped more like a rugby ball, i.e. it is tall and thin. No part of the signal can extend beyond the gamut, as this would produce illegal colours.

The gamut detector
A gamut detector is an instrument that is connected between any piece of video equipment and the composite monitor it is playing into. It checks that the video signal is within gamut at every point, and shows any area of the picture that has illegal colours as an enclosed warning area on the monitor.
Vertical interval structure
The vertical interval for colour NTSC is similar to that of monochrome NTSC. The equalisation and broad pulses have the same basic construction.

4 field structure
Colour NTSC differs from monochrome NTSC in that it contains a subcarrier signal, and subcarrier burst elements at the beginning of each video line except during the vertical interval. The phase relationship between subcarrier and horizontal syncs does not repeat every frame, but every 2 frames (4 fields). This relationship is called the SC/H (subcarrier to horizontal sync) relationship. The sequence of fields and frames is called the colour frame sequence. This relationship becomes important when editing. If the final video sequence is to maintain a steady SC/H relationship after it has been edited, edits must be made to the correct field in the 4 field sequence.
Figure 22 Colour NTSC vertical interval (showing 4 field sequence: 6 equalisation pulses, 6 broad pulses and 6 equalisation pulses around each field start)
Part 8 PAL television

What is PAL?
PAL stands for Phase Alternate Line. It describes the way the colour video information is encoded and presented.

The disadvantages of NTSC
The NTSC colour television system is based on the original 525 line per frame monochrome signal. Colour was added to the monochrome signal, without increasing the overall channel bandwidth, by modulating the colour signal onto a subcarrier and hiding it within the upper harmonics of the monochrome signal. A burst of subcarrier just before every line ensured that the television receiver could lock to the subcarrier phase and decode the colour properly. After NTSC was introduced and went into common use it was found that the transmitters suffered from non-linearity problems, i.e. the phase of the chroma shifted as the level of the chroma signal changed. The phase was locked to burst, so it was correct at burst level, but any colour at levels significantly different from burst came out wrong. Hence NTSC television receivers have a Hue control to alter the colours and try to make them look better, and NTSC gained the dubious title "never the same colour". Hue controls never worked completely: you could correct one colour and others would go wrong. At the same time, it was always felt a pity that the change from monochrome NTSC to colour could not also have been used to increase the number of lines per frame and improve the vertical resolution of the image. Two solutions were introduced in later years to overcome the colour problem and increase the number of lines per frame: PAL and SECAM. Each took a slightly different approach to the problem of transmitter non-linearity, but both increased the vertical resolution over NTSC in the same way.

The PAL solution
The PAL system was introduced later than the NTSC system and was able to correct the main disadvantages of the NTSC system.
It aimed to eradicate the colour phase shifting problem by alternating the phase of the (R-Y) part of the colour signal on each successive video line. The phase switches from positive to negative, and negative to positive, for each new line. The PAL receiver is able to use this alternating phase to detect whether the overall colour phase has shifted, at any level of chroma signal, and pull it back to where it should be, resulting in true colour on home television screens. PAL also has 625 lines per frame. This improved the vertical resolution of PAL video, giving a better picture.
The PAL signal

The PAL video line
There are several forms of PAL video line, depending on position in the overall video frame. Most of these are the active video lines. Each active PAL video line consists of a horizontal sync pulse, burst and the video information for that line. The rest of the line is blanked. The total line duration is 64 µs. Blanking extends for 12 µs and the active line for 52 µs.

Start of the line
The start of the line is defined as the half transition point of the leading edge of the horizontal sync pulse. This is 1.5 µs from the end of the last active video region, and 10.5 µs from the beginning of the next active video region.

The frame
The PAL frame consists of 625 video lines. 576 of these are active, leaving 49 lines for the vertical blanking interval. The PAL frame is divided into 2 fields of 312.5 lines each. Field 1 has 287.5 active lines, field 2 has 288.5 active lines.

Vertical blanking parameters

Broad and equalisation pulses
PAL has 6 equalisation pulses, followed by 6 broad pulses, followed by 6 more equalisation pulses. All broad and equalisation pulses have a ½ line duty cycle, i.e. they repeat every ½ line.

Start of the field and frame
The start of field 1 is defined as the half transition point of the first broad pulse, i.e. at the beginning of line 1; for field 2 it is half way through line 313. The start of the frame is the same as the start of field 1.

Video blanking
PAL vertical video blanking extends from lines 311 to 336 between fields 1 and 2, and from lines 623.5 to 23.5 between fields 2 and 1.

The PAL chroma signal
The PAL component (R-Y) and (B-Y) signals are attenuated to reduce their bandwidth, but they are not rematrixed into two signals at a different phase to (R-Y) and (B-Y), as they are in NTSC with the I and Q signals.
The (B-Y) signal is attenuated to a signal called U, and the (R-Y) signal to a signal called V, according to the equations :-
U = 0.492 (B-Y)
V = 0.877 (R-Y)

The U signal is modulated onto the subcarrier and the V signal onto a quadrature signal 90 degrees advanced from subcarrier.

Figure 23 PAL vector space (positive and negative targets for Red, Magenta, Yellow, Green, Cyan and Blue, with swinging burst at 135°)

V switching
The V component of the chroma signal is switched in polarity, negative/positive, positive/negative, every line; line 1 of field 1 is positive. The receiver uses this to determine any difference in chroma phase at any level of chroma.

Burst phase and swinging burst
Burst is chosen to be at 135 degrees. Burst also switches negative/positive, positive/negative, every line, in accordance with the V component of the chroma signal. This is called the swinging burst, and the television receiver uses it to determine whether the V component of the chroma signal is negative or positive on each video line.
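The U and V equations, together with the line-by-line V switch, can be sketched as follows. This is a minimal illustration; treating odd-numbered lines as the positive-V lines is an assumption based only on the statement that line 1 of field 1 is positive.

```python
# PAL chroma components from the colour difference signals.
# V alternates in polarity on successive lines (the "phase
# alternate line" switch); U is sent unswitched.
def pal_uv(ry, by, line_number):
    u = 0.492 * by
    v = 0.877 * ry
    if line_number % 2 == 0:   # assumed convention: even lines carry -V
        v = -v
    return u, v

u1, v1 = pal_uv(0.5, 0.5, 1)   # line 1: positive V
u2, v2 = pal_uv(0.5, 0.5, 2)   # line 2: V sign flipped, U unchanged
```

Averaging the decoded V across a pair of lines is what lets the receiver cancel any constant phase error introduced by the transmitter.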
PAL vector display
The PAL vector display has twice as many box targets as the NTSC display. Half of these are similar to NTSC; the other half are negative versions for those lines where the V component and burst are negative. Thus, for instance, positive red is at 103 degrees and negative red at 257 degrees. Positive cyan is at 283 degrees while negative cyan is at 77 degrees.

Choice of subcarrier frequency
As mentioned earlier, video is highly repetitive, and the bandwidth spectra of the monochrome and colour signals have energy centred around the harmonics of line, field and frame rate. In NTSC, the colour subcarrier is chosen so that the colour harmonics sit in the energy gaps of the Y signal.

V switching component problem
In PAL, however, the V component switches every video line. This means that the colour signal is similar, not every line, but every other line. The energy of the PAL chroma signal is therefore not based on harmonics of line rate but of half line rate. Placing the subcarrier exactly between two Y harmonics would mean half the colour harmonics sat exactly on the Y harmonics, just what we are trying to avoid! So the original designers of PAL chose to place the subcarrier between line harmonics 283 and 284, offset slightly from the centre between these two Y harmonics. This is called a ¼ line offset, and two colour harmonics now sit between each pair of Y harmonics.

Dot crawl problem
It is a happy chance that the NTSC subcarrier phase on subsequent lines is exactly opposite, and on subsequent fields is also exactly opposite. This helps to cancel out any patterning effect due to the subcarrier itself. In PAL, without the ¼ line offset, V component switching causes the exact opposite to happen: the phase of each subsequent line, and field, is the same, causing fine vertical stripes in the picture. With the ¼ line offset this turns into a crawling dot pattern across the image.
Thus the designers of PAL added a further 25 Hz to the subcarrier to 'spoil' this dot patterning effect and make it much less noticeable. This is called the picture frequency shift because it is the same as frame rate.

Subcarrier frequency calculation
Thus the final calculation for the PAL subcarrier is :-

fsc = ((N – ¼) x L x fv) + fv
where fsc = subcarrier frequency, N = chosen harmonic, L = lines per frame and fv = frames per second.

fsc = ((284 – ¼) x 625 x 25) + 25
= (283.75 x 625 x 25) + 25
= 4,433,593.75 + 25
= 4,433,618.75 Hz
= 4.43361875 MHz

Bruch blanking
During the development of PAL it was found that the swinging burst caused problems with some reference generators. If the same pattern of vertical blanking were used in PAL as in NTSC, the first and last burst of each field could have either a positive or negative V component. Bruch blanking is a method of blanking the burst during the vertical interval so that the first and last burst of every field always has a positive V component. The Bruch blanking pattern extends over 4 fields. The table below shows the first and last lines to have burst for all 8 fields.

Field   First line with burst   Last line with burst
1       6                       310
2       320                     622
3       7                       309
4       319                     621
5       6                       310
6       320                     622
7       7                       309
8       319                     621
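The subcarrier calculation above can be reproduced directly:

```python
# PAL subcarrier: harmonic 284 of line rate with a quarter line
# offset, plus the 25 Hz picture frequency shift (values from the
# text).
N = 284    # chosen line rate harmonic
L = 625    # lines per frame
fv = 25    # frames per second

fsc = ((N - 0.25) * L * fv) + fv   # 4_433_618.75 Hz = 4.43361875 MHz
```

Note that (N - ¼) x L x fv is simply 283.75 times the 15,625 Hz line rate, which is what places the colour harmonics between the Y harmonics.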
Figure 24 Component PAL video line timings (Y: 0.7 V video, 0.3 V sync; R-Y & B-Y: ±0.35 V; front porch 1.55 µs, line sync 4.7 µs, back porch 5.8 µs, line blanking 12.05 µs, active line 52 µs, total line 64 µs)
Figure 25 PAL vertical interval (showing 8 field sequence)
Different types of PAL
There are in fact 8 different types of PAL. The differences are small: line durations and subcarrier frequencies differ between the different PAL types. The ITU has given a letter designation to each of the PAL types. These are 'B', 'D', 'G', 'H', 'I', 'M', 'N' and 'Combination N'. Great Britain uses PAL I.

The disadvantages of PAL
PAL is more complex than NTSC (which is more complex than monochrome television). Monochrome television has a 2 field repeating relationship, i.e. 2 fields make one complete frame. NTSC television has a subcarrier to horizontal sync (SC/H) relationship that repeats every 4 fields. The SC/H relationship of PAL, however, is even more complicated and results in an 8 field relationship. This has an impact on editing systems and special effects. Good editing with PAL signals can only be done at the correct 8 field editing point. Decoding is also more complex: it is more difficult to separate the colour and monochrome components.
Part 9 SECAM television

SECAM is another approach to solving the inherent problems of NTSC. Although clever, SECAM is very much more complex than PAL and is not popular in the studio. For that reason it is not covered in great detail here.

SECAM is similar to PAL in structure, with 625 lines per frame, 312.5 lines per field, and 50 fields per second. However, that is where the similarity ends. SECAM transmits Y on every video line, but only one colour difference signal per line, alternating on successive lines: R-Y, then B-Y, then R-Y, and so on. All SECAM receivers therefore have a line memory, so that the colour difference signal from one line can also be used in the decoding of the next line. SECAM also uses a form of low frequency pre-emphasis on the colour difference signals.
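The line-alternate scheme above can be sketched as follows. This is an illustration of the idea, not a real decoder, and whether even-numbered lines carry R-Y or B-Y first is an assumption made here.

```python
# Sketch of SECAM's line-alternate colour scheme: each line carries Y plus
# ONE colour-difference signal, and the receiver's one-line memory supplies
# the other component from the previous line.

def chroma_on_line(line: int) -> str:
    """Which colour-difference signal is transmitted on this line (assumed order)."""
    return "R-Y" if line % 2 == 0 else "B-Y"

def available_for_decode(line: int) -> set:
    """The current line's signal plus the one remembered from the previous line."""
    return {chroma_on_line(line), chroma_on_line(line - 1)}

print(sorted(available_for_decode(100)))  # ['B-Y', 'R-Y']
```

Because the two signals on consecutive lines always differ, the decoder has both colour-difference components available on every line.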
The video camera

Types of video camera

The camera
Although all the cameras mentioned here may be described as video cameras, the true video camera is a unit that simply converts a moving image into an electrical signal. Most surveillance and CCTV cameras fall into this description. Some professional and broadcast cameras fall into this bracket as well, although the more professional ones tend to be dockable (see below).

The video camcorder
The video camcorder differs from the camera in that it can record the image it is looking at onto some storage medium held within itself. The most common camcorders today record to tape, though an increasing number use disk or solid-state technology instead. Much of the design of the initial part of a camcorder, called the camcorder front end, is exactly the same as for a camera. It is when we reach the signal processing part of the camcorder that things start to look a little different.

The dockable camera
The obvious next step is to split the camera part of the camcorder from the recorder part. This has been done in a number of broadcast and professional camcorder designs, and the technology is called 'dockable'. The term 'dockable' is also used for some broadcast system cameras. These cameras have no tape recorder section; they are purely a camera. However, the front end can be split from the back end and form part of a much larger system. Dockable system cameras are described in the next section. Dockable units allow you to 'pick and mix' the back and front halves of a camera depending on the requirements of the shoot, technical reasons, or financial constraints.

System cameras
System cameras are used in television studios and outside broadcast trucks. Their whole philosophy is that the camera forms part of a complete environment that is operated and controlled by a team of people, rather than just one person. The beginning of the system is the lens.
As with many professional and broadcast cameras this will be removable, and will be chosen to match the application for which it is being used. The system camera itself is of the dockable variety. The front end has the optical block and the circuitry for processing the signals from the
optical sensors. The back end handles the conversion into a standard video signal. This may be a standard analogue composite or component connection, a digital connection (maybe compressed), or something a little more professional like a triax connector. Indeed, triax connectors take a true system approach: they allow a very high quality output from the camera, as well as power and control signals to the camera, all in one cable.

Connections from the camera are sent into camera control units. These units allow the camera to be controlled remotely, as well as allowing for adjustments like colour correction. The whole approach allows the cameraman simply to frame the shot. Someone else back in the control room will see feeds from all the cameras and will be able to ensure that there is a good balance between them. Yet another person can take care of colour, ensuring that whites look white and skin tone looks correct, for instance.

Parts of a video camera

The lens
Every camera uses a lens to focus the image. In some cameras the lens is fixed, i.e. it is not removable. In other cameras the lens can be removed and replaced with another with different characteristics and/or quality.

All removable lenses used to be a screw fit. The screw-fit lens is not as popular now as it used to be and is generally only found on cameras where it is unusual to change the lens. The popular method of changing lenses on most modern cameras is the bayonet or breech mount. Rather than screwing the lens into place, bayonet and breech lenses are removed and fitted by a simple twist through about 90 degrees. The action is far quicker, and far more positive, than the screw fixing.

Lens electrical connections
Modern cameras generally require electrical connections between the camera and the lens for three possible controls: focus, zoom and aperture. Some camera lenses may have one, two, or all three of these controls.
These electrical connections allow the camera operator to control the lens from the camera grips, rather than reaching forward to the lens itself. This helps the cameraman balance the camera and keep it steady. Alternatively the electrical connections can be fed into the camera electronics to allow for automatic iris control, or focus and zoom from a remote camera control box.

In some cases the electrical connection is made through the bayonet mount itself. This is useful because it is a good positive action and does not involve any cables. In other cases a separate connection may have to be made after the lens is fitted.
The sensor
Light from the lens passes into the camera itself and onto a sensor. There are various designs of sensor, but they all change the image into an electrical signal.

Colour camera considerations
Colour cameras need to split the image into three primary colours. This can be done using a specially designed colour sensor, or by first splitting the image into three separate images, one for each primary colour, in a special piece of optics called a dichroic splitter block. Each output from the block is sent to a separate sensor, which is really just a normal monochrome sensor. Dichroic blocks and multiple sensors add size, weight and cost to the camera design, but produce a better image. Therefore cheaper colour cameras, and small colour cameras, use colour sensors, while professional and broadcast cameras use dichroic blocks and three, or maybe four, normal sensors. However, there are signs of a radical change in colour camera design allowing for high quality colour cameras with no dichroic block and only one sensor. This is explained in the Parts on Image Sensors.

Signal processing
Signals from the image sensors are passed into the camera's electronics, which buffer, amplify and convert the signals into a form that can be used outside the camera. This could be a composite or component signal, digital or analogue, baseband or compressed. The available outputs will depend on the camera's application, cost, and sometimes size.

The camera's electronics also allow the signals to be modified by the operator. Many cameras have controls for brightness, and maybe some sort of colour control. Professional and broadcast cameras often have complex controls for colour balance, white and black level adjustments, and adjustments like latitude and knee controls.

Camcorder signal processing
Camcorders generally have similar signal processing to cameras, with the same controls and the same outputs.
However, camcorder signal processing also turns the signal from the sensors into a form that can be recorded onto the internal medium. In the case of tape this would be a serial signal with some form of channel coding. Channel coding is where the signal is modified in some way to allow it to be recorded on tape effectively and without loss. Digital camcorders often also use some form of error correction. Disk storage also requires its own type of channel coding and error correction. Solid-state storage does not require any channel coding and may not require any error correction.
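To illustrate what channel coding means, here is a toy NRZI encoder: a '1' toggles the recorded level and a '0' holds it. Real tape channel codes (randomised NRZI, group codes and so on) are far more elaborate; this sketch only shows the idea of reshaping bits before they reach the medium.

```python
# Toy channel code: NRZI. Input bits are turned into a sequence of recorded
# levels in which every '1' is a transition. This removes the dependence on
# absolute polarity, one of the things real tape channel codes also address.

def nrzi_encode(bits):
    level, out = 0, []
    for b in bits:
        if b:
            level ^= 1  # a '1' toggles the output level
        out.append(level)  # a '0' holds the previous level
    return out

print(nrzi_encode([1, 0, 1, 1, 0]))  # [1, 1, 0, 1, 1]
```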
Video camera specifications

Resolution
Resolution is a measure of the resolving power of the camera. All the cameras considered here, colour or monochrome, are of the single-sensor type. The sensor pixels in colour cameras are divided between the three primary colours. Thus, for the same sensor density, there is a difference in resolution between monochrome and colour cameras: monochrome cameras will tend to have a higher resolution than colour cameras.

Still cameras often use the number of pixels in the sensor as a measure of resolution. However, this is not a common method of defining resolution in video cameras. Sensor resolution gives a basic figure for the sensor itself, but in many cases only a proportion of the pixels are actually used in the picture. If specifications mention 'active pixels' or 'effective pixels' rather than simply 'pixels', this gives greater assurance that all the quoted pixels are part of the picture.

The camera's circuitry will also affect the delivered resolution. Badly built circuitry will have a poor bandwidth that reduces the resolution provided by the sensor by the time the signal reaches the output. Having a good sensor and bad circuitry is a waste. CCTV camera resolution figures should always be related to the final output signal.

Resolution figures are sometimes given as vertical resolution. This is the number of active lines in the picture. PAL based CCTV cameras are built around the PAL television system, with 625 lines per frame, of which 576 are active. All PAL based CCTV cameras should therefore be able to achieve a vertical resolution of 576 lines.

Resolution figures are normally given as horizontal resolution. This is a measure of the number of individual pixels per line the camera is able to resolve, and is measured in vertical lines. Horizontal resolution can never be higher than the sensor's horizontal resolution, and is often lower, due to bandwidth limitations of the circuitry.
Horizontal resolution and bandwidth are related by the equation:

    Bandwidth = 1 / Period

Each horizontal active line lasts about 50 µs (more exactly, 52 µs; 50 µs is used here for simplicity). The pixels, or vertical lines, are divided up into this 50 µs. The period is one clock cycle, producing two vertical lines, one black, one white. Therefore:

    Period = (50 × 10⁻⁶) / (Lines / 2)
           = (1 × 10⁻⁴) / Lines

Therefore the bandwidth can be found by combining the two equations:

    Bandwidth = 1 / ((1 × 10⁻⁴) / Lines) = Lines × 10 000

These equations boil down to a very simple rule: if the number of lines or pixels is measured in hundreds, and the bandwidth in MHz, the two are equal, i.e. 400 vertical lines = 4 MHz bandwidth, and 600 lines = 6 MHz bandwidth.

Bandwidth, probably more than any other parameter, is the figure that is most difficult to achieve. Bandwidth costs money, and it separates the good cameras from the bad ones. For square pixels the horizontal resolution would need to be 768 vertical lines, or pixels, which gives almost 8 MHz bandwidth! No CCTV camera can achieve this. Cameras achieving 600 vertical lines are considered good quality.

Sensitivity
Sensitivity is a measurement of how much signal the camera produces for a certain amount of light. Sensitivity can be measured as the minimum amount of light that will give a recognisable picture, sometimes called 'minimum illumination'. Figures below 10 lux should be possible for standard CCTV cameras. However, although this method provides an easy guide to CCTV planners and installers, it is a highly subjective measurement: what is a recognisable picture to one person may be unrecognisable to another.

Professional and broadcast cameras use a different, more quantifiable method for measuring sensitivity. The camera is pointed towards a known light source, often a 2000 lux source at 3200 K colour temperature. The iris is then closed until the output is exactly 700 mV. Thus a reasonably sensitive camera may be f11 at 2000 lux, whereas a less sensitive camera may be f8 at 2000 lux. CCTV camera specifications are often not so consistent; different lux levels are specified.
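Both the lines-to-bandwidth rule and the f11-versus-f8 sensitivity comparison above can be checked numerically. This is a sketch of the arithmetic only; the function names are illustrative.

```python
import math

# The lines-to-bandwidth rule: one cycle of video is one black plus one
# white vertical line, divided into the ~50 us active line used in the text.
def bandwidth_hz(tv_lines: int, active_line_s: float = 50e-6) -> float:
    period = active_line_s / (tv_lines / 2)
    return 1.0 / period

# Light through a lens scales as 1/N^2, so the difference between two
# f-numbers in photographic stops (each stop = a factor of 2 in light) is:
def stops_between(n_low: float, n_high: float) -> float:
    return 2 * math.log2(n_high / n_low)

print(round(bandwidth_hz(400)))        # 4000000: the "400 lines = 4 MHz" rule
print(round(bandwidth_hz(768)))        # 7680000: square pixels need almost 8 MHz
print(round(stops_between(8, 11), 1))  # 0.9: f11 is about one stop (2x) more sensitive
```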
In the case of low light and night cameras normal colour temperatures are meaningless, because the camera is not designed to be lit with standard 3200 K light! These cameras often
specify the minimum illumination sensitivity, and should quote figures very much less than 1 lux.

Dome camera manufacturers specify sensitivity with the dome removed, because the figure is better than with it fitted. Some give figures with the dome fitted as well. Since the camera would normally be used with the dome fitted, this factor needs to be remembered: dome cameras need to be more sensitive than other cameras if they are to overcome the losses through the dome itself.

Signal to noise ratio (SNR)
A camera's SNR is found by comparing the amount of video signal to the amount of noise, in decibels, with the equation:

    SNR = 20 log10 (video / noise) dB

As a guide, an SNR of about 20 dB is poor and is probably not viewable. 30 dB will give a barely distinguishable image. 50 dB is acceptable and 60 dB good. As a ratio of video signal to noise, 20 dB is 10:1, and 60 dB is 1000:1.

Gain
CCTV cameras with automatic gain control (AGC) add another complication to the specifications. Manufacturers will quote sensitivity figures with AGC switched on, but will generally quote SNR figures with the AGC switched off. The reason is obvious: it makes the figures look better!

Output formats
CCTV cameras use many different video output formats, from the simple analogue composite output fitted to most cameras, through the analogue Y-C output format and digital formats of one kind or another, to the direct computer network outputs used by some of the latest cameras. Specifications always show the SNR, sensitivity, etc. from the best output. The most common output connection people use is the analogue composite output. Most cameras have it fitted and it is a simple connection; however, it is also the worst quality output.
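The SNR formula above, coded in both directions, confirms the quoted ratios:

```python
import math

# SNR in dB from a video/noise voltage ratio, and the plain ratio back from dB.
def snr_db(video: float, noise: float) -> float:
    return 20 * math.log10(video / noise)

def snr_ratio(db: float) -> float:
    return 10 ** (db / 20)

print(snr_db(10, 1))   # 20.0: a 10:1 ratio is 20 dB
print(snr_ratio(60))   # 1000.0: 60 dB is 1000:1
```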
Lenses

A lens is a transparent curved object capable of bending light. Most lenses are made from glass, but any clear material will make a lens. Different materials have different optical and physical characteristics, some of which are better than those of glass. A lens is based on some basic properties that any transparent material has with respect to light, the most important being its ability to bend light.

Refraction
If a light ray passes from one transparent material to another it is bent according to the relative refractive indices of the two materials.

Snell's law
Snell's law defines the behaviour of the light ray. It states:

    n1 sin i = n2 sin r

where n1 is the refractive index of one material and n2 is the refractive index of the other; i is the angle of incidence (approach angle) and r is the angle of refraction (leaving angle).

Every material has a different refractive index. The refractive index of air is 1. Therefore the refractive index of any transparent material can be found by rearranging the equation above:

    n2 = n1 sin i / sin r

however n1 = 1, therefore

    n2 = sin i / sin r

The coin in the tank of water

Figure 26 The coin in the tank

Light passing from water to air, or vice versa, is bent because water has a refractive index that is different from that of air. Imagine a tank of water with a coin sitting at the bottom of it. The rays of light coming from the coin pass upwards through the water and out into
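Snell's law can be put straight into code. The air-to-water example (n of roughly 1.33 for water) is an illustrative assumption, not a value from the text.

```python
import math

# Snell's law: n1*sin(i) = n2*sin(r), solved for the refraction angle r.
def refraction_angle_deg(incidence_deg: float, n1: float, n2: float) -> float:
    r = math.asin(n1 * math.sin(math.radians(incidence_deg)) / n2)
    return math.degrees(r)

# A ray entering water at 30 degrees from the normal bends towards it:
print(round(refraction_angle_deg(30.0, 1.0, 1.33), 1))  # 22.1
```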
the air. As they pass from water to air they are bent by an angle relative to your viewing angle from the vertical. Thus if you look at the coin from any angle other than directly above it, it will appear to be in a different position than it actually is.

The block of glass
The next step is to imagine a block or thick sheet of any clear material, like glass.

Figure 27 Refraction through a block of glass

Light passing through the glass at an angle is bent as it passes from air to glass, and out again from glass to air. The angle through which the light ray bends as it passes from air to glass is exactly equal but opposite to the angle as it passes from glass to air. The ray of light on the incoming side of the glass is therefore parallel to, but displaced from, the ray on the outgoing side. This displacement can be found from the following equation:

    d = t sin i (1 - 1/n)

where d is the displacement, t is the thickness of the glass, i is the incident angle and n is the refractive index of the glass. Notice that the refracted angle r is not part of the equation: the entrance and exit rays are parallel, so it is irrelevant.

The prism
A prism is a little like the block of glass we have just looked at, but with the two sides of the glass not parallel to one another. The most common prism is a block of glass with a triangular section. The sides of the triangular section can be at any angle to one another,
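The displacement formula is easy to evaluate. The example values (10 mm of glass, n = 1.5, 30 degree incidence) are assumptions for illustration.

```python
import math

# Lateral displacement of a ray through a parallel-sided glass block, using
# the simplified d = t*sin(i)*(1 - 1/n) from the text (an approximation of
# the exact expression t*sin(i - r)/cos(r)).
def displacement_mm(thickness_mm: float, incidence_deg: float, n: float) -> float:
    return thickness_mm * math.sin(math.radians(incidence_deg)) * (1 - 1 / n)

print(round(displacement_mm(10.0, 30.0, 1.5), 2))  # 1.67 mm sideways shift
```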
although most prisms have something approaching an equilateral triangular section.

Light bending properties of a prism
Light entering one side of the prism is bent and leaves the prism in a different direction. The angle of bend is called the deviation and can be found from the equation:

    D = A (n - 1)

where D is the deviation, A is the angle of the prism, and n is the refractive index of the glass (or whatever material the prism is made of).

Figure 28 Refraction through a prism

Colour splitting properties of a prism
White light is made up of many different colours, each with a different wavelength.

Figure 29 Splitting light through a prism

When a ray of white light passes from one transparent material to another, the different wavelengths are refracted by different angles. This has the effect of splitting the white light into its constituent colours.

The convex lens
The convex lens is a little like a series of prisms placed next to each other, all with slightly different angles between their two sides.
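The deviation formula also shows the colour-splitting effect: because n depends on wavelength, blue deviates more than red. The two indices below are assumed, typical crown-glass values, not figures from the text.

```python
# Thin-prism deviation D = A*(n - 1) from the text, valid for small prism angles.
def deviation_deg(prism_angle_deg: float, n: float) -> float:
    return prism_angle_deg * (n - 1)

n_blue, n_red = 1.53, 1.51  # assumed indices; n is higher for shorter wavelengths
spread = deviation_deg(10.0, n_blue) - deviation_deg(10.0, n_red)
print(round(spread, 2))  # 0.2 degrees of angular colour spread for a 10-degree prism
```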
Figure 30 The convex lens as prisms

In fact if you increase the number of prisms, making them smaller and smaller, you will eventually have a perfect convex lens.

Convex lenses have a focal point: if parallel light enters one side of the lens it is focused to a single point. This is the basis for all lens designs. If the lens did this perfectly, all lens designs would be just one convex element. However, as we will see, the convex lens is not perfect, and certain things have to be done to eliminate its imperfections.

Figure 31 The convex lens

The sides of most convex lenses are made from part of a sphere. While this is easy to produce, and perfectly good enough for most lenses, it can present problems for certain lenses.

Figure 32 The convex lens as part of two spheres
The concave lens
The concave lens is effectively the opposite of the convex lens. In the same way it can be seen as an infinite arrangement of prisms, and its two sides are based on parts of a sphere.

Figure 33 The concave lens as prisms

Figure 34 The concave lens focus

Concave lenses also have a focal point. However, the concave lens focal point is a 'virtual' focal point on the approach side of the lens. This focal point has no practical purpose in the way it does with convex lenses, but is used mathematically to calculate the properties of lens designs that use concave lens elements.

Figure 35 The concave lens as spheres
Chromatic aberration
There are two types of chromatic aberration: axial (sometimes called longitudinal) chromatic aberration and lateral (sometimes called transverse) chromatic aberration.

Axial chromatic aberration
The basic prism showed how it is possible to split white light into its constituent colours. Any refractive surface will bend different coloured light by different amounts; this effect is called dispersion.

Figure 36 Axial chromatic aberration

A lens is really an infinite number of small prisms laid out in a particular way. It therefore stands to reason that a lens will split white light into its constituent colours. This effect is called axial, or longitudinal, chromatic aberration, and it presents problems for lens designers. Looking at a basic convex lens, when parallel rays of white light enter one side of the lens the light is split into its constituent colours, with each colour having a different focal point depending on its wavelength. Shorter wavelength colours, at the ultra-violet end of the spectrum, are refracted more and have a shorter focal point.

Correcting axial chromatic aberration
In order to correct the axial chromatic aberration caused by one lens element, you need to add another lens element with the opposite error. The required overall effect of most lens designs is to produce a perfect convex lens; however, a perfect convex lens does not exist. By sticking a concave lens onto the convex lens you can eliminate the axial chromatic aberration of the basic convex lens. This is why lens designers stick lens elements together. However, simply sticking a concave lens with the opposite effect onto a convex lens also eliminates the focusing effect, and the pair ends up behaving like a flat piece of glass! The trick is to use a convex lens and a concave lens with different refractive indices. Thus, although the chromatic aberration is eliminated, the two lenses together still focus to a point.
Figure 37 Lens doublet

This design is called an achromatic doublet.

Figure 38 Various lens doublet designs

Achromatic doublets come in many different forms, depending on the particular use for which they are intended. The important thing is that the convex element is fatter in the middle than at the edge, and the concave element is thinner in the middle than at the edge.

Lateral chromatic aberration
Lateral chromatic aberration is a less obvious problem than axial chromatic aberration. It arises from the same limitation of lens elements but affects the image laterally. It causes fringing near the outer edges of images, where the different colours have been split apart. Lateral chromatic errors affect lenses with very long or very short focal lengths, i.e. long telephoto lenses and fish-eye lenses.

Correcting for lateral chromatic aberration
Lateral chromatic aberration in telephoto lens designs can be reduced by not using refractive elements in the design: mirror lenses use curved mirrors instead of lenses, so there is no refraction and therefore no dispersion. The other method is to use a low dispersion material such as fluorite. However, this material is difficult and expensive to work, and is affected by normal air. Fluorite lens elements can therefore only be used as internal elements, where they can be protected by a normal glass element.
Spherical aberration
As shown before, the two sides of most lenses are designed as parts of a sphere. This makes manufacture easy and is perfectly good in most cases. However, making lens sides as parts of a sphere is not actually correct: rays through the edge of the lens focus at a different point compared to rays through the middle. Most lens elements are small enough for this not to be a problem. However, lens designs with large elements, such as some television camera lenses, lenses intended for dim lighting conditions and some wide angle lenses, can suffer from spherical aberration. One answer is to use lens doublets or triplets where the spherical aberration of one element is eliminated by another.

Aspheric elements
Another answer is to use lenses whose sides are not part of a sphere. The perfect lens is slightly flatter at the edge than in the middle, making the refractive power of the lens greater nearer the middle. These so-called aspherical lenses are difficult to produce, especially consistently and to high quality, making lens designs with good spherical aberration characteristics more expensive.

Figure 39 The aspherical lens

Coma
Coma is a distortion effect that shows up as fuzziness at the edge of the image. It is caused by spherical aberration, but shows itself in rays of light passing through the lens at a sharp angle.

Properties of the lens

The principal element
Although lenses are made up from a collection of convex and concave lens elements, doublets and triplets, and even mirror elements, they can all be thought of as a single perfect convex lens element. Lenses use
many different elements to correct aberrations, reduce size and allow the lens to be controlled. The theoretical single perfect convex lens element is referred to as the principal element. Its position is called the principal point.

Focal point
The focal point is where light from infinity (i.e. parallel light) is brought to a single point. Convex lenses and concave mirrors have a real focal point; concave lenses and convex mirrors have a virtual focal point.

Focal length
The focal length can be found from the formula:

    1/f = 1/u + 1/v

where f = focal length, u = object distance and v = image distance.

This can be simplified when the object is at infinity: the equation becomes f = v. So a simple definition of focal length is the distance from the principal element to the focal point.

The focal length affects the field of view of a lens, and its magnification. A lens with a short focal length has a wide field of view and low magnification, and is called a wide-angle lens. A lens with a long focal length has a narrow field of view and high magnification, and is called a telephoto lens.

Aperture
The aperture of a lens is a way of expressing the amount of light passing through it. Its maximum value is limited by the lens's pupil, or iris. Aperture is controlled by a multi-bladed iris mechanism. The larger the aperture, the more light is passed through the lens.

The aperture is normally expressed as the f-number, defined mathematically as:

    N = f/d

where f = focal length and d = diameter of the entrance pupil. An aperture of 2 would normally be written as f/2.

The larger the aperture, the smaller the f-number. When the f-number doubles, the light passing through the lens is reduced by a factor of 4. The markings on a lens are therefore normally indicated in steps of about 1.4 (the square root of 2), i.e. 1, 1.4, 2, 2.8, 4, 5.6, 8 and so on. Each step, or 'f-stop', represents a halving of the light.
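The thin-lens formula, the f-number definition and the f-stop series can all be checked numerically. The example distances are illustrative assumptions.

```python
# The thin-lens relation 1/f = 1/u + 1/v, solved for the image distance v,
# and the f-number N = f/d. The f-stop series multiplies N by sqrt(2) per
# step; the conventional marking 5.6 is the rounded form of 5.66.

def image_distance_mm(f_mm: float, object_mm: float) -> float:
    return 1 / (1 / f_mm - 1 / object_mm)

def f_number(focal_mm: float, pupil_diameter_mm: float) -> float:
    return focal_mm / pupil_diameter_mm

print(round(image_distance_mm(50.0, 5000.0), 1))   # 50.5: as u grows, v approaches f
print(f_number(50.0, 25.0))                        # 2.0: written f/2
print([round(2 ** (k / 2), 1) for k in range(7)])  # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0]
```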
Depth of field
The depth of field is the range of object distances for which the image is within a permissible degree of sharpness. Only objects at the focused distance are perfectly in focus; objects closer and further away are slightly out of focus. The depth of field is therefore not an absolute figure, but is derived from the concept of a circle of confusion.
Figure 40 Depth of field

Depth of field is dependent upon the focal length and the aperture of the lens. A long focal length (telephoto) lens has a small depth of field. The smaller the aperture (the bigger the f-number), the larger the depth of field.

Figure 41 Change in depth of field with aperture

The concave and convex mirrors
Concave and convex mirrors are used a great deal in optics. They provide an alternative to lenses, without all the disadvantages associated with light as it refracts through glass (or whatever the lens is made from).
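These qualitative rules can be put in numbers via the hyperfocal distance. The formula H = f²/(N·c) + f and the circle-of-confusion value c = 0.03 mm (a common 35mm-format figure) are standard assumptions, not taken from the text; a smaller H means more of the scene is acceptably sharp.

```python
# Hyperfocal distance sketch: when focused at H, everything from H/2 to
# infinity is within the circle of confusion, so a smaller H = more depth
# of field. All figures below are illustrative.

def hyperfocal_mm(f_mm: float, n: float, coc_mm: float = 0.03) -> float:
    return f_mm ** 2 / (n * coc_mm) + f_mm

# Stopping down (bigger f-number) increases depth of field:
print(round(hyperfocal_mm(50.0, 8) / 1000, 1))   # 10.5 (metres)
print(round(hyperfocal_mm(50.0, 16) / 1000, 1))  # 5.3
# A longer focal length decreases it:
print(round(hyperfocal_mm(100.0, 8) / 1000, 1))  # 41.8
```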
Mirrors have the opposite effect on incoming light to lenses. Concave lenses disperse incoming light; concave mirrors focus light to a point. Convex lenses focus light to a point; convex mirrors disperse light.

Mirrors are very useful for long lenses: they allow a lens design to be 'folded', reducing the overall length of the lens.

Lens types

Normal lens
A normal lens is one which produces an image equivalent to the image from the human eye. This is a little subjective, as the image seen by the human eye is greatly distorted; however, the normal lens aims to produce an image with nearly the same level of magnification, image distortion and perspective as the centre of view of the human eye. As a rough guide, the focal length of a normal lens is approximately the same as the image diagonal. For a 35mm camera a normal lens is one with a focal length of 50mm. Normal lenses are also able to attain a lower aperture f-number, partly because the optics are better at this focal length, but also because the mathematics for calculating the f-number depend on the focal length and are more favourable for the normal lens.

The telephoto lens
The telephoto lens is a lens with a focal length greater than the normal lens, although the term 'telephoto' is normally reserved for lenses with a focal length greater than twice that of the normal lens. For a 35mm image size, a telephoto lens is considered to be one with a focal length of more than 100mm. Telephoto lenses can magnify objects from a long distance. They have minimal image distortion, and compress perspective.

Wide-angle lens
A wide-angle lens is one with a focal length smaller than the normal lens. It has a wide field of view, permitting a wide vista to be captured. For a 35mm camera a wide-angle lens is one with a focal length smaller than 50mm, although the term 'wide angle' is normally reserved for lenses with a focal length smaller than about 30mm.
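The 35mm-format guideline figures above can be summarised as a quick classifier. The boundaries are the text's rough guidelines, treated here as hard cut-offs for simplicity.

```python
# Rough 35mm-format classification: 'telephoto' is reserved for more than
# about 100mm and 'wide-angle' for less than about 30mm, with the ~50mm
# normal lens and its neighbours in between.

def classify_35mm_lens(focal_mm: float) -> str:
    if focal_mm > 100:
        return "telephoto"
    if focal_mm < 30:
        return "wide-angle"
    return "normal range"

print(classify_35mm_lens(200))  # telephoto
print(classify_35mm_lens(24))   # wide-angle
print(classify_35mm_lens(50))   # normal range
```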
Wide angle lenses make objects appear unnaturally small; they distort the image and stretch perspective.

The fisheye lens
A fisheye lens is an extreme wide-angle lens. As the focal length becomes shorter it becomes increasingly difficult to maintain a geometrically correct image, i.e. one in which straight lines stay straight. When this design target is abandoned, the image becomes curved at the edges but a very wide view becomes possible. At the extreme it is possible to have a fisheye lens with a totally circular image, and an angle of view of more than 180 degrees.

The zoom lens
A zoom lens is a lens with a variable focal length. For a 35mm camera it is common to use a 30-100mm zoom lens, which can change the field of view from wide angle to telephoto. For television camera lenses it is common to have a zoom lens with a focal length range of 8mm to 150mm. For specialist applications, such as sports, there are lenses with zoom ratios of more than 40:1.

Prime lens
A prime lens is any lens that is not a zoom lens, i.e. any fixed focal length lens; prime lenses are so called because of their superior quality. While zoom lenses are very versatile, and their quality has reached remarkable levels in the last decade or so, they are still a compromise. The best quality prime lenses are always better quality than the best quality zoom lenses.

Mirror lenses
Mirror lenses use a combination of normal lenses and mirrors. Telephoto lenses are sometimes mirror lenses: mirrors allow compact designs for telephoto lenses with very long focal lengths. A characteristic of mirror lenses is that anything out of focus appears as a donut shape, rather than a simple blur.

Extenders and adaptors
There are various extenders and adaptors that can be fitted between the lens and the camera. These can be used to allow lenses with one mounting scheme to be fitted to a camera with another mounting scheme. They can also be used to alter the characteristics of the lens.

Mount adaptors
Mount adaptors allow lenses intended for one mount to be fitted to a camera with another mount. Mount adaptors are popular for still cameras, where there are a lot of different mounts. Optics tend to suffer because the lens is pushed away from the camera and the back flange-to-film distance is no longer optimal. All camera manufacturers have mechanical and electrical connections between the lens and camera to allow the camera to control the lens.
These connections are very specific to the manufacturer. Adaptors cannot guarantee to provide a match for these connections between the lens and camera. Some lens manufacturers offer lenses with no specific mount. These lenses are designed slightly shorter than they should be. You select which mount you want and the appropriate adaptor is fitted, building the lens up to the correct length. Mechanical and electrical connections are much more likely to work with this kind of mount adaptor.
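The focal lengths quoted for the lens types above map directly to angle of view. As a rough sketch (not from the original text, and assuming an illustrative 9.6 mm wide 2/3-inch class sensor), the horizontal angle of view of a rectilinear lens is 2·atan(sensor width / 2 × focal length):

```python
import math

def angle_of_view(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view, in degrees, of a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# 9.6 mm is an illustrative width for a 2/3-inch class broadcast sensor.
for focal in (8, 50, 150):
    print(f"{focal:3d} mm -> {angle_of_view(focal, 9.6):5.1f} degrees")
```

At 8 mm this gives roughly a 62 degree view, while at 150 mm only about 4 degrees, which is why a single 8-150 mm zoom covers wide angle through to telephoto.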
2x, 3x etc. adaptors

This kind of adaptor increases the focal length of the lens. The simplest of these is little more than a tube pushing the lens away from the camera and boosting the focal length as a result. The better ones have lens elements in them to improve the optics. No matter what the adaptor, it is always a compromise: fitting a 2x adaptor to a 25mm lens will never attain the quality of a 50mm lens. However, adaptors provide a way of effectively doubling the number of lenses you have, with only a marginal reduction in quality. Mechanical and electrical quality can vary just as the optical quality does. Some adaptors are able to transfer the mechanical and electrical connections between the lens and camera better than others.

Filters

Filters are sometimes used to correct something in the picture, to protect the camera from damage, or to add some kind of special effect. A filter can be placed in front of the lens or built in behind the lens.

Built-in filters

Filters placed behind the lens are always built in, because they would otherwise push the lens away from the camera and alter its optical characteristics. Two types exist: camera built-in and lens built-in. Lens built-in filters are used if a filter cannot be put in front of the lens. This is particularly true of ultra-wide angle and fisheye lenses, because the front lens element tends to protrude from the front of the lens. A slot somewhere at the back of the lens allows glass or gelatin filters to be slotted into the lens. Camera built-in filters are common in video cameras. There are often about 5 of these filters built into a wheel. By turning a small knob on the camera the camera operator can turn the wheel and bring different filters between the lens and the sensor. Neutral density filters are used to cut the amount of light in bright conditions. Yellow tinted filters are used to correct the colour temperature for daylight operation.
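Light loss through extenders and neutral density filters is usually reckoned in stops, each stop halving the light. A small sketch (the figures are illustrative, not from the text): a 2x extender spreads the same light over four times the area, costing two stops, and an ND filter of density d transmits 10^-d of the light:

```python
import math

def stops_lost(attenuation_factor: float) -> float:
    """Stops of light lost for a given attenuation factor (1 stop = half)."""
    return math.log2(attenuation_factor)

def nd_stops(density: float) -> float:
    """Stops lost through a neutral density filter of the given density."""
    return stops_lost(10 ** density)

print(stops_lost(4))             # 2x extender: 4x less light = 2.0 stops
print(round(nd_stops(0.6), 2))   # an ND0.6 filter: roughly 2 stops
```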
Front filters

There are a myriad of different filters that can be fitted in front of the lens. Professional still camera users can screw filters directly to the front of the lens. These screw-in filters are specifically designed for the diameter of the front of the lens. If a screw-in filter cannot be found, an adaptor can be fitted to the front of the lens. Once fitted, this allows a wide range of standard square filter sheets to be placed in front of the lens, removing the need to find a screw-in filter of the correct diameter. This method is popular in movie cameras and video cameras. The adaptor is generally called a matte box.
Part 10 Early image sensors

Selenium detectors

Selenium was the first photoelectric material to be found, in 1873. It was used in the first mechanical experiments on television, like the Nipkow disk system. Selenium is classed as a photoconductive material because its resistance changes when exposed to light. Photovoltaic materials, by contrast, produce a voltage potential across themselves under the influence of light.

The Iconoscope

The Iconoscope was the first image sensor of any commercial importance. It consisted of an evacuated glass enclosure with a tube fixed to it enclosing an electron gun. The main enclosure had a screen made from a sandwich of photosensitive particles, called a mosaic, a thin mica insulation layer and a conductive sheet backing. The plate acted like a capacitor.

Figure 42 The Iconoscope

Light from the lens could enter the enclosure through a window and land on the mosaic, releasing electrons which were attracted away towards the anode. Thus a positive charge image built up on the surface of the mosaic. The charge was proportional to the intensity of light. The electron gun fired electrons in a raster scan at the mosaic. Any positive charge was cancelled by absorption of electrons from the beam. This absorption was detected by the conductive plate and output as a signal at the signal electrode. The rest of the electrons bounced off the mosaic to be picked up by the anode and drawn away to the anode electrode.
The Orthicon tube

The Orthicon tube was invented by Iams and Rose at RCA in 1939. The Iconoscope used a high velocity electron beam which gave rise to secondary emission of electrons, and this effectively reduced the tube's ability to catch all the electrons released purely by photoemission. The Orthicon tube had a much reduced anode voltage. This tended to make the mosaic saturate with electrons when no light was present; any further electrons would not strike the surface. Thus no signal appears when there is no light, which produces better reproduction of black. Any electrons not striking the surface of the mosaic return back down the same path as the electron beam to soak away in a collector next to the electron gun. The low anode voltage and the resulting low velocity of the electron beam meant that the beam was subject to interference by stray electric fields near the mosaic. This resulted in a loss in resolution compared to the Iconoscope. The beam focusing and deflection were better than for the Iconoscope. A long focus coil was used, with either electrostatic or electromagnetic deflection. The beam retained its helical nature, which aided focusing. The beam was also deflected such that it always struck the target at a perpendicular angle.

Figure 43 The image orthicon tube

The Image Orthicon tube

The Image Orthicon was an improvement over the Orthicon. In this design light was focused onto a photocathode plate. The released electrons were attracted by an accelerating grid back into the tube towards a two sided glass target plate. Thus an image in electrons was formed on the target.
A thin mesh was placed in front of the target. The electrons from the photocathode penetrated straight through the mesh and onto the target. However, any secondary electron emission was soaked up by the mesh. The electron beam was a similar low velocity perpendicular design to the Orthicon's. It scanned a raster image on the back of the target. Any electrons not soaked up by the target were returned back down the beam path to be collected by the anode, next to the electron gun. Thus the return beam was a raster scan of the charge, and thus of the image.

The Vidicon tube

Figure 44 The Vidicon tube

The Vidicon tube was introduced by RCA in 1950. It used an antimony trisulphide target. This is a photoconductive material: the resistance across the target changes when it is exposed to light. Within certain limits the change of resistance is proportional to the intensity of light.

Figure 45 The Vidicon tube

The back of the target is scanned by an electron gun with the same basic design features as the low velocity perpendicular design used in the Orthicon tube.
The target is biased to the anode voltage. As the beam strikes the back of the target, current flow to the front is inversely proportional to the resistance, which is inversely proportional to the light intensity. Thus the anode bias voltage will alter as a raster scan of the image.

Variations on the Vidicon design

There were various improvements on the basic Vidicon tube design. The Plumbicon was introduced by Philips in 1962. It used lead oxide as a target material. The Saticon was another design, with a target made from arsenic, selenium and tellurium. The Diode Gun Plumbicon used a diode-type electron gun in place of the conventional gun. These later designs offered better resolution, greater contrast and better colour balance than the basic Vidicon.
Part 11 Dichroic blocks

The purpose of a dichroic block

The purpose of a dichroic block is to split an incoming colour image into its three primary colours. Most of the block is coated in black paint to stop light getting in, except for a window to let the incoming colour image in and three windows to let the outgoing primary images out. Dichroic blocks are fitted just behind the lens of a colour video camera. A sensor is placed on each outgoing window, one for each primary. Each sensor measures the brightness of its primary and sends out a video signal for that primary.

Mirrors and filters

Various designs have been created over the years, though most designs are now beginning to look very similar. Two basic design patterns are now in use. The first is generally simply called a prism block or dichroic block. The other is called a cross block or X block.

Conventional dichroic blocks

The conventional prism block consists of at least three prisms, glued together with a transparent epoxy cement. Light enters the incoming window at the front and passes through the first prism. The back surface is angled, and is coated with a red dichroic mirror. The choice of material, and its thickness, define the colour that is reflected. Manufacturers can 'tune' the dichroic mirror by altering the coating thickness. The red light is reflected once more off the front of the first prism and out through the red outgoing window. There is a red trim filter to trim the light before it strikes the red sensor.

Figure 46 Dichroic block
The cyan light passes through the second prism. The back of this prism is coated with a blue dichroic mirror that reflects blue light, letting everything else (green) through. The blue light passes out through the blue outgoing window, through a blue trim filter and onto the blue sensor. Likewise the remaining green light passes out through the green outgoing window, through a green trim filter and onto the green sensor. It is worth noting that the three sensors are basically the same device. Some manufacturers may carefully select sensors that have the best performance for each colour; most will not.

Cross dichroic block

The cross dichroic block consists of four small triangular prisms, glued together to make a small cube with two intersecting planes. Some faces of each prism are coated with dichroic mirrors or trim filters.

Figure 47 The cross dichroic block

Light enters the front of the block. One of the intersecting planes is a blue dichroic mirror, the other a red dichroic mirror. Blue light reflects off the blue dichroic mirror and out from the left side through a blue trim filter. Red light reflects off the red dichroic mirror, passing out the right side through a red trim filter. The remaining light is green, and passes out the back of the block through a green trim filter.
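The sequential splitting performed by a prism block can be sketched as routing each wavelength past two mirrors in turn. The cut-off wavelengths below (600 nm and 500 nm) are illustrative assumptions, not figures from the text:

```python
def dichroic_split(wavelengths_nm):
    """Idealised prism block: the first mirror reflects red, the second
    reflects blue out of the remaining cyan, and green passes through."""
    red, green, blue = [], [], []
    for wavelength in wavelengths_nm:
        if wavelength > 600:
            red.append(wavelength)     # reflected by the red dichroic mirror
        elif wavelength < 500:
            blue.append(wavelength)    # reflected by the blue dichroic mirror
        else:
            green.append(wavelength)   # passes both mirrors
    return red, green, blue

print(dichroic_split([450, 550, 650]))   # ([650], [550], [450])
```

A real dichroic coating has a gradual transition around its cut-off, which is one reason trim filters are still needed in front of each sensor.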
The cross dichroic block has become popular recently because of its compact and simple design. However, it has one major drawback. The intersection between the four prisms causes a small vertical line on the outputs. Although the light is out of focus as it passes the intersection, and careful manufacture can make this intersection as tight as possible, this is the main reason the cross block is not used on professional and broadcast video cameras.

Optical requirements of a dichroic block

Every optical path, from the incoming window to each outgoing window, must be identical. This is essential, because the lens will focus through the dichroic block and onto the surface of the sensors behind. If one of the optical paths is different, that particular primary colour will be out of focus. The position of the sensors is also critical. All three sensors must be mounted in exactly the same place relative to their own windows. If there is any error, that primary colour image will be in a different position to the other two, and recombining the three primary images on the monitor will be practically impossible.

Variation on a theme

Most dichroic block designs are now tending to look similar to the two designs mentioned above, but some vary slightly. Some designs swap the positions of the red and blue dichroic mirrors. Some have slight variations in the angles of the prisms and the paths each primary colour will take. Most designs have blue and red dichroic mirrors. Both of these mirrors are relatively easy to make, because each has only one cut-off wavelength. Green dichroic mirrors are more difficult to make because they have two cut-off wavelengths designers have to worry about. Some blocks have a fourth or fifth outgoing window. This may be used for a monochrome viewfinder output for the camera operator, or for some of the camera's internal functionality, like auto focusing or metering.
Some specialist video cameras do not have standard primary colours at the outgoing windows. Security cameras may use infra-red for night vision. Video cameras used in food processing and monitoring also use infra-red to check the quality of food. These cameras may have one window in the dichroic block dedicated to infra-red.

Using dichroic blocks in projectors

The increased popularity of low cost video projectors has led to an explosion in the need for cheap, compact dichroic blocks. Quality is not so much of an issue with projectors, and the cross dichroic block is therefore very popular. Dichroic blocks are used the opposite way round from video cameras. Simple filters split light from the lamp into three primary beams. These three beams are passed into the dichroic block through light valves, where the sensors would be in a video camera. The light valves build up an image for each primary by shutting light on or off for each pixel. The dichroic block then combines the three primary images into one colour image that is projected out to the screen.
Part 12 CCD sensors

Advantages of CCD image sensors

When looking at the advantages of CCD image sensors, you have to realise what alternatives there are and what was used before these devices became available. Before CCD image sensors became popular, video and television cameras used some form of tube sensor. Plumbicon tubes were very popular for a while. Bearing these devices in mind, let us consider the advantages of CCD image sensors.

Compact design

The first and most obvious advantage of CCD image sensors is that they are considerably smaller than tube sensors. They allow very compact cameras to be made, which can be used in discreet surveillance and remote investigation in dangerous or confined places.

Light design

CCD image sensors are considerably lighter than tube sensors. They can weigh only a few ounces. This allows them to be designed into portable cameras without increasing the overall weight of the camera by any undue amount.

High shock resistance

CCD image sensors have no moving parts. They also have a very light duty mechanical construction that is highly resistant to acceleration and deceleration damage.

Low power consumption

CCD image sensors use a lot less power than older tube sensors. This makes them suitable for any battery powered device.

Good linearity

Linearity is important in measuring light levels accurately. Linearity means that the output signal is proportional to the number of photons of light entering the device. Film and tube sensors are highly non-linear, partly because of their low dynamic range. They give no output at all if the light level (number of photons) is too low, and saturate if the light level is too high, giving no further output if the light intensity increases further. CCD image sensors have good dynamic range, and good linearity over this range.
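The linearity and dynamic range points can be made concrete with a toy model. The floor and ceiling figures below are illustrative assumptions, chosen to match the dynamic ranges of roughly 100 for film and 10,000 for CCDs quoted in this part:

```python
def sensor_output(photons: int, floor: int, ceiling: int) -> int:
    """Idealised sensor: no output below the noise floor, linear in
    between, saturated above the full-well ceiling."""
    if photons < floor:
        return 0
    return min(photons, ceiling)

levels = (50, 500, 20_000)
film = [sensor_output(p, floor=100, ceiling=10_000) for p in levels]
ccd = [sensor_output(p, floor=1, ceiling=10_000) for p in levels]
print(film)  # [0, 500, 10000]  - dim detail lost, highlights clipped
print(ccd)   # [50, 500, 10000] - linear over a much wider range
```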
Good dynamic range

CCD image sensors saturate in the same way as any light sensor, but the light intensity required to saturate these devices is generally much higher. CCD image sensors have no effective minimum; some specialised devices can measure in near total darkness. Typically, photographic film has a dynamic range of about 100. CCD image sensors achieve about 10,000.

High QE (quantum efficiency)

QE is the ratio of the number of photons of light detected to the number of photons that enter the device. Photographic film has a QE of about 5% to 20%. CCD image sensors have a QE of between 50% and 90%. This makes them very efficient, and thus very useful for dark environment monitoring and studies of deep space.

Low noise

CCD image sensors can in fact suffer from thermal noise. However, this noise is predictable and can be controlled or reduced. Cooling the image sensor using conventional cooling fins and a fan can keep thermal noise to very low levels. For specialist scientific imaging, a Peltier-effect heat pump or cooling by liquid nitrogen can reduce thermal noise to virtually zero.

The basics of a CCD

A charge coupled device (CCD) is sometimes referred to as a bucket brigade line. It consists of a series of cells. Each cell can store an electric charge, and the charge can be transferred from one cell to the next.

The line of buckets

A very good way of thinking about a CCD is to imagine a line of buckets. At one end is a set of digital scales where you can measure the amount of water you pour into the first bucket.

Figure 48 Single line of buckets

As soon as you pour the water from the digital scales into the first bucket, you transfer the water from every other bucket into the next bucket down the line. The water from the last bucket is poured into the digital scales at the other end, and measured. Of course it would be impossible to move the water from one bucket to the next all at the same time. You would probably need another set of buckets to store the water while you were transferring it.

Figure 49 The double line of buckets

The electronic reality

CCDs use a line of metal oxide semiconductor (MOS) elements, constructed on the same chip. Each element contains 2, 3 or 4 polysilicon regions sitting on top of a thin layer of silicon oxide. Polysilicon can be used as a charge holder or a conductor. Although it is not as good a conductor as metals like copper or aluminium, it is easy to fabricate and is transparent, which is useful when CCDs are used in cameras. Silicon oxide (glass) is a good insulator. These elements are fabricated on a p type doped silicon substrate. At each end of the line is a region of n type doped silicon. Connections are made to all the polysilicon regions and to the two n type doped regions.

Using the CCD as a delay line

CCDs have been very popular as semiconductor delay lines. They were used in many electronic designs before semiconductor memory became cheap and complex enough to be used instead.

Figure 50 The CCD delay line

CCD delay lines are essentially analogue; that is to say, the charge they carry is an analogue quantity. If a CCD delay line is to be used in a digital environment there must be a digital to analogue converter fitted to the input and an analogue to digital converter fitted to the output. The transfer of charge is, however, clocked: the CCD has a clock input which is used to transfer the charge from one MOS element to the next in the line.

How does the CCD delay line work?

The input signal is fed into the first polysilicon region. Using field effect principles, electrons are pulled from the n type doped region and collect under the insulation layer. The potential on the first region creates a potential 'well' that the electrons effectively fall into. Although there is a maximum charge that can be held in this potential well, the amount of charge is proportional to the time and the potential applied to the first region.

The potential between the first and second regions is then switched. This effectively moves the potential well from just underneath the first region to just underneath the second region. The first region becomes a potential barrier, and the charge is attracted to the second region. The potential between the second and third regions is switched next, and the charge is attracted to the third region. By switching the potential from one region to the next, the charge can be transferred along the line, sitting just underneath the insulation layer. This leaves the first region clear, and the next charge packet can be input to the line. When the charge reaches the last region it transfers to the n type doped region at the other end of the line and appears as an output signal.

2 region elements

CCD delay lines with 2 polysilicon regions per MOS element use the second region in each element in the same way you might use the spare buckets in the line of buckets. The charge is transferred to the second region before being transferred to the first region of the next element. The disadvantage of the 2 region element is that the charge could flow the wrong way.
2 region elements employ special gates in the element and use a stepping transfer voltage to ensure the charge flows correctly. This all adds to the complexity and cost of this type of CCD. However, 2 region elements offer higher density than 3 or 4 region elements.
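The bucket brigade picture translates almost directly into code. A toy model of a 3-element delay line, ignoring charge loss and the per-region clocking detail:

```python
def clock(cells, sample):
    """One clock cycle: every packet shifts one cell along, the last
    packet is output, and the new input sample enters cell 0."""
    output = cells[-1]
    return [sample] + cells[:-1], output

cells = [0.0, 0.0, 0.0]                  # a 3-element line, initially empty
outputs = []
for sample in [0.5, 0.9, 0.1, 0.0, 0.0, 0.0]:
    cells, out = clock(cells, sample)
    outputs.append(out)

print(outputs)   # each sample re-appears exactly 3 clocks later
```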
Figure 51 The 2 region element CCD delay line

3 region elements

CCD delay lines with 3 polysilicon regions per MOS element are able to ensure that the charge flows in the correct direction from one element to the next.

Figure 52 The 3 region element CCD delay line

The charge is pulled from the left region to the centre region, then from the centre region to the right region.
However, clock phasing is more complex than for both the 2 and 4 region designs.

4 region elements

4 region elements have simpler clocking signal arrangements than 3 region designs and have better charge transfer capabilities, but it is more difficult to achieve high density devices.

Figure 53 The 4 region element CCD delay line
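Why three phases guarantee one-way flow can be sketched as follows: every third gate shares a phase line, and when the next phase is raised the packet's only raised neighbour is the gate ahead. A toy model (gate numbering and the drive sequence are illustrative, not a device simulation):

```python
def transfer(position, high_phase):
    """Move a packet from gate `position` to an adjacent gate whose
    phase line (gate index mod 3) is currently high."""
    ahead, behind = position + 1, position - 1
    if ahead % 3 == high_phase:
        return ahead
    if behind >= 0 and behind % 3 == high_phase:
        return behind
    return position

position, history = 0, [0]
for high_phase in (1, 2, 0, 1, 2, 0):    # clock the three phase lines in order
    position = transfer(position, high_phase)
    history.append(position)

print(history)   # [0, 1, 2, 3, 4, 5, 6] - two whole elements, never backwards
```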
Using CCDs as image sensors

The basic principles

Metal oxide semiconductors are sensitive to light. If light enters the substrate of a MOS device, under certain conditions it excites electrons in the silicon into the conduction band. Put simply, electrons are shaken loose by light. The elements used in image sensors are similar to those used in CCD delay lines. They can be 2, 3 or 4 region elements.

Sensing light

When used as an image sensor, a positive voltage is applied to the first polysilicon region in each element. This develops a small potential well just under the insulation layer.

Step A – Exposure

As light penetrates the p type substrate of the CCD it shakes electrons loose. The loose electrons in the vicinity of the potential well fall in and are trapped, forming a small collected charge. The stronger the light level falling on that element, or the longer the time allowed, the greater the number of loose electrons, and the greater the stored charge.

Figure 54 The CCD delay line as an image sensor

Steps B, C & D – Transfer

When the CCD sensor has been exposed to the image for the required time, the charges stored under each element have to be transferred to the end of the line, where they can be sensed and output. In Step B the potential of region 2 is raised. Now the potential well extends over two regions and the charge spreads to fill the space.
In Step C the potential of region 1 is lowered and region 3 is raised. The potential well now occupies regions 2 and 3, and the charge is pulled across so that it sits under them. In Step D the potential of region 2 is lowered and the potential of region 1 of the next element is raised. The potential well now occupies region 3 and region 1 of the next element, and the charge is pulled across again. Steps B, C and D are repeated until the whole line has been transferred, element by element, to the output gate at the end. When this has been done and the whole line is empty of charge, exposure can begin again.

The arrangement of MOS elements

CCDs used as image sensors comprise a matrix of MOS elements. The elements are laid out in columns. Each column is similar to a CCD delay line. There are many columns in the matrix. The number of elements in each column and the number of columns define the overall resolution of the device. Each element corresponds to a single captured point from the image, otherwise called a picture element or pixel. A system of channel stops is used to guard one column from the next. These prevent charge from one column leaking into the next.

Reading columns

Steps B, C and D above explain how each column is read. This would imply that a CCD sensor has an output gate at the end of each column. In fact CCD sensors have just one output. Therefore another CCD line is placed at the end of the columns, perpendicular to them all. This line is called a read-out register. The charge from the element at the end of each column is transferred to the elements in the read-out register. Column clocking then stops. Clocking now transfers the charges in the read-out register to the sense and output gate. When the read-out register is empty, column clocking can start again and clock the next charge from the columns into the read-out register.
This procedure carries on until the last charge in the columns has been clocked into the read-out register and from there to the output.

Similarity to raster scans

This method of reading one pixel from each column into the read-out register, then transferring them one by one to the output, is similar to a conventional television raster scan. CCD sensors therefore lend themselves very well to use as conventional television camera sensors.
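The whole column-plus-read-out-register sequence can be sketched for a tiny frame (a pure bookkeeping model, no device physics):

```python
def read_out(frame):
    """Read a 2D charge image the way a CCD does: clock every column
    down one place so the bottom row drops into the horizontal
    read-out register, then clock the register out pixel by pixel."""
    rows, cols = len(frame), len(frame[0])
    output = []
    for _ in range(rows):
        register = list(frame[-1])            # bottom row enters the register
        frame = [[0] * cols] + frame[:-1]     # every column shifts down one
        while register:
            output.append(register.pop())     # pixel nearest the gate leaves first
    return output

charges = [[1, 2],
           [3, 4]]
print(read_out(charges))   # [4, 3, 2, 1]
```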
Figure 55 Arrangement of MOS elements
Backlit sensors

Many sensors are now backlit. Rather than allowing light to enter the front of the sensor, passing through the region gates and the insulation layer, the whole sensor is turned over and light passes directly into the substrate from below.

Figure 56 Backlit sensors

Substrate thickness

The substrate is conventionally thick. This makes production easier: manufacturers only work on the top surface, so the thickness of the sensor's chip is irrelevant and therefore one less thing they have to worry about. Thick substrates also make for a more robust sensor. The problems with thick substrates are twofold. Firstly, the electrons loosened by the light are a long distance from the potential wells created by the region gates. Secondly, there is a risk that electrons loosened by the light do not fall into the correct potential well.

Back thinning

Backlit sensors now tend to be only about 15 µm thick. This makes sure that the area of substrate where electrons are loosened by incoming light is close to the potential wells. There is a greater chance that all the electrons will be caught, and that the electrons will fall into the correct potential well. The gate side of back thinned CCD optical sensors tends to be mounted on a rigid surface to make the whole device more robust. This surface is often reflective, to make the sensor more efficient by driving any light that leaks out of the back into the substrate.
Problems with CCD image sensors

CCD image sensors are not perfect. They can suffer from manufacturing defects and operational anomalies. A few of these are listed here.

Shorts

This is a manufacturing defect. Shorts can occur where the silicon oxide insulation breaks down or where any layer in the MOS elements has not been built properly. Shorts result in the improper collection of charge, or charge loss. If the collection of charge is damaged, individual pixels may be lost. If there is charge leakage there may be line smearing, as charge is lost through the short while the charge from each pixel is transferred down the line to the output, past the short.

Traps

A trap is a manufacturing defect where charge is not able to transfer successfully.

Thermal noise and dark current

As previously mentioned, CCD image sensors have very low noise characteristics if they are kept cool. However, if their temperature rises, thermal noise rises correspondingly. This can give rise to a number of other problems, but overall will affect the quality of the image capture process. Electrons freed by thermal activity are attracted to the potential well under each pixel. Thus charge develops even if there is no light falling on the sensor. This gives rise to the term dark current.

CTE (charge transfer efficiency)

CCD sensors must transfer the charge from one element to the next in the line as efficiently as possible. Imagine a CCD image sensor with 1024 by 1024 pixels. Charge from the far end of the furthest line of the device will be transferred 2048 times before it reaches the output sense and gate. If there were a 90% CTE in the device, the charge would have dropped to about 10^-94 of its original value, i.e. essentially nothing! This is clearly not a good thing. CCD image sensors generally have CTEs better than 99.999%. With a 1024 by 1024 sensor this still means that the charge in the furthest pixel has dropped by about 2%.
While this is significantly better than in the case of a 90% CTE, it is still a problem in accurate light measurement situations. As sensors increase in resolution, CTE ratings must be kept as close to 100% as possible.
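The CTE arithmetic is easy to verify. Using the 1024 by 1024 example, charge makes up to 2048 transfers, and the surviving fraction is simply the CTE raised to that power:

```python
def surviving_fraction(cte: float, transfers: int) -> float:
    """Fraction of a pixel's charge left after `transfers` shifts,
    each with charge transfer efficiency `cte`."""
    return cte ** transfers

transfers = 1024 + 1024        # down the column, then along the register
print(surviving_fraction(0.90, transfers))      # ~1e-94: essentially nothing
print(surviving_fraction(0.99999, transfers))   # ~0.98: about a 2% loss
```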
Chroma filtering and bad QE in front lit devices

Light passing into the sensor substrate passes through the region gates and insulation layer.

Using polysilicon regions

Making the regions from polysilicon rather than from aluminium allows light to pass through them. All front lit sensors use polysilicon region gates.

Filtering effects

Light passing through the regions, even if they are made from polysilicon, and through the insulation layer is filtered. The filtering is non-linear: light at the blue end of the spectrum is attenuated more than at the red end.

Bad QE

Filtering effects not only make the sensor's characteristics non-linear, but also reduce its QE. This makes front lit sensors less effective where accurate light measurement is required.

CCD image sensors with stores

The problem with the design described so far is that after the sensor has been exposed and all the pixels charged, you have to wait for the charges to be read out. This takes a while.

FT sensors

In the FT (frame transfer) sensor design each column is twice as long. Half of each column is exposed. The other half acts as a temporary store, and is covered by an aluminium mask. After the sensor has been exposed to the image, the charges are transferred quickly into the temporary store. The sensor can then start exposing the next frame while the frame that was just exposed is output through the read-out register.
Figure 57 FT sensor

IT sensors
In the IT (interline transfer) sensor design all the pixel charges are transferred into read-out gates, and from there into separate column CCD lines. These are called vertical read-out registers, and they are masked. The charges can be transferred into the vertical read-out registers very quickly, leaving the sensor free to start exposure again. The vertical read-out registers can then be transferred into the horizontal read-out register in the normal way.
Overflow gate technology and shuttering
With the introduction of IT sensors came the introduction of an overflow gate. This gate is placed on the opposite side of each sensor gate from the vertical read-out register. It can be used in a number of ways.

Figure 58 IT sensor
Using the overflow gate to eliminate flare and burnout
When the overflow gate is closed it will not draw any charge away from the sensor gate. After exposure all this charge can be drawn away by the vertical register. However, if light levels get too high the sensor gate will become flooded. Any charge above a certain level will not be drawn away by the vertical register and the device will peak, causing 'burnout' in the image. Furthermore, the excess charge will leak out of the affected gate and into the surrounding gates, spreading the perceived brightness beyond the actual bright area. Therefore the overflow gate is never actually closed altogether. In its 'off' mode it will still draw charge from the sensor gate, but only if the amount of charge becomes excessive. This prevents the gate from peaking and stops the flood of charge leaking into other gates.

Using the overflow gate for shutter opening and iris control
If the overflow gate is opened, any charge building up underneath the sensor gate as a result of light exposure will be immediately drawn away into the overflow gate. This effectively switches the device off in the same way as a mechanical shutter would. This can make the sensor behave a little like a movie camera with a variable shutter wheel. It is also used in applications like CCTV as an electronic iris, assisting the mechanical iris in the lens itself.

Using the vertical register and overflow gate for shutter closing
To simulate the shutter closing, the accumulated charge under the sensor gate can be drawn into the vertical register. At the same time the overflow gate is opened, so that any further charge is drawn away from the sensor gate.

FIT sensors
One problem with IT designs is that the vertical read-out registers are very close to the exposed regions of the device. If light levels are very strong it is possible for charge to leak from the exposed region of the sensor into the vertical register regions.
In the FIT (frame interline transfer) sensor design both FT and IT design philosophies are used. The charges built up in the exposed areas are transferred quickly to the vertical read-out registers, and then on into an FT type store. This takes them away from the exposed area, so that they cannot be corrupted if light levels are very high.
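The overflow gate behaviour described above can be sketched as a simple charge model. This is an illustrative toy model only, with made-up units, not a device simulation:

```python
# Toy model of one sensor gate with an overflow drain. In 'off' mode the
# drain still bleeds away any charge above a saturation threshold,
# preventing burnout and blooming into neighbouring gates; fully open, it
# drains everything, acting as an electronic shutter.
FULL_WELL = 1000.0  # arbitrary charge units at which the gate saturates

def accumulate(light_flux: float, exposure: float, shutter_open: bool) -> float:
    """Charge collected under the gate at the end of the exposure."""
    if not shutter_open:
        return 0.0                 # overflow gate fully open: all charge drained
    charge = light_flux * exposure
    return min(charge, FULL_WELL)  # excess bled off by the overflow drain

print(accumulate(50.0, 10.0, True))    # 500.0: normal exposure
print(accumulate(500.0, 10.0, True))   # 1000.0: clipped, no bloom to neighbours
print(accumulate(500.0, 10.0, False))  # 0.0: electronic shutter 'closed'
```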
HAD technology

Figure 59 The HAD sensor
Sony introduced a new technology for image sensors in 1984. This technology was a real departure from the conventional designs up to that date. Rather than using the photo-excitation technology of older designs, this new design uses an embedded photodiode for each pixel. The photodiode has a heavily doped p type region called a p++ region. p++ doped regions have a high number of accumulated holes, hence the name Hole Accumulated Diode, or simply HAD. The HAD increases the number of electrons that are released as light enters the device. These electrons flow down into the device's substrate and collect underneath the HAD. HAD sensors also have the advantage that light does not have to pass through polysilicon regions to reach the diode. This makes the sensor more efficient and more linear.

HAD sensor operation
HAD sensors comprise an array of HADs. An insulation layer of silicon dioxide is laid on top of the HADs and a thin aluminium photo mask is printed on top of the insulation. The photo mask prevents light from getting into the sensor except where there is a HAD. Light passes into the HAD and excites electron flow down into the n type substrate. At a certain time, when the image is sampled, the voltage on the polysilicon gate to the right of the HAD switches, creating a potential well that attracts the accumulated charge away from underneath the HAD. The polysilicon region is part of a chain of polysilicon regions that form a vertical register to transfer the accumulated charges out of the device. Charge is therefore transferred out of the device in the same way as in any other IT or FIT device. A region of p++ material is fabricated deeper into the device just to the left of each HAD. This acts as a channel stop, preventing charge from leaking from underneath each HAD into the vertical register to the left.

Problems with HAD devices
The first problem with HAD sensors is the increased manufacturing complexity.
However, in commercial terms this increased complexity, and its resulting higher cost, is more than offset by the increase in performance. Manufacturing techniques have also improved considerably over the years, making it easier to produce HAD devices reliably. The second problem with HAD devices is the same problem facing any IT or FIT device. Ideally the whole of the front of the device should be light sensitive, so that light hitting anywhere on the surface of the device is picked up and output as a signal. The amount of space taken up by the vertical registers, channel stops, polysilicon regions, etc. detracts from this light sensitive area. This problem is partially overcome in later designs.
HyperHAD
HyperHAD, sometimes called microlenticular technology, improves on the simple HAD device by fitting a small lens in front of each HAD. This channels in light from the area around the actual gate that would otherwise be lost, increasing the effective light gathering area of each HAD beyond the HAD itself. HyperHAD sensors were introduced in 1989 and increased the sensitivity and efficiency of HAD sensors.

Figure 60 The HyperHAD sensor

SuperHAD sensors
SuperHAD was introduced in 1997. It is basically similar to the HyperHAD design, but the lenses are larger, and are therefore able to capture more light, making SuperHAD sensors more sensitive than HyperHAD sensors.

PowerHAD sensors
PowerHAD is a marginal improvement on SuperHAD. The microlens structure is similar to that of SuperHAD, but the capacitance of the vertical registers is reduced.
Figure 61 The SuperHAD & PowerHAD sensor

PowerHAD EX (Eagle) sensors
Previously simply called New Structure CCD, and now sometimes called Eagle sensors, PowerHAD EX sensors have another lens placed between the on-chip microlens and the HAD. The microlenses are also larger. In fact they are so large that they overlap, leaving no area on the device whose light is not somehow concentrated into a HAD. This further concentrates light capture, increasing the efficiency of the sensor still further.

Figure 62 The Eagle sensor
The insulation layer between the polysilicon gate and the potential well in the substrate underneath is also thinner. This decreases the gate's capacitance and increases the 'depth' of the well, making it better able to collect the HAD's accumulated charge.

Figure 63 Lenticular designs

EX View HAD sensors
EX View HAD sensors are physically the same as any other HAD based sensor. However, the exact doping levels and construction of the HAD make it more sensitive to infra-red light. This makes EX View devices very appropriate for security and low light level cameras.

Figure 64 EX View HAD response

Single chip CCD designs
Professional and broadcast camera systems normally process the image they are looking at as three primary colours. (See Colour Perception.) This is important for good colour matching, and is essential if the camera's outputs are to comply with broadcast signal standards. (See Colour in Television.)
The split from the original image into three images, one for each primary colour, could be done in the camera's electronics. However, it is better to do the split optically. Therefore all professional and broadcast cameras have a three way dichroic splitter just behind the lens, and three CCD sensors, one on each output from the dichroic splitter. Each sensor is responsible for one of the primary colours. (See Dichroic Block Design.)

Figure 65 Single chip HAD design

However, this is either too expensive or simply not possible for smaller cameras and cameras intended for industrial and domestic use. Small security cameras simply do not have enough space for a dichroic block and three CCD sensors. The cost of the dichroic block and three CCD sensors would make domestic cameras simply too expensive. In any case the increase in quality would almost certainly not be appreciated. Therefore all these types of camera have one CCD sensor. The split from the original image into its primaries is still required, and is still best done optically. Therefore single chip CCD cameras have a filter screen fitted over the sensor.

CCD filter screens
CCD filter screens consist of an array of small coloured squares. The resolution of the CCD sensor and of the filter squares is the same. Thus each pixel in the sensor has one filter square.

Random filter screen
The human eye is very good at recognising patterns. Thus it may seem a good idea to design a filter screen with a random arrangement of squares of the three primary colours.
However, there is a chance that there will be discernible areas of one colour.

Figure 66 The random screen

The Bayer filter screen
A popular screen design is the Bayer screen. This screen has a greater number of green squares, because of the human eye's relatively high sensitivity to the green area of the colour spectrum. The Bayer screen is very popular in single CCD cameras.

Figure 67 The Bayer screen

The pseudo random Bayer screen
The problem with the Bayer screen is that there is a strong pattern. It may be possible, at certain low resolutions, for the human eye to pick out this pattern.
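The Bayer tiling, and the simple interpolation described later under 'Pixel interpolation', can be sketched in a few lines of Python. Plain neighbour averaging here is an illustrative stand-in; real cameras use more sophisticated demosaicing:

```python
# Bayer pattern: rows alternate G,R,G,R... and B,G,B,G..., so half the
# sites are green, a quarter red and a quarter blue.
def bayer_colour(row: int, col: int) -> str:
    if (row + col) % 2 == 0:
        return 'G'
    return 'R' if row % 2 == 0 else 'B'

for r in range(4):
    print(' '.join(bayer_colour(r, c) for c in range(4)))
# G R G R
# B G B G
# G R G R
# B G B G

def interpolate_green(mosaic, row, col):
    """Estimate green at a red or blue site by averaging the green neighbours."""
    neighbours = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    values = [mosaic[r][c] for r, c in neighbours
              if 0 <= r < len(mosaic) and 0 <= c < len(mosaic[0])
              and bayer_colour(r, c) == 'G']
    return sum(values) / len(values)
```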
Figure 68 The pseudo random Bayer screen

By jumbling the Bayer pattern in a particular way it is possible to retain the same ratio of the three primary colours as the basic Bayer pattern, but with no easily discernible pattern. This design also makes sure that there are no large areas of one colour, by ensuring that no two squares of the same colour sit next to each other.

Reading out from a single CCD camera
Filter screens need to be very accurately fitted to the sensor. If the filter position is defined and fitted exactly, it is possible to define which pixel is responsible for which primary colour. Reading single CCDs then becomes reasonably straightforward, if a little more complex than with triple CCD designs.

Figure 69 Pixel interpolation

The camera reads each pixel out in sequence in the same way. However, the read-out circuitry knows which pixel is responsible for each
primary colour, and sequentially demultiplexes all the pixels for each primary to a different part of the electronics for further processing.

Pixel interpolation
With a Bayer filter screen half the pixels are responsible for the green primary, a quarter for red and a quarter for blue. This looks like Figure 69, where each pixel shows the brightness of one primary colour at that point in the picture. Some single CCD cameras rebuild a full image of pixels for each primary by interpolating the pixels they do have to make up the missing pixels. Putting these three separate images back together gives a much more pleasing result. A newer approach is to follow the interpolation process with a small amount of sharpening to improve the perceived quality of the image. (If you squint at the three images, the sharpened one looks better!)

Noise reduction
Noise is a problem in any image sensor. The first area where noise is introduced is in the pixel itself, where thermal noise is an enduring problem (see page 110). The only way of eliminating this kind of noise would be to freeze the sensor to absolute zero to eliminate thermal electron movement. This is not practical, and the sensor would fail to work at all anyway! It is really down to good image sensor design to accept that thermal noise will exist and to reduce its effect.

Figure 70 Auto zeroing

Another area where noise can be introduced is during the charge transfer period, where the charge collected under each pixel is transferred to the output.
The last area where noise can be a problem is in the output gate itself, where the charge is placed into a capacitor and measured as a voltage. This capacitor needs to be carefully and quickly drained of any charge from a previous pixel, or from anywhere else, before the pixel charge is put into it.

Auto zeroing
Auto zeroing is the traditional way of cancelling noise in sensing comparators, analogue to digital convertors, and image sensors. It aims to pull the capacitor charge down to zero just before the pixel charge is put in. This is done by building a switch into the circuit just before the capacitor. The clocking circuit closes this switch just before the pixel charge is input. Auto zeroing switches need to have very low impedance if they are to work effectively.

Figure 71 Correlated double sampling

Double sampling
Double sampling (DS) is a method of eliminating the effects of noise in the sense capacitor. First the sense capacitor is zeroed, and set to a predetermined charge that is sensed as a reference voltage. Then the capacitor is zeroed again and the pixel charge placed across it. Noise will be common to both samples. Therefore any difference between the sensed reference voltage and the voltage this reference should be is noise, and it is subtracted from the pixel voltage.
Correlated double sampling
Correlated double sampling (CDS) eliminates some of the drawbacks of auto zeroing, and provides a simpler sampling method than DS. CDS places the operating range of the sensing capacitor away from zero, where it becomes noisy and difficult to sense accurately. CDS operates by placing a predetermined charge on the sensing capacitor, and sensing this as a voltage, just as with DS. The pixel charge is then input to the capacitor, without zeroing, and the voltage sensed again. The difference between the two values is the pixel value itself.

Figure 72 CDS with dummy pixels
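The essential property of CDS — that noise common to the two samples cancels in the subtraction — can be shown numerically. This is an illustrative model with made-up values, not a circuit simulation:

```python
# Correlated double sampling, sketched numerically. Reset noise and offset
# are common to the reference sample and the signal sample taken in the
# same cycle, so subtracting the two samples cancels them.
import random

def read_pixel_cds(pixel_charge: float) -> float:
    offset = random.gauss(0.0, 5.0)          # reset noise + offset, this cycle
    reference = 100.0 + offset               # sample 1: precharged capacitor
    signal = reference + pixel_charge        # sample 2: pixel added, no re-zero
    return signal - reference                # the common noise cancels here

random.seed(1)
print(read_pixel_cds(42.0))  # ~42.0, regardless of the random offset
```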
CDS and dummy pixels
One method of CDS involves sampling pixels outside the active region of the CCD array. Many CCD sensors are designed with a few pixels on the edge of the array that are not focussed on. These dummy pixels are masked and effectively give out a signal corresponding to black. The whole array is given a small reference precharge voltage. This pulls the whole array away from zero and gives the masked pixels a specific value. The output gate has a sample and hold circuit that samples the masked pixels and holds their average charge as an output voltage. The active pixels are routed to a separate sample and hold and compared to the reference.

Figure 73 CDS with dummy pixels and triple sample & hold

Triple sample and hold
A further improvement on the double sample and hold design is to add another sample and hold circuit directly after the reference sample and hold circuit. This seemingly silly idea removes transition errors within the sample and hold switches themselves. This is shown in Figure 73. Sample and hold circuits 2 and 3 are switched for every active pixel. Any switching transition errors in the active pixel circuit will be eliminated by circuit 3.

Correlated triple sampling
A further method of sampling the CCD array is called correlated triple sampling (CTS). This method is not used very much. It improves the noise cancelling effect of CDS by taking a third sample part way through the pixel reset period. This third sample allows more information to be gained about the noise pattern within the array.

Fowler sampling
A natural conclusion of the progression from DS, CDS and CTS is to take multiple samples. This is particularly useful for long exposures.
Multiple array samples are taken at the beginning and end of the integration time. These are averaged to eliminate noise.
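Fowler sampling can be sketched as follows. This is an illustrative model with made-up numbers: the signal is taken as the difference between the averaged end samples and the averaged start samples, which suppresses uncorrelated read noise:

```python
# Fowler sampling sketch: average several reads at the start and at the
# end of the integration period; their difference is the signal estimate.
import random

def fowler_signal(start_reads, end_reads):
    """Difference of the averaged end and start samples."""
    return sum(end_reads) / len(end_reads) - sum(start_reads) / len(start_reads)

random.seed(2)
true_signal = 50.0
start = [10.0 + random.gauss(0, 2) for _ in range(8)]              # pedestal + noise
end = [10.0 + true_signal + random.gauss(0, 2) for _ in range(8)]  # + signal + noise
print(fowler_signal(start, end))  # close to 50.0; averaging beats a single read
```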
Fowler sampling is not appropriate for broadcast camera design, as the exposure time is relatively short. This kind of sampling is more appropriate for imaging in low light situations, like space exploration.

The future of CCD sensors
CCD sensors, in one form or another, have gained universal dominance within the video camera world. They have completely replaced the older tube sensor designs. (See Early Image Sensors.) However, a newer technology that promises to replace conventional CCD sensor technology has begun to emerge: CMOS. Although CMOS technology has been around longer than CCD technology, it was difficult to produce. CMOS is now a very important technology for microprocessor design, digital signal processing chip design and many other types of chip design. Production processes are now reliable and versatile enough for some manufacturers to use CMOS for image sensors. However, the move from conventional CCD to CMOS technology will not be the revolution that the move from tube to CCD was. CMOS represents evolution rather than revolution.
Part 15 The video tape recorder

A short history
There are many different video tape formats. Some have been more successful than others. Some have been technically superior to others. There is often very little correlation between commercial success and quality or technical excellence.

Beginnings
AEG Magnetophons, Bing Crosby and the beginnings of Ampex
The video tape recorder has been in existence for about 50 years. The only way to record video before video tape recorders was to use film. Film did not lend itself well to television: it required a telecine to convert the film to a television signal. During the second world war John (Jack) Mullin was stationed in England as part of the Signal Corps. He became intrigued by the fact that the Germans were able to transmit propaganda and music in the middle of the night. The music was particularly interesting, as the quality was very good, but the music was orchestral. It seemed unlikely that this quality came from 78rpm record disks, and it seemed even more unlikely that entire orchestras were being employed to play every night. As the Allied forces pushed forward into Germany, Jack found the machine that was able to play out with such quality. It was an AEG Magnetophon. He grabbed two and proceeded to ship the mechanical parts back to the States using the war souvenir parcel service. He reassembled a machine, rebuilding the electronics and making a few improvements on the original AEG design. He demonstrated the improved Magnetophon at a number of venues. Bing Crosby saw one of the demonstrations and became interested. He desperately wanted to avoid doing live radio shows every day, and knew that the quality of the shellac based recording methods available at the time was so poor that audiences at home could tell the difference between a live show and a recorded one. Bing Crosby invested $50,000 in Ampex to have John Mullin's machines produced on a more commercial basis.
Ampex was a very small company based in Redwood City, California. It had been making electric motors for aircraft during the war and was looking for new projects to get involved in. Bing Crosby agreed to the investment on the basis that his company, Crosby Enterprises, would become the sole marketing channel for Ampex machines. Ampex audio recorders were very successful and allowed radio programmes to be pre-recorded and edited before going out, while still maintaining such quality that radio listeners could not tell the difference. In the meantime 3M had made significant advances in the formulation of magnetic tape, and Ampex had set up a division specifically to make tape to supply the rapid increase in demand.
The first commercial video recorders
At the end of the 1940s television was enjoying a jump in popularity. Television producers had to use modified film technology to record television shows, and the quality was poor. A new method of recording video was desperately required, so that producers could edit shows and delay transmission for different time zones. In 1949 John Mullin approached Bing Crosby and proposed that the same plan that had been used to design a commercial audio tape machine be used for video. A team within Ampex was put together and the first working machine was demonstrated in 1950.

Other early machines
In England the BBC began research work in 1952 on a video recorder they called Vera (Vision Electronic Recording Apparatus). By the mid 1950s other companies like RCA had started projects to develop a video tape recorder.

The 1970s
During the early 1970s several companies, including Sony, Teac and JVC, introduced a semi-professional format based on ¾" tape, called U-Matic. The 2" machine was superseded by machines using 1" tape during the mid 1970s, with companies like Sony and Bosch entering the fray. This format was truly helical, with the tape now wrapped around a drum which spun almost in line with the tape, rather than at right angles to it. The Bosch machines were ratified as the B format, while the SMPTE ratified the 1" tape standard as the more successful C format.

The 1980s
By the early 1980s ½" tape based broadcast machines began to appear. The most successful of these were the Sony Betacam and Betamax formats. Betacam, the broadcast format, was later improved with the introduction of Betacam SP (Superior Performance). The domestic ½" format, Betamax, eventually lost the commercial battle with the VHS (Video Home System) format, although it was technically superior. In the late 1980s digital video tape recorders began to appear with the D1 and D2 tape formats. A digital version of Betacam SP was introduced.
Called Digital Betacam, this format became a widely accepted standard for high quality digital television recording.

The 1990s
½" tape formats based on the original Betacam format were introduced during the second half of the 1990s. They brought MPEG compression to mainstream broadcast tape recording, along with the idea of high quality, low bit rate recordings and metadata, and offered a bridge between streaming technology (tape) and file based technology (computers).
The DV format was introduced during the mid 1990s. Originally designed as a digital replacement for VHS and Hi8, it has been more popular as a domestic camcorder format than for recording television programmes at home. In the professional and broadcast arenas manufacturers have squeezed extra quality and performance out of the DV format to produce the DVCam (Sony) and DVCPro (Panasonic) formats, suitable for more professional and broadcast use. DV and its high quality derivatives are helical scan systems; indeed there is actually little fundamental mechanical difference from the very first true helical scan video recorders. They are just a lot smaller.

The present day
The domestic arena
In the domestic environment VHS is still king, although its days are almost certainly numbered. The general quality of television output (from an image point of view) has been steadily increasing over the last few years and people are starting to realise just how bad VHS is. Even from a convenience point of view VHS is starting to look cumbersome and fragile. DV was designed as a possible replacement for VHS. However, manufacturers have never produced DV equivalents of the ubiquitous domestic VHS recorder, with remote control and timer functions. It looks likely that VHS will be superseded by recordable optical disk rather than tape.

The professional and broadcast arenas
The transition to hard disk
Broadcast television has seen an increased use of hard disk technology. Indeed, people have predicted for many years that tape will be replaced by disk, and yet new tape formats keep appearing, and broadcasters are still buying tape based technology. It is true that hard disk technology is being used a lot more than it used to be, and it is slowly replacing areas of the broadcast chain previously occupied by tape technology. Hard disk is now used heavily in post production and editing. However, tape is still cheaper and more robust than hard disk.
It is still a popular choice for acquisition and archive storage. All popular camcorders in use today use tape. It is removable, can be treated with a fair amount of disrespect, and is readily available. Archive and long term storage systems use tape, although video and audio material is now generally stored as digital data, and is often compressed to further save space on the tape. Tape robotics machines allow large numbers of tapes to be stored safely, with the advantage of automatic scheduling and database support, so that video and audio archives can be searched.
It is unlikely that hard disk will entirely replace tape. It is more likely that optical disk, using blue laser technology, will replace tape.

The cost/quality balance
Cost is now more of an issue than it ever was. The newer tape formats are a careful balance between cost and quality. 'Cost' means total cost: not just the price of the equipment, but also maintenance and running costs, commonly referred to as 'total cost of ownership'. 'Quality' means the image quality, as well as manufacturing quality and the quality of after sales service and support. Digital tape technology satisfies this careful balance much better than analogue tape technology. All major television companies now use digital tape technology, and almost all of these companies record new material in digital format. However, with large analogue tape archives, analogue tape players are still in popular use. For very high quality work D1 is still used. It is expensive, but offers a kind of quality not attainable with any other broadcast format. Many companies use Digital Betacam, and a few the M2 format. While these formats are slightly compressed, they still offer superb image quality at a much more realistic price. Betacam SX, DVCam and DVCPro are popular for news gathering, where convenience and price are more important. Indeed, domestic DV is being used in many professional areas.

The stream/file bridge
The major thrust in digital tape technology is in bridging the very difficult gap between streams and files. Video and audio are basically streams. They have no beginning and no end, and do not contain any kind of header, label, or other information. Video and audio are also strongly related to time. They are continuous and must be played at the correct speed without breaks. Files, on the other hand, are contained chunks of data. They have a header, information about the contents of the file, and so on. Files are also not related to time.
When copying files from one location to another it really does not matter how long it takes, how the data is actually transferred, or whether the beginning of the file gets to its destination before the end. With the increasing use of computer technology in broadcast, television companies require a way of bridging the gap between these two basic methods of storing media. Manufacturers are starting to produce tape recorders that can place extra information into the video or audio stream, much as a computer file can. This so called 'metadata' is the focus of a lot of research work. Manufacturers are also starting to introduce tape formats that can output video and audio in packets, or in file structures, so that they can be saved as files on hard disk and treated as files within the television station.
The MPEG IMX format is a good example of this. Although this format is essentially a stream recording system, just like the original Ampex VR-1000 of the mid 1950s, it is able to output a stream as a series of chunks, or packets of data. The E-VTR takes this one step further and allows sections of video or audio to be marked and played out as a file. Although the material was recorded from a video or audio connection, as a stream, locked to time, it can now be played out to a computer network as a file with associated metadata, at a speed governed by the network, in bursts, faster or slower than real time.
Magnetic recording principles
Although magnetic recording heads have got a lot smaller, the materials used have got better, and manufacturing tolerances a lot tighter, all video tape recorders depend on the same basic principles of recording a signal to magnetic tape.

Principle of a magnetic field
Man has known of the existence of magnets for several thousand years. The Chinese used them to invent the compass about one thousand years ago. However, the fact that an electric current develops a magnetic field was not discovered until much later. In 1820 Hans Christian Oersted discovered that a straight wire carrying an electric current develops a magnetic field which circulates around the wire. Andre-Marie Ampere discovered that the magnetic field could be concentrated and magnified by winding the wire into a coil. William Sturgeon later discovered that placing an iron core inside the coil greatly increased the strength of the magnetic field, and that bending the coil and iron into a 'U' shape further concentrated the field at the two ends of the 'U'. A little later Joseph Henry insulated the wire, enabling larger and tighter coils to be wound.

Figure 74 Magnetic field around a wire and coil

It is perhaps a pity that Oersted, Ampere and Henry have all had their names immortalised as units of magnetic field strength, current and inductance, while Sturgeon's name has sunk into relative obscurity. The magnetic field is known as flux, and its strength as the magnetic flux density. Flux finds less resistance through some materials than through others. Many materials are magnetic. This means that they become magnetised if subjected to a magnetic field. The ability to retain magnetisation is called remanence.
Figure 75 The toroid

Principle of electromagnetic induction

The opposite of the principle of a magnetic field is that of electromagnetic induction. When a magnetic field is applied to a wire it induces a current to flow in the wire. To be more exact, it is a changing magnetic field that induces a current. No matter how strong the magnetic field, no current will be induced if the field remains constant. Conversely, a small magnetic field can induce a large current if it changes rapidly.

Figure 76 The basic magnetic record head
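The induction principle can be put in numbers. The induced EMF is e = -N·dΦ/dt; for a sinusoidal flux Φ(t) = A·sin(2πft) the peak rate of change is 2πfA, so what matters is the product of flux amplitude and frequency, not the flux alone. The values below are illustrative:

```python
import math

# Induced EMF follows the *rate of change* of flux: e = -N * dPhi/dt.
# A strong but constant flux induces nothing; a tiny but rapidly
# changing flux induces a usable voltage.

def peak_emf(turns, flux_amplitude_wb, frequency_hz):
    # Peak EMF for a sinusoidal flux of the given amplitude and frequency.
    return turns * 2 * math.pi * frequency_hz * flux_amplitude_wb

# A strong flux that never changes (frequency zero) induces nothing:
print(peak_emf(100, 1e-3, 0))       # → 0.0
# A tiny flux (1 nWb) changing at 1 MHz induces a usable voltage:
print(peak_emf(100, 1e-9, 1e6))     # → ~0.63 V
```

This is exactly why playback heads work at all: the flux coupled from the tape is minute, but at video frequencies its rate of change is high.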
Using the qualities of magnetic materials in video tape recorders

The purpose of a video tape recorder is to use a record head to magnetise the magnetic material on the tape, and a playback head to detect this magnetisation.

Record heads use the same basic principles discovered by Sturgeon. By bending an iron core into a ring, a high flux density can be made to flow around the ring. A small gap is left in the ring. Flux jumps this gap, bulging slightly outwards, and can be used to magnetise the surface of the tape. Although a larger gap results in a greater flux bulge, which has a greater influence on the tape, it also offers greater resistance to the flux and therefore reduces the flux density. Thus the design of the record head meets its first compromise.

Playback heads use the same design. When the magnetised tape passes across the head gap it induces a small magnetic flux in the head core. As the flux changes, a small current is induced in the coils.

Record and playback head cores therefore need to be made from magnetic material with low flux resistance and low remanence. Conversely, the magnetic material used in tape needs high remanence, so that the maximum amount of signal can be recorded.

The essentials of helical scan

The bandwidth problem

Humans can hear audio from about 20Hz to about 20kHz; the bandwidth of audio is therefore about 20kHz. If we consider modulation or sampling, the Nyquist criterion doubles this to about 40kHz. However we look at it, the frequencies involved are well within the capability of magnetic tape and recording head technology using stationary record and playback heads.

Video is very different. Broadcast channels have a total bandwidth of about 6MHz. In component form we would expect to retain as much of the quality as possible, and give the luminance signal as near to 6MHz as we can. Each colour difference signal may have a bandwidth of about 3MHz.
Add these bandwidths together, take into account the Nyquist criterion, and any recording system will need at least 24MHz of bandwidth! If nothing else, these somewhat crude calculations show that we cannot record video on magnetic tape in the same way we do audio. Either the recording system has to be radically different, or the bandwidth must be reduced.

Head to tape speed

Key to the problem of bandwidth is the relative head to tape speed. In analogue audio recorders this is achieved by pulling the tape across a static head.
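The crude arithmetic above can be checked in a few lines, using the approximate figures quoted in the text:

```python
# Rough bandwidth requirement for recording component video,
# using the approximate figures quoted in the text.

NYQUIST = 2            # modulation/sampling doubles the bandwidth

luminance_bw = 6e6     # Hz
chroma_bw = 3e6        # Hz, each colour difference signal

video_total = NYQUIST * (luminance_bw + 2 * chroma_bw)
audio_total = NYQUIST * 20e3            # audio: ~20 kHz bandwidth

print(f"Video: {video_total / 1e6:.0f} MHz")   # → 24 MHz
print(f"Audio: {audio_total / 1e3:.0f} kHz")   # → 40 kHz
```

A factor of six hundred between audio and video is what forces the radically different mechanics described next.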
If the bandwidth of the signal is higher, one could simply increase the tape speed. This idea was tried in the first video recorders. Ampex were at the forefront of video tape recorder development and demonstrated video recorders using high tape speeds at the beginning of the 1950s. Other groups were also working on video recorder designs, such as the BBC's VERA (Vision Electronic Recording Apparatus), which ran through tape at 21 metres per second; with reels 21" in diameter, just 15 minutes could be recorded. In the States, RCA built a prototype that ran through tape at 9 metres per second. An improvement, maybe, but it gave only 9 minutes of recording time.

It became clear to all the groups working on video recorder designs that the tape speeds required for the bandwidths normally found in video made the machines difficult to control and used up vast amounts of tape. Designers eventually decided that, to achieve the relative head to tape speeds required, the recording head itself could not stay still.

Early scanning techniques

Many of the earliest video recorders used a moving head to increase the relative head to tape speed. From the very first prototypes this was achieved by mounting the heads on a rotating drum.

Ampex Mark 1 arcuate recorder

An early notable attempt to increase the head to tape speed was the Ampex Mark 1 arcuate recorder. Built in 1952, this machine wrote the video information onto tape as arcs, using three heads on the drum. It proved unreliable and difficult to regulate, and its geometry wasted more tape than was necessary. Arcuate tape machines were not successful.

Figure 77 The arcuate recorder
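The gain from a rotating drum follows from simple geometry: a head on the drum sweeps the drum's circumference once per revolution, so the head-to-tape speed is roughly π × diameter × revolutions per second, almost regardless of how slowly the tape itself moves. The drum size and speed below are hypothetical, purely for illustration:

```python
import math

# Relative head-to-tape speed from a spinning drum.
# Drum diameter and speed are hypothetical illustrative values,
# not taken from any particular format.

drum_diameter_m = 0.075      # 75 mm drum (hypothetical)
drum_rpm = 9000              # hypothetical drum speed
tape_speed_m_s = 0.10        # a slow linear tape speed

head_speed = math.pi * drum_diameter_m * (drum_rpm / 60)
print(f"Head sweep speed:  {head_speed:.1f} m/s")    # ≈ 35 m/s
print(f"Linear tape speed: {tape_speed_m_s:.2f} m/s")
```

Compare this with VERA's 21 metres per second of linear tape: the drum achieves a higher writing speed while consuming a tiny fraction of the tape.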
The Ampex Mark 1 did, however, give rise to the transversal scanning technique used by their quadruplex machines.

The Ampex VR-1000 quad recorder

The original Ampex VR-1000 machine used 4 heads fitted 90 degrees apart on a spinning drum (hence the name "quadruplex", or simply "quad"). The drum spins in line with the tape. The tape itself is 2 inches wide and is curved by a vacuum chamber to fit around the drum. Each head records a stripe of video across the tape, called a track. As soon as one head breaks contact with the tape the next one is ready to carry on. The tape moves a little more than the width of one of these tracks before the next head comes along. This keeps the tape speed slow. The drum spins quickly. This makes the head to tape speed high and allows a high bandwidth signal to be recorded.

Figure 78 The Quad tape path

The drum spins at 14400rpm (US machines), recording 960 tracks per second. 16 tracks make up each video field, so each head records or plays back just 16 lines of video. Quad recorders manage to use up 15
inches of tape per second. With a 4800 foot reel of tape it was possible to record just over 1 hour of video.

The quad machine is generally not considered a true helical scan machine, because the track angle is almost perpendicular to the tape's direction; the scanning method is generally referred to as transversal. A longitudinal control track is also recorded along the edge of the tape so that the machine can lock to it, with the playback heads exactly following the tracks as they were recorded. Audio is also recorded with a static head on a longitudinal track, and there is provision for a lesser quality audio track called cue.

Figure 79 The Quad tape footprint

Quad remains the longest-lasting video tape format of all time. This is partly because there was no alternative video recording format for many years, and partly because no format since has remained in use for as long before being superseded by something else.

However, Quad had its quirks. Tape operators had to ensure regularly that the machine was aligned correctly. The drum and vacuum chamber were particularly sensitive: the forces applied literally stretched the tape, and applying exactly the same level of brutal treatment to the tape every time it was played was not easy.

True helical scanning

Helical scanning uses the same basic geometry as transversal scanning. However, instead of the drum spinning vertically, producing tracks on tape that are almost vertical, helical scanning uses a drum that is nearly horizontal.

Another difference between transversal and helical scanning is the tape wrap. In transversal scan the vacuum chamber achieves a slight wrap around the drum, stretching and distorting the tape as it does so.
However, in helical scan designs the wrap is huge by comparison. In some formats wraps of almost 360 degrees have been used, although a little over 180 degrees is more common. Even though the wrap is large
none of the stresses common in Quad machines is placed on the tape in helical scan machines.

Figure 80 Helical scan (from above)

During recording and playback the tape moves slowly through the machine and round the drum. The point where the tape meets the drum is called the entrance side; the point where the tape leaves the drum is called the exit side.

Figure 81 Helical scan (from the side)

The drum assembly itself sits in the machine at a slight angle and in most cases consists of two halves. The top half spins and the bottom
half is static. The record and playback heads are fitted to the bottom edge of the top half. The bottom half has a rebate cut into it, called a rabbet. The rabbet is cut at an angle: in most machines it is very close to the top of the lower drum near the entrance side, and much lower at the exit side. Taking into account the angle of the drum assembly as a whole, the rabbet is effectively horizontal. The bottom edge of the tape rests on the rabbet as it passes around the drum assembly.

The angled drum assembly and the way the rabbet is cut mean that the heads describe a helical path across the tape as the drum spins. In some formats the track angle goes upwards, in others it goes downwards, depending on the angle of the rabbet and the rotational direction of the drum. In most modern tape formats the drum spins anti-clockwise. The tracks recorded on tape are about 5 degrees from the line of the tape, and very long compared to those in transversal scanning machines.

Modern video recorder mechadeck design

The mechadeck is the mechanical part of a video recorder, consisting of the tape reels, the servo mechanisms, including the pinch wheel and capstan mechanism and tension regulation, the drum and all the record and playback heads, tape cleaners, head cleaners and the cassette handling mechanism. Most modern video tape recorders have similar mechadeck designs.

Tape, normally enclosed in a cassette, travels from the left (supply) reel to the right (take-up) reel during normal recording or playback. The route taken by the tape from the supply reel to the take-up reel is called the tape path. The tape path from the supply reel to the capstan/pinch wheel is called the supply side; from the capstan/pinch wheel to the take-up reel is called the take-up side.

The supply side of the tape path is by far the most important part. It contains all the record and playback heads. Good supply side tension regulation is important.
There is often no take-up side tension regulation at all.

Many machines have a tape cleaner placed in the tape path as the tape leaves the supply reel. There will also be a tension regulator in the tape path between the supply reel and the drum, to ensure that the tension around the drum is correct.

Many video tape machines have static heads before the drum. A full erase head is fitted to all recorders, to erase everything on the tape before any new recording is made. Some machines also include a control head. This head records special pulses on a longitudinal track along either the top or bottom edge of the tape, and plays these pulses back to help the servo system lock during playback.

There is a guide just before the tape wraps around the drum. Called the entrance guide, it has a flange that touches the top of the tape, stopping it from riding up as it is wrapped around the drum. The tape is prevented from dropping by the drum rabbet.
There is another top touching guide on the exit side of the drum, called the exit guide.

There may also be one or more static heads between the exit side of the drum and the capstan/pinch wheel. These are commonly used for timecode and audio, but may also be used for control.

The capstan is a precision servo controlled motor responsible for pulling the tape through the tape path at the correct speed and position. A pinch solenoid forces a soft rubber pinch wheel against the capstan, squeezing the tape between the two. This force is strong enough to stop the tape from slipping, but not so strong as to damage it. Although the capstan rotates at essentially a constant speed, the servo system constantly speeds it up and slows it down by minute amounts to keep the tape in the correct position relative to the drum, and to ensure that the video heads move exactly up the centre of the helical tracks. The control head and track are used in some machines to accomplish this; others use the RF signal from the helical tracks.

The capstan and pinch wheel isolate the supply side of the tape path from the take-up side. Various guides lead the tape back into the cassette and onto the take-up reel. Some machines include a take-up tension regulator to ensure that there is a small amount of slack on the take-up side, to allow for sudden changes in direction during normal operation, but not so much as to start throwing tape loops. The drum assembly is at an angle, and some machines include an angled guide to ensure that the tape is straight as it re-enters the cassette.

Guard bands

Older helical scan machines record analogue composite video. Each track contains both the luminance and the colour video information. As with every video recorder before and since, it was important to ensure that the playback heads followed the recorded tracks exactly. All early machines used a longitudinal control track running along the top or bottom edge of the tape.
The pulses recorded on this track helped the machine find the beginning of each helical track. The control track is not, however, an exact method of finding the beginning of each helical track. The helical tracks themselves are very thin, and it is possible for the control head to be in slightly the wrong place. With any error in the position of the control head, the video heads will not move up the centre of the helical tracks.

Figure 82 Guard bands
If the helical tracks are packed together tightly there is a risk of playing back a portion of video from another helical track. Editing also becomes problematic, as recorders run the risk of over-recording material on tape that they should not.

A guard band is a space between helical tracks with no recorded signal. Early machines used guard bands to prevent the video heads from picking up the recording from adjacent tracks during playback, if the control head was slightly in the wrong position, and to prevent the machine overwriting the wrong helical track during edits. However, guard bands use up tape. Later machines abandoned guard bands in favour of track azimuth, and thus saved tape.

Helical tracks with track azimuth

Later video machines recorded component video, with separate circuitry, record heads, playback heads and tracks for luminance and colour.

Figure 83 Track azimuth (no, positive and negative azimuth)
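The tape saving is easy to quantify. With guard bands, the effective track pitch is the track width plus the guard band; with azimuth recording, tracks can be packed edge to edge. The widths below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Tape saved by abandoning guard bands for azimuth recording.
# Track and guard band widths are hypothetical illustrative values.

track_um = 50          # track width (micrometres)
guard_um = 30          # guard band width (micrometres)

pitch_with_guard = track_um + guard_um   # pitch with guard bands
pitch_azimuth = track_um                 # tracks packed edge to edge

saving = 1 - pitch_azimuth / pitch_with_guard
print(f"Tape saved: {saving:.1%}")       # → 37.5%
```

Equivalently, the same length of tape holds proportionally more tracks, which is the capacity increase referred to below.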
Making a tape machine that is able to record and play back entirely in component increases the quality of recording over older composite machines, and removes the problems associated with editing composite material. However, component video recorders are more expensive than composite ones, as they are effectively two video recorders in one.

Designers needed a way for these machines to differentiate the helical tracks responsible for luminance from those for colour. The method adopted was track azimuth. Track azimuth involves tilting the head gaps over at an angle. The luminance head gaps are tilted over positively and the colour head gaps negatively. During recording, the luminance tracks are recorded with a positive azimuth and the colour tracks with a negative azimuth.

If the machine is badly aligned and a luminance playback head tries to play back a colour track, the angle of the recording will be incorrect; in fact it will be incorrect by twice the azimuth angle. This severely reduces the signal. Azimuth angles of about 15 degrees are popular, giving a total error, if each head is on the wrong track, of about 30 degrees.

Azimuth replaces the need for a guard band. The colour tracks are effectively guard bands for the luminance heads, and vice versa. Helical tracks can be packed next to each other, saving a lot of tape and increasing the tape's recording capacity.

Video head design

The principles used by video tape recorders to record a signal onto magnetic tape have not changed since the very first tape recorders. They still rely on a doughnut shaped head made from ferrite, or some similar material, with a slot cut in its front face and a coil wrapped around its back. The dimensions used in modern video recorders may be a lot smaller, but you can still find the doughnut idea somewhere in every video record and playback head.

The video heads, and the tracks they record, are thin. Older formats used heads close to 100um thick.
Modern formats use heads less than 10um thick. Video heads are no longer the classical round doughnut shape: they are square, and the surface in contact with the tape is a long rectangle. This reduces tape bounce at the head gap and reduces wear. The coils are wound on the sides of the head; only a few turns are required on each side for the head to be effective.

Channelling flux

One of the challenges facing head designers is to channel as much flux as possible to the front of the record head gap, where the head is in contact with the tape and where recording and playback take place. Likewise the front of the playback head gap needs to be as sensitive as possible, to achieve maximum signal off the pre-recorded tape.
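One way to see why these dimensions keep shrinking is to look at the wavelength actually recorded on tape: it is the writing (head-to-tape) speed divided by the signal frequency, and the head gap must be comparable to or smaller than it. Both figures below are illustrative, not taken from any particular format:

```python
# Recorded wavelength on tape = writing speed / signal frequency.
# Both values are illustrative, not from any particular format.

writing_speed_m_s = 35.0     # hypothetical helical writing speed
frequency_hz = 5e6           # a video-range frequency

wavelength_um = writing_speed_m_s / frequency_hz * 1e6
print(f"Recorded wavelength: {wavelength_um:.0f} um")   # → 7 um
```

A few micrometres of tape per cycle is why head gaps, track widths and manufacturing tolerances all sit at the micrometre scale.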
Figure 84 Video head designs (the ferrite head, the MIG head, the TSS head)

The first modification is to cut away the back of the head, where the head gap is. This forces the flux lines forwards to the front of the head. The second modification is to introduce a different material, with low reluctance, to the front face of the head, just where the head gap is. Flux prefers to jump the gap at this material rather than the ferrite behind. Several materials are used, often with exotic names to hide their true composition, such as Softmax and Sendust. However, these materials tend to be softer than ferrite and therefore wear away more quickly. Only a thin sliver is placed on the head, close to the head gap, rather than across the whole front face.
Automatic tracking

Another important technology that has been a vital part of modern professional and broadcast video tape machines is the automatic tracking playback head. While servo systems using a control track were able to bring the video heads, in particular the playback heads, close to the centre of the helical tracks, there was a certain degree of error due to badly adjusted servo electronics or an imperfectly adjusted control head.

Another problem is even more annoying. The geometry of any helical scan video recorder will play back correctly at play speed, because that is how the tape was recorded: at play speed. If the machine is speeded up slightly, slowed down, or paused altogether, the geometry changes. Now the playback heads will not travel exactly up the centre of the helical tracks, but will wander off track and possibly cut across adjacent helical tracks. This is annoying for editors, who regularly want to play back at other than play speed, or pause the machine altogether and look at a single frame or field.

Automatic tracking video playback heads eliminate these problems. Introduced by Ampex in 1977 as the Automatic Scan Tracking (AST) system and by Sony in 1984 as the Dynamic Tracking (DT) system, both systems rely on moving the playback heads to keep them in the centre of the helical tracks.

Automatic tracking playback heads

Automatic tracking video playback heads use piezo-electric crystal bimorphs. The bimorph consists of two piezo-electric crystals bonded together. When a voltage is applied across the bimorph, one crystal expands while the other contracts, causing the bimorph to bend. Reversing the voltage reverses the bend. One end of the bimorph is fixed to the drum; the playback head is placed on the other end. In early designs, including the Ampex AST designs, one bimorph was used. Two bimorphs are used in later designs, because this keeps the head itself perpendicular to the tape surface.
One disadvantage with this kind of tracking system is that the bimorphs require a high voltage to bend sufficiently. Any machine with automatic tracking heads needs brushes and slip rings to carry these high voltages to the drum. Furthermore, the brushes and slip rings must maintain good contact, and the drum must contain smoothing circuitry. Any intermittence in the supply to the bimorphs could generate electromagnetic radiation that would be disastrous to the delicate record and playback process.

Automatic tracking in operation

A small alternating voltage of about 450kHz is applied to the bimorphs, causing the heads to wobble continually. The wobble continually takes the head slightly off track, causing a slight drop in the RF signal. The servo system continually checks the level of the RF signal from the
heads, keeping the drops in RF as small as possible by adding a DC voltage to the wobble voltage.

Automatic tracking playback heads allow operators to change the playback speed of a helical scan tape machine and still maintain a steady picture. They have become an essential part of professional and broadcast tape machines.

Tension regulation

It is vital that the tape tension around the drum is correct and maintained within a small range. If the tape is too tight the video heads and tape will wear out rapidly. If the tape is too loose the video heads will not be able to maintain good contact with the tape surface, and there is a risk that the machine will throw tape loops or that the tape will stick around the drum.

There are two types of tension regulator: the purely mechanical type and the electromechanical type. Most domestic machines, and cheaper and smaller professional machines, use purely mechanical tension regulators. They are simple, light and cheap. All high-end professional and broadcast tape machines, especially those intended for studio use, use electromechanical tension regulators. Although these are generally heavier, more complex and more expensive, they offer much finer tension regulation, and a chance for the servo system to monitor the tension regulation process. This in turn allows the machine to have different tension regulation response times for different modes of operation, and fault detection in case the tape sticks or breaks.

The principles behind good tension regulation

All tension regulators operate in the same basic way. During recording or playback the capstan and pinch wheel pull the tape out of the supply reel and around the drum. The take-up reel motor applies a constant pull on the tape. This pull is very light, but it ensures that any tape that has come through the capstan and pinch wheel is drawn onto the take-up reel in a tidy fashion.
The supply reel motor, meanwhile, resists tape being drawn out of the supply reel. This is what produces the tension: the higher the resistance, the higher the tension.

Mechanical tension regulators

Mechanical tension regulators have a sensing arm with a roller on the end of it, around which the tape moves. The arm is connected to a spring and to a friction belt which is wrapped around the supply reel table. If the tape tension drops, the spring pulls the arm further out, tightening the friction belt around the supply reel table and increasing its resistance. The capstan continues to pull tape out and the tension increases. Likewise, if the tape tension increases, the arm is pulled in against the spring, loosening the friction belt around the supply reel table and decreasing its resistance to allow tape out.
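The action of such a regulator is essentially proportional feedback, which can be sketched as follows; all constants are invented purely for illustration:

```python
# A minimal model of a mechanical tension regulator: the further the
# sensing arm swings out (tension low), the harder the friction belt
# brakes the supply reel. All constants are illustrative.

TARGET_TENSION = 1.0   # desired tension, arbitrary units
GAIN = 0.5             # arm position to braking torque

def brake_torque(tension):
    error = TARGET_TENSION - tension
    # A friction belt can only brake the reel, never drive it backwards.
    return max(0.0, GAIN * error)

print(brake_torque(0.5))   # tension low  → brake harder → 0.25
print(brake_torque(1.2))   # tension high → belt released → 0.0
```

The `max(0, ...)` clamp captures the limitation discussed next: a friction belt can resist rotation but cannot reverse it.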
Mechanical tension regulators cannot handle loose tape by pulling it back into the supply reel. This is because the tension regulator can only stop the supply reel motor; it cannot make it turn backwards.

Electromechanical tension regulators

Electromechanical tension regulators use a sensing arm with a roller on the end of it, just as mechanical tension regulators do. Likewise the arm is connected to a spring, although the spring tends to be of better quality, and in some cases there may be more than one spring, to give a more accurate response. The arm also has a strong magnet attached to it. One or more Hall effect detectors are fixed to a circuit board, either on the mechadeck or on the tension regulator assembly. The Hall effect detector outputs a signal corresponding to the position of the tension regulator arm. With a properly aligned spring, the position of the arm also provides a measure of the tension in the tape.

The signal from the Hall effect detector is sent to the machine's servo system, which controls the supply side reel motor. Reel motors in this kind of machine are more complex than those in machines with mechanical tension regulators. The servo system is able to control the direction, speed and amount of torque very precisely. Rather than using friction, the supply reel is effectively trying to turn backwards. This ability to control the backward rotation of the supply reel motor also allows electromechanical tension regulators to draw loose tape quickly back into the supply reel.

Variation in tape path designs

Before the universal acceptance of cassettes for video tape recorders, manufacturers designed several exotic tape paths that often required the tape operator to spend a while lacing up before the machine could be used. Indeed, we often take it for granted, as we slam another cassette into the machine, that it was not always that easy.
Tape path designs have settled to just a few variations since the introduction of cassettes, because the machine must be able to draw the tape out of the cassette automatically before recording and playing can take place, and put it neatly back into the cassette before it is ejected. Any complicated lacing cannot be performed.

Terms of confusion

Various terms have been given to the various tape wrap patterns, and there appears to be a fair degree of confusion as to which one is which. The term 'omega wrap' in particular has been associated with many wrap patterns that are not very similar.

Alpha wrap

The alpha wrap takes its name from the Greek letter α. The tape passes around the drum for a full 360 degrees. The wrap is sideways, with the entrance and exit guides on the left or right. Alpha wrap would be very
difficult to achieve with cassettes, because the tape passes over itself. The tape must be manually laced, so it is only used in machines with spools. As an example, alpha wrap was used by the old Philips EL3402 1" machine.

Omega wrap

Omega wrap takes its name from the Greek letter Ω. The tape passes around the drum for almost 360 degrees; the active wrap is about 270 degrees. Although the term omega is used with many cassette machines, it is not actually possible to perform a true omega wrap with a cassette: the tape must be laced. As an example, omega wrap is employed by 1" C format machines.

C wrap

The wrap pattern is actually in the shape of a backward 'C'. C wrap is possible, and popular, with cassette machines. The tape is drawn from the cassette at one point and taken between 200 and 300 degrees round the drum in an anti-clockwise direction, giving an active wrap of anything between 180 and 270 degrees. As an example, C wrap is popular with broadcast studio machines using the Betacam SP and Digital Betacam formats and other similar tape formats.

M wrap

This is the most popular wrap pattern, and is used in cassette based machines. Tape is drawn from the cassette at two points: round the left side of the drum and round the right side of the drum, to give a total wrap of between 250 and 300 degrees, and an active wrap of anything between 180 and 270 degrees. As an example, M wrap is popular with domestic VHS machines and some broadcast machines, like the Sony D1 and D2 machines and the PVW range of Betacam SP machines.

Definition of a good tape path

A perfect tape path would contain perfectly circular supply and take-up reels. The tape would move from the supply reel to the take-up reel in a straight line without touching anything. Video and audio would be recorded and played back without any heads touching the tape. Clearly this is an impossibility!
Compromises have to be made. The record and playback heads must touch the tape. Furthermore, helical scanning requires that the tape be wrapped around the drum, so the tape must change direction dramatically. Helical scanning also requires accurate tape tension control. The speed of the tape must be governed and regulated, and reel motors are simply not good enough to accomplish this: a capstan is required.

Any item like a guide, drum, static head, cleaner or capstan changes the tape's direction and adds friction. Spinning guides, and the drum itself, are never absolutely central and always add a slight wobble to the
tape's motion. There are therefore opportunities for the tape to stick, to be forced into the wrong position, or for its timing to be altered.

The important part of any video tape recorder tape path is the distance between the supply reel and the capstan. This is where all the heads are, and this is where the tape must be at the correct tension and in the correct position. This length of tape should be as short as possible, and should pass across as few items as possible. Static heads, the cleaner, the drum, the entrance and exit guides, the supply side tension regulator and the capstan/pinch wheel are vital and therefore always present. Designers ensure that any other guides are only added to the design if they are absolutely necessary.

A perfect tape path does not need top touching or bottom touching guides, or a rabbet on the lower drum; the tape would pass around the various items on the mechadeck in exactly the correct position. Designers calculate the angles of guides, drum, static heads, etc. so that the tape runs smoothly through the tape path. Although it is impractical to expect a perfect level of mechanical accuracy, the rabbet and any guides touching the top or bottom of the tape should do so very lightly.

What happens to the tape after the pinch wheel and capstan is not very critical. The pinch wheel and capstan act as a wall, isolating the drum and static heads from any minor wobbles in the tape afterwards. Therefore the amount of tape, and the number of guides and other hardware, on the take-up side is not important.

The servo system

Modern video machines contain a number of servo loops. Normally one item is the master, and the other servo loops slave off the master. When a video machine is playing back, the master is the drum. It obeys the incoming reference, taking no regard of anything else, and spins at a constant rate related to the reference itself.
The drums in early analogue machines spin at frame rate: 25Hz for PAL (625 line) based machines and 29.97Hz for NTSC (525 line) based machines. Later digital machine drums spin at a multiple of frame rate. The rest of the servo system slaves off the drum.

The first servo loop to consider uses signals from the drum and the control head, and uses the capstan as the control. The machine's servo system controls the capstan to pull the tape through at an almost constant rate, while signals from the drum and the control head inform the servo system of the relative position of the tape and the spinning drum. By slightly altering the speed of the capstan, the servo system ensures that the timing between the pulses from the drum and the control head is correct, and thus that each playback head finds the beginning of each helical track.

Another servo loop slaves off the capstan servo loop. This loop uses a signal from the tension regulator to control the supply reel, maintaining the tension around the drum at a constant predefined level. In simpler machines this is done mechanically; in more complex machines it is done electronically.
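The capstan servo loop described above can be reduced to a simple control idea: measure the phase error between the drum pulse and the control-track pulse, and trim the capstan speed to close the gap. The sketch below is a toy simulation with invented gains and normalised units, not the behaviour of any real machine.

```python
# Hypothetical sketch of the capstan servo loop: the drum is the master,
# and the capstan speed is trimmed so that control-track pulses stay in
# phase with the drum pulses. All names and figures are illustrative.

NOMINAL_TAPE_SPEED = 1.0   # normalised capstan speed
GAIN = 0.5                 # proportional gain (invented for the sketch)

def capstan_servo(initial_phase_error, steps=50):
    """Return the residual phase error after `steps` frames of correction."""
    phase_error = initial_phase_error   # control pulse vs drum pulse, in frames
    for _ in range(steps):
        # The servo compares drum and control-track pulse timing...
        correction = -GAIN * phase_error
        # ...and slightly alters the capstan speed to close the gap.
        speed = NOMINAL_TAPE_SPEED + correction
        # Tape moving faster or slower than nominal changes the phase error.
        phase_error += speed - NOMINAL_TAPE_SPEED
    return phase_error

residual = capstan_servo(initial_phase_error=0.2)
print(abs(residual) < 1e-6)   # the heads now find the start of each track
```

Each pass multiplies the error by (1 - GAIN), so the tracking error decays geometrically towards zero, which is the essence of how the heads are steered onto the helical tracks.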
In some machines there is also a servo loop between a take-up tension regulator and the take-up reel, to maintain the take-up tension.

Analogue video tape recorder signal processing

Video tape recorder signal processing can be divided into a number of distinct areas. The first division is between record and playback processing.

The problems of recording to tape

The earliest pioneers of tape recording technology discovered that it was impossible to record an audio or video signal directly to tape and expect a reasonable playback signal. Two characteristics ensured that the record process was not going to be straightforward: the basic behaviour of inductors, and magnetic hysteresis.

When a record head records a signal, the current applied to the head generates a flux which magnetises the tape. The strength and direction of magnetisation is directly related to the current. When the playback head plays the tape back, the current generated at the output of the head is proportional to the rate of change of magnetisation on the tape. This term 'rate of change' is crucial. If a large DC signal is recorded to tape, it will magnetise the tape strongly, but nothing will be played back, because the rate of change is zero. Conversely, if a smaller high frequency signal is recorded to tape, a large high frequency signal will be played back, because the rate of change is high.

This characteristic is evident when looking at the control track of most professional VTRs. The control head records a 25Hz or 29.97Hz square wave signal on tape. The resulting control track consists of positively and negatively magnetised regions, and the playback signal is a series of large negative and positive spikes, one for each negative and positive transition.
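The 'rate of change' behaviour can be checked numerically by modelling the playback head output as the derivative of the recorded magnetisation. The signals and figures below are invented for illustration.

```python
# Playback head output modelled as d(magnetisation)/dt:
# DC plays back nothing; higher frequencies play back louder.
import numpy as np

dt = 1e-4
t = np.arange(0, 1, dt)

dc = np.full_like(t, 5.0)            # strongly magnetised, but constant
slow = np.sin(2 * np.pi * 10 * t)    # 10 Hz sine
fast = np.sin(2 * np.pi * 100 * t)   # 100 Hz sine, same recorded amplitude

def playback(recorded):
    return np.gradient(recorded, dt)  # head output ~ rate of change of flux

print(np.max(np.abs(playback(dc))))    # ~0: DC plays back nothing
print(np.max(np.abs(playback(slow))))  # ~2*pi*10, about 63
print(np.max(np.abs(playback(fast))))  # ~2*pi*100, about 628: 10x larger
```

The same model also reproduces the control-track observation: differentiating a square wave yields a spike at each transition, which is exactly what the control head plays back.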
This 'rate of change' characteristic makes the record/playback process non-linear: it differentiates the record signal and introduces a phase shift between the record and playback sine waves. The second characteristic is hysteresis. This describes the 'memory' magnetic materials have: apply a magnetic flux to a magnetic material and it will remember it by becoming magnetised.

The answer to these problems is modulation. Modulation is the process of combining a low and a high frequency signal together into one signal. There are two types of modulation: amplitude modulation (AM) and frequency modulation (FM). AM involves changing the amplitude of the high frequency signal with the low frequency signal. FM involves changing the frequency of the high frequency signal with the low frequency signal. AM is the easier modulation system to design, and was used in the first attempts at modulating the video signal before recording it to tape. However FM is more resilient, and was chosen as the modulation of choice for video tape recorders.
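The FM idea above can be sketched in a few lines: the low frequency signal sways the instantaneous frequency of a high frequency carrier. The carrier and deviation figures here are invented purely for illustration, not those of any tape format.

```python
# Minimal FM sketch: phase is the running integral of instantaneous frequency.
import numpy as np

dt = 1e-6
t = np.arange(0, 0.01, dt)
video = 0.5 * np.sin(2 * np.pi * 1e3 * t)   # stand-in low frequency 'video'
carrier_hz = 50e3                           # illustrative carrier
deviation_hz = 10e3                         # frequency swing per unit input

inst_freq = carrier_hz + deviation_hz * video   # frequency follows the video
phase = 2 * np.pi * np.cumsum(inst_freq) * dt
fm = np.cos(phase)

# The envelope stays constant: the information is in the zero crossings,
# which is why FM shrugs off the level variations inherent in tape playback.
print(np.max(np.abs(fm)))   # always ~1.0
```

An AM signal, by contrast, carries its information in the envelope, so every amplitude disturbance on tape corrupts it directly.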
Input processing

Reference input selection

An important part of the input circuitry is the reference input. The machine should be able to play a tape back on its own, maintaining good and consistent timing. It should also be able to lock to an incoming reference when playing back, locking the entire playback process to that reference. The machine should also be able to lock to an incoming reference while recording, or to lock to the incoming video signal it is recording. Therefore every tape machine includes a precision oscillator and sync pulse generator (SPG). This module can either free run, providing a good reference for the machine, or be genlocked to either an incoming reference signal or the video input. Part of the input processing is a switch to select which input is directed into the oscillator and SPG.

Video input processing

All video tape recorders have input circuitry. This is required to convert the input video into a common form appropriate for recording on tape. For instance, component video recorders require any video input to be in component form before any final encoding or modulation can occur prior to recording to tape. Therefore input circuits will include a composite decoder, or S-video decoder, and a selection switch to allow the operator to select which type of input to record.

Input audio processing

As with video inputs, all video tape recorders include switches, equalisers and noise reduction encoders (Dolby for instance), and may even provide microphone power, to convert and process the incoming audio.

Tape encoding

The video signal needs processing prior to recording to tape. This will include FM. It may also include pre-emphasis to improve the recorder's ability to capture sharp transitions and detail in the image.
It may also include a small amount of AM after the FM to reduce the possibility of over-modulation problems, which sometimes manifest themselves as 'bearding' on the playback image. The audio signal needs little further processing other than standard bias modulation before being recorded to the longitudinal tracks.

Signal transfer to the drum

Once the decision had been made to use a rotating drum to increase the relative head to tape speed, a way was needed to transfer the video signals onto the spinning drum and to the record heads. Wire connections could hardly be used; they would very quickly tie themselves in knots and wrench themselves free. Slip rings and brushes also presented problems: it was impossible to maintain a good enough connection.
The answer lies in the rotary transformer. This operates in a similar way to a standard transformer, with two windings (coils) sitting close to one another. Current in the input coil produces a magnetic flux. As the current changes, the flux changes, and the rate of change of this flux excites a current in the output coil. If an AC signal is fed to the input coil, an AC signal will appear at the output coil, albeit phase shifted.

Rotary transformers have one coil built into the static lower drum and the other built into the spinning upper drum. An RF signal can be transferred from the lower to the upper drum during recording, and from the upper to the lower drum during playback. Modern video tape recorders have many rotary transformers for transferring more than one signal onto and off the upper drum. This is essential in component video machines, where there is a separate path for the luminance and colour signals. A separate transformer is often also used to transfer switching information to the upper drum, so that the drum itself can switch record or playback signals between different heads, either as the drum rotates, or in multi-format machines where different playback heads are used to play back tapes of different formats.

Output processing

One of the challenges facing designers of early tape recorders was how to play the tape back with smooth, consistent timing. The timing requirements of a standard broadcast video signal are very exacting. Video tape recorders are essentially mechanical, and no matter how well the machine is built, and no matter how good the servo system is, there will still be a slight amount of mechanical wobble, which introduces huge timing fluctuations compared to the timing accuracy required of broadcast video signals. The answer lies in a clever piece of circuitry called a timebase corrector, which is explained in a separate section below.
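The principle behind the timebase corrector just mentioned (store the jittery signal, read it out at a perfectly regular rate) can be simulated with a toy model. All timings below are invented, normalised units.

```python
# Toy timebase correction: samples arrive with mechanical wobble, are held
# in a store behind a fixed delay, and leave at exactly one unit apart.
import random

random.seed(1)

# Arrival times off tape: nominally 1 unit apart, with random wobble.
arrival_times = []
t = 0.0
for _ in range(100):
    t += 1.0 + random.uniform(-0.1, 0.1)
    arrival_times.append(t)

# The store introduces a fixed delay, then reads out at exactly 1 unit.
DELAY = 5.0
output_times = [arrival_times[0] + DELAY + i * 1.0 for i in range(100)]

jitter_in = max(b - a for a, b in zip(arrival_times, arrival_times[1:]))
jitter_out = max(b - a for a, b in zip(output_times, output_times[1:]))
print(round(jitter_in, 3), round(jitter_out, 3))  # e.g. ~1.1 in, exactly 1.0 out
```

The fixed delay is what the TBC's store buys: as long as the wobble never exceeds the buffered amount, the output timing is as smooth as the read clock that drives it.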
Signal transfer from the drum

RF signals from the playback heads are transferred off the drum through the rotary transformers described above. Head switching is required for those machines with a drum wrap of less than 360 degrees. Many machines use an active wrap of 180 degrees, with two sets of playback heads 180 degrees apart. The machine switches between heads at exactly the correct point, and thus maintains a continuous video signal from the tape. This switching can be performed on or off the drum. Switching on the drum means that fewer rotary transformers are required to transfer the video signals from the upper to the lower drum, but an extra rotary transformer is required to transfer switching information to the upper drum. Switching off the drum removes the need to transfer switching information to the upper drum, but increases the number of rotary transformers required to transfer the video signals off the upper drum.
Once the signal is off the drum it is buffered and equalised. There may also be an automatic gain control (AGC) to automatically correct small irregularities in the amplitude of the playback RF signal.

Output video and audio processing

The final piece of circuitry before the outputs processes the video and audio signals to provide whatever outputs the machine is designed to provide. Component analogue machines may include a composite encoder for a composite output, or analogue to digital converters for digital audio or digital video outputs.

The timebase corrector (TBC)

Sitting between the playback equalisers and the final output processing is the TBC. A TBC evens out the irregularities in the timing of the signal coming from the helical tracks of a video tape recorder. All TBCs do this by storing a certain amount of the signal and then playing it out at a constant rate.

Clock generation

An important part of any TBC is the ability to generate accurate clocks. Clocks are used to write video into the store and to read it out again. The write clock needs to follow accurately the timing irregularities in the signal coming from tape. The read clock needs to be locked to the machine's SPG, keeping constant, smooth timing. Of the two clocks, the write clock presents the greater challenge. A horizontal sync detector sends the horizontal sync pulses off tape to a timed monostable, which outputs a voltage depending on the rate of the sync pulses. The detector may also include a window discriminator, which ignores any false horizontal syncs and the half line pulses during the vertical interval. The signal from the timed monostable is fed into a voltage controlled oscillator (VCO). The VCO is designed to run at the same rate as the read clock when there is no signal from the timed monostable. Timing irregularities in the signal off tape will increase or decrease the horizontal sync rate.
This will cause the control voltage from the timed monostable to increase or decrease, shifting the VCO frequency up or down.

Charge coupled device (CCD) delay line TBC

The first TBCs used CCD delay lines. These half analogue, half digital devices consist of a long line of cells, each of which can hold an analogue charge. A clock input transfers all the charges one cell towards the end of the line. The input is connected to the first cell and the output to the last cell. CCD delay lines cannot input and output at the same time, so two delay lines are used, one writing while the other is reading, each designed to store one line of video. One line later the delay lines are switched: the one that was writing is now reading, and vice versa. The clocks are also switched, so that the delay line that is writing uses the write clock and the one that is reading uses the read clock.

Semiconductor TBC

All new TBC designs use semiconductor memory devices instead of CCD delay lines. Semiconductor memory devices are totally digital: they require a digital input and give a digital output. All semiconductor memory TBCs used in analogue video recorders therefore have analogue to digital converters at the TBC input and digital to analogue converters at the output.

Dual TBC designs

All modern broadcast analogue video tape recorders record component video, keeping the luminance and colour parts of the video signal separate throughout the whole record/playback process, even on tape. The luminance and colour playback signals thus experience their own, independent timing inconsistencies, and each must be timebase corrected independently if quality is to be maintained. A dual TBC has a separate horizontal sync detector, timed monostable and VCO for luminance and for colour, and it has two stores. The read clock is the same for both luminance and colour.

Popular analogue video recording formats

This is by no means an exhaustive list. There are many analogue video tape formats not mentioned here that were only moderately successful, and others that were outright failures.

Quadruplex (1956)

Introduced by Ampex, the Quadruplex tape format is commonly known as Quad. Quad is a professional 4 head transverse scan composite format. It uses spools of 2" tape. Tracks were originally 10 mils wide and 33 minutes from vertical. The drum is just over 2" dia., spinning at 14,400 rpm on the original NTSC machines. Quad was one of the longest lasting of video tape formats, and the Ampex VR-1000 was the first commercial video tape machine. There are still archives of Quad tape, and it is still in use in a few places, although it has been superseded for new recordings by other formats.
U Matic (1970)

Developed by JVC, Matsushita and Sony, and sometimes called Type E. U Matic is a professional 2 head helical scan composite format. It uses cassettes containing ¾" (19mm) tape. Helical tracks are 84um wide and 4.95 degrees from horizontal. The drum is 110mm dia., spinning at 1500 rpm (1800 rpm for NTSC). U Matic records 2 longitudinal audio tracks. LTC was not designed in as a separate track to start with, but was later given a dedicated track under the helical tracks. This meant that LTC had to be recorded first and could not be re-recorded without overwriting part of the helical tracks. Provision for VITC came later. U Matic was a very successful format because of its wide user base, from high end broadcast to professional and industrial use. Although not as long lasting as Quad, it was probably more popular. It eventually became available in the higher quality SP form, and in lo-band and hi-band modes.

Betamax (1975)

Developed by Sony. Betamax is a domestic 2 head composite helical scan format using the colour under system. It uses cassettes containing ½" tape. Helical tracks are just over 30um wide and 5.85 degrees from horizontal. The drum is 74.487mm dia., spinning at 1500 rpm (1800 rpm for NTSC). Betamax records 1 longitudinal audio track and no timecode. Betamax went head to head with VHS in the late 1970s, but eventually lost. Many reasons have been given for this: the reluctance of Sony to licence the format, the reluctance of video rental firms to accept pre-recorded Betamax tapes, and the lesser recording times and features of Betamax machines.

1" Type C (1976)

Developed by Sony and Ampex. Type C is a professional helical scan composite format. It uses spools of 1" tape. Helical tracks are 5.1 mils wide and almost flat, at 2.5 degrees from horizontal. The format uses a large drum of 132mm dia., spinning at 3000 rpm (3600 rpm for NTSC). Type C records 3 longitudinal audio tracks (4 in Europe), with LTC normally recorded on the last audio track.

VHS (Video Home System) (1976)

Developed by JVC and adopted by many other manufacturers. VHS is a domestic 2 head composite helical scan format using the colour under system. It uses cassettes containing ½" tape. Helical tracks are 2.3 mils wide in standard play mode, 1.15 mils wide in long play mode, and 5.96 degrees from horizontal. The drum is 60.5mm dia., spinning at 1500 rpm (1800 rpm for NTSC). VHS records 2 longitudinal audio tracks and no timecode. VHS went head to head with Betamax in the late 1970s, but eventually won, going on to become the most popular domestic and industrial format.
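The drum diameters and speeds quoted for these formats show why helical scanning is used at all: the writing speed is set by the spinning heads, not by the slow linear tape speed. A rough estimate (ignoring the small contribution of the tape's own motion and the track angle, and taking PAL/NTSC drum speeds of 1500/1800 rpm for the domestic formats):

```python
# Head-to-tape writing speed ~ drum circumference x revolutions per second.
import math

def head_speed_m_per_s(drum_diameter_mm, drum_rpm):
    return math.pi * (drum_diameter_mm / 1000.0) * (drum_rpm / 60.0)

# VHS (PAL): 60.5 mm drum at 1500 rpm
print(round(head_speed_m_per_s(60.5, 1500), 2))   # ~4.75 m/s

# 1" Type C (NTSC): 132 mm drum at 3600 rpm
print(round(head_speed_m_per_s(132, 3600), 1))    # ~24.9 m/s
```

Linear tape speeds are only a few centimetres per second, so the drum multiplies the effective writing speed by a factor of well over a hundred, which is what makes the bandwidth of a video signal recordable at all.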
Video 2000 (1979)

Developed by Philips and Grundig. Video 2000 is a domestic 2 head composite helical scan format using the colour under system. It uses cassettes containing ½" tape. Helical tracks are 22.6um wide and 15 degrees from horizontal. The drum is 65mm dia., spinning at 1500 rpm. Video 2000 records 2 longitudinal audio tracks and no timecode. In Europe, Video 2000 was the 'other domestic format' while VHS and Betamax battled for supremacy. It boasted automatic tracking, using bimorphs similar to those used in professional machines, and dual sided cassettes. However Video 2000 was never going to win against either VHS or Betamax. While video rental firms were pushed to provide two versions of each movie in their shops, VHS and Betamax, it was inconceivable that they would supply three.

Betacam (1982)

Developed by Sony, and sometimes called Type L. Betacam is a professional 4 head component helical scan format. It uses cassettes containing ½" tape. Helical tracks are 4.679 degrees from horizontal: 86um wide with a +15.25 degree azimuth for luminance, and 72um wide with a -15.25 degree azimuth for colour. The drum is 74mm dia., spinning at 1500 rpm (1800 rpm for NTSC). Betacam records 2 longitudinal audio tracks, LTC and VITC. Betacam became a popular professional format using oxide tape similar to that used by domestic Betamax. However, Betacam was really just a 'rehearsal' for the improved version, Betacam SP, which became a workhorse professional and broadcast video tape format.

8mm (1983) and Hi8 (1989)

Developed by a Japanese consortium. 8mm is a domestic 2 head composite helical scan format using the colour under system. It uses cassettes containing 8mm tape. Helical tracks are 20.6um wide and 4.88 degrees from horizontal. The drum is 1.6" dia., spinning at 1500 rpm (1800 rpm for NTSC). 8mm records 2 PCM audio channels and 2 AFM audio channels, and no timecode. Hi8 is an enhancement of 8mm, developed by Sony, using metal tape. Both 8mm and Hi8 gained reasonable success as domestic camcorder tape formats.

Betacam SP (1986)

Developed by Sony, and sometimes called Type L. An improvement over the Betacam format, Betacam SP has the same format dimensions but uses higher FM frequencies and metal instead of oxide tape. Betacam SP introduced 2 AFM audio tracks inserted into the colour helical track signal, giving the format 4 audio tracks altogether. The BVW-75 and BVW-75P became workhorse machines within the broadcast industry, selling in their thousands, along with millions of tapes, worldwide.

M2 (1986)

Developed by Panasonic. M2 is a professional 4 head component helical scan format.
It uses cassettes containing ½" tape. Helical tracks are 44um wide for luminance and 36um for colour, with a 15 degree azimuth, and 4.29 degrees from horizontal. The drum is 76mm dia., spinning at 1500 rpm (1800 rpm for NTSC). M2 records 2 longitudinal audio tracks, 2 AFM audio tracks, LTC and VITC. M2 was introduced as a competitor to Betacam SP, and some broadcasters adopted it as a standard. Although technically very similar, the machines gained a reputation for unreliability, probably due more to spare parts availability and service than to the machines' actual reliability, and M2 did not gain the universal acceptance that Betacam SP did.

S-VHS (1987)

Developed by JVC and adopted by many other manufacturers. S-VHS is an enhancement of the VHS format with improved luminance bandwidth. It gained popularity because of its compatibility with VHS.

Digital video tape recorders

Practical broadcast digital video recorders began to appear at the beginning of the 1980s, with the publication of CCIR 601 in 1982 and CCIR 656 in 1986. These two documents proposed a method of digitising component video signals and conveying them in digital form over a multicore cable. Sony designed the D1 video recorder specifically to record CCIR 601 signals without any loss.

The original CCIR 601 document specified 8 bit samples. However, the CCIR 656 document also specified two spare data bits intended for 'future development'. The industry grabbed these two spare bits, using them as ½ and ¼ LSB resolution and increasing the sample size to 10 bits. At about the same time as the transition from 8 to 10 bits, there was a transition from the original parallel, multicore cable method of conveying CCIR 601 data to a serial version using standard 75 ohm coaxial cable and BNC connectors.

Sony and other manufacturers, notably Panasonic and Ampex, followed over the years by producing broadcast quality digital video recorders recording either 8 or 10 bit CCIR 601 samples, either entirely transparently or with compression. Although digital video recorders have gained almost universal acceptance in broadcast, the domestic and industrial markets continue to use analogue tape formats, due to the overwhelming installed base of VHS and the introduction of analogue formats like Hi8, which have sufficient quality for most people's needs. DV has gained wide acceptance as a camcorder standard for domestic use.
The lack of any domestic DV television recorder has helped to keep VHS as the only practical home television recording format. It is unlikely that there will ever be a de facto standard digital home tape recording format; the imminent release of Blu-ray optical disk recorders will surely now kill any chance of a manufacturer introducing one.

The advantages of digital video tape recorders

Digital video recorders have a number of distinct advantages over analogue machines. The first is record transparency. A digital video signal can retain all its quality through the record/playback process: in theory, exactly the same digital data that is recorded to tape is played back. Although this is not exactly true, it is certainly true that professional digital video recorders allow video to be recorded, played back and re-recorded many more times than is possible with analogue recorders. This is important for editing and post production.
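The transparency advantage can be illustrated with a toy generation-loss model: each analogue re-recording adds a little noise, while a digital copy (error correction permitting) is bit for bit identical. The noise level is invented for illustration.

```python
# Toy model of generation loss: ten re-recordings, analogue vs digital.
import random

random.seed(0)

original = [random.uniform(-1, 1) for _ in range(1000)]

def analogue_copy(signal, noise=0.01):
    # Every analogue pass adds a little fresh noise on top of the old.
    return [s + random.gauss(0, noise) for s in signal]

def digital_copy(signal):
    # A digital pass reproduces the data exactly.
    return list(signal)

analogue = original
digital = original
for _ in range(10):               # ten generations of re-recording
    analogue = analogue_copy(analogue)
    digital = digital_copy(digital)

error = max(abs(a - o) for a, o in zip(analogue, original))
print(error > 0.01)               # noise accumulates generation on generation
print(digital == original)        # the digital copy is unchanged
```

The noise in the analogue chain grows with every generation, which is exactly why multi-generation editing pushed broadcasters towards digital recording.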
The second is robustness. Digital data can be protected with error correction data far more easily than an analogue signal. Furthermore, digital data can be shuffled and scrambled before recording to tape. If there is a large error on tape, either during recording or during playback, caused for instance by dust, the highly concentrated group of errors can be diluted over a large amount of data, becoming a widely spread group of small errors that can easily be corrected one by one.

Other advantages have become apparent over the years, as digital video tape formats have developed and computers have been introduced into the broadcast production chain. Later digital video recording formats offer long recording times and small tape sizes. They also offer the possibility of transferring digital data directly to the IT world, without loss, where computer based non-linear editors and effects processors open up a wide range of previously unavailable creative possibilities.

Digital video recorder or digital data recorder

It is important to remember that digital video recorders do not record digital video, or audio, as such. The data is always processed, scrambled, shuffled, sometimes compressed, and has extra error correction data added. What is actually recorded to tape is just digital data, and bears very little resemblance to the original video and audio it came from. Manufacturers are now stripping the video and audio input and output processing out of their digital video recorders to produce very competent data recorders for the IT backup and archive markets.

Digital video recorder mechadeck design

There is no difference between the requirements of a digital video recorder mechadeck and those of an analogue one. Digital video recorder mechadecks differ more as a result of general developments in mechadeck design than because of any special requirements.
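The shuffling idea from the robustness discussion above can be sketched with a toy block interleaver: data is rearranged before 'recording', a burst of adjacent errors is inflicted on the tape, and after de-interleaving the burst comes back as isolated, individually correctable errors. Real formats use far deeper, two-dimensional shuffles.

```python
# Toy interleaver: equivalent to writing data into a matrix row by row
# and reading it out column by column.
def interleave(data, stride):
    return [x for i in range(stride) for x in data[i::stride]]

def deinterleave(data, stride):
    # The inverse of interleave() when len(data) divides evenly by stride.
    return interleave(data, len(data) // stride)

payload = list(range(100))
on_tape = interleave(payload, 10)

# A burst error: 5 adjacent values on tape are destroyed (e.g. by dust).
for i in range(40, 45):
    on_tape[i] = None

played_back = deinterleave(on_tape, 10)

# The 5 errors are no longer adjacent: each sits alone, surrounded by
# good data, where error correction can repair it one by one.
bad = [i for i, x in enumerate(played_back) if x is None]
print(bad)   # -> [4, 14, 24, 34, 44]
```

A contiguous 5-symbol burst on tape becomes five single errors spaced ten symbols apart after de-interleaving, which is exactly the dilution the text describes.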
Most digital video recorder mechadecks use a spinning upper drum and a static lower drum. They employ either M wrap or C wrap, and they all incorporate sophisticated supply side tension regulation, with the capstan on the exit side of the drum.

A notable difference, employed by the Sony Digital Betacam, D1 and D2 machines, is the rotating mid drum assembly. The lower drum is static, as normal, but these machines also have an upper drum fixed to the lower drum, leaving a narrow slot between the two. A mid drum assembly spins between the upper and lower drums, with the record, playback and flying erase heads fixed to its circumference and protruding through the slot to touch the tape. This technique is more expensive, but produces equal strain on the tape at every point around the drum, resulting in very straight helical tracks on tape.

From the start, broadcast digital video recorders have recorded audio as digital data somewhere on the helical tracks. Although some digital formats still retain a low quality longitudinal cue track, this development has resulted in very high quality audio recording and the removal of much of the mechadeck hardware required for analogue longitudinal audio recording. Some digital formats have even removed the need for a conventional longitudinal control track, transferring all the servo lockup to the helical tracks. These formats have only one longitudinal head on the mechadeck, the full erase head.

Digital video recorder channel coding

Analogue video recorders use FM as a method of coding the video signal before recording it to tape, and decoding it on playback, to overcome the problems of recording to magnetic tape. In general terms this technique of coding and decoding is called channel coding; the tape is the channel. Digital video recorders cannot use FM: it is both inappropriate and impossible considering the available bandwidth on tape and the required recording bandwidth. Digital video recorders instead use a combination of Partial Response class 4 (PR4 or PRIV) and Viterbi detection as a channel coding scheme.

Popular digital video tape formats

D1 (1987)

Developed by Sony. D1 is a professional digital 4 head component helical scan format. It records 8 bit CCIR 601 video data with no compression. It uses cassettes containing 19mm tape. Helical tracks are 40um wide and 5.4 degrees from horizontal. D1 records 4 audio channels on the helical tracks, and one on a longitudinal track. It also records LTC and VITC. D1 is expensive, both for machines and for tape cassettes, but is used in post production where quality is the prime concern.

D2 (1989)

Developed by Sony. D2 is a professional digital 4 head composite helical scan format. It records a digitised PAL or NTSC (depending on the machine version) composite video signal with no compression. It uses cassettes containing 19mm tape. D2 records 4 audio tracks on the helical tracks, and one on a longitudinal track.
It also records LTC and VITC.

Digital Betacam (1993)

Developed by Sony. Digital Betacam is a professional digital 4 head component helical scan format. It records 10 bit CCIR 601 video data with DCT based compression at just over 2:1. It uses cassettes containing ½" tape. Helical tracks are 24um wide and 5 degrees from horizontal. The drum is 80mm dia., spinning at 4500 rpm (5400 rpm for NTSC). Digital Betacam records 4 audio tracks on the helical tracks, and a low quality cue channel on a longitudinal track. It also records LTC and VITC. Certain machines are capable of playing back Betacam and Betacam SP tapes. Digital Betacam is cheaper than D1 but offers indistinguishable image quality and 10 bit sample recording. It is widely used in post production where quality is the prime concern. However, the compression scheme is closed and proprietary, and broadcasters are now looking to output the digital stream directly from the tape machine.

DV & Mini DV (1995)

A consortium of ten companies agreed and created DV, sometimes called MiniDV. DV is a domestic digital 4 head component helical scan format. It records intraframe 4:1:1 or 4:2:0 video data at a 5:1 compression ratio. It uses cassettes containing ¼" tape. Helical tracks are 10um wide and 9.18 degrees from horizontal. The drum is 21.7mm dia., spinning at 9000 rpm. DV records 2 audio tracks on the helical tracks. Timecode is recorded on the helical tracks as data (not VITC). DV has become the most popular domestic digital video tape format, and is available from a wide range of manufacturers. Camcorders and decks offer direct compressed data outputs via the IEEE 1394 interface, otherwise known as FireWire (Apple) and i.LINK (Sony). Software companies also offer good support for DV, with drivers and plug-ins for DV data input to graphics, editing and rendering software.

DVCPRO (1995)

Developed by Panasonic and based on the DV format. DVCPRO is a professional digital 4 head component helical scan format. It records DV data, but with a wider track on metal particle tape to increase robustness and quality. It uses cassettes containing ¼" tape. Helical tracks are 18um wide and 9.18 degrees from horizontal, with +20.03/-19.97 degree azimuths. The drum is 21.7mm dia., spinning at 9000 rpm.
DVCPRO records 2 audio tracks on the helical tracks and 1 longitudinal cue track. It also records LTC and VITC. DVCPRO is the Panasonic professional DV format, and initially gained wide acceptance due to its low price and compact design.

DVCAM (1996)

Developed by Sony and based on the DV format. DVCAM is a professional digital 4 head component helical scan format. It records DV data. It uses cassettes containing ¼" tape. Helical tracks are 15um wide and 9.18 degrees from horizontal, with +20.03/-19.97 degree azimuths. The drum is 21.7mm dia., spinning at 9000 rpm. DVCAM records 2 audio tracks on the helical tracks and 1 longitudinal cue track. It also records LTC and VITC. DVCAM is the Sony professional DV format. Introduced after DVCPRO, it lagged behind in popularity, but is now beginning to gain widespread support as an industrial format and for low budget television work. Machines like the PD-150 have almost gained 'classic' status.

Betacam SX (1996)

Developed by Sony using the Digital Betacam mechadeck. Betacam SX is a professional digital 4 head component helical scan format. It records 8 bit CCIR 601 video data with MPEG 4:2:2P@ML based compression, using IB frame compression to maintain broadcast quality at 18Mbps and a 10:1 compression ratio. It uses cassettes containing ½" tape. Helical tracks are 22um wide and 5 degrees from horizontal, with a 15.25 degree azimuth. The drum is 80mm dia., spinning at 2250 rpm (2700 rpm for NTSC). Betacam SX records 4 audio tracks on the helical tracks. It also records LTC and VITC. Certain machines are capable of playing back Betacam and Betacam SP tapes.

Sony introduced a hybrid Betacam SX machine combining a conventional tape mechadeck with hard disks. Compressed video and audio material could be transferred between the tape and the disks, allowing linear and non-linear editing in one unit. However, the hybrid machine proved too complex for many users and was not widely adopted. Betacam SX was introduced as a replacement for Betacam SP and has comparable digital quality. It is widely used as a news gathering format. However, although the compressed digital stream is available at the output for direct high speed transfer, the compression scheme was never ratified, the standards authorities preferring 50Mbps data instead. Sony responded with IMX.

Digital S (1996)

Developed by JVC, and otherwise known as D9. Digital S is a professional digital 4 head component helical scan format. It uses 4:2:2 sampling, like MPEG, making it technically better than DV. It uses cassettes containing ½" tape. Helical tracks are 20um wide. Digital S records 4 audio tracks on the helical tracks and 2 longitudinal tracks. It also records LTC and VITC.
HDCAM (1997) Developed by Sony and based on the Digital Betacam mechadeck. HDCAM is a professional digital 4 head component helical scan format. It records high definition video data with mild 3:2 compression. It uses cassettes containing ½” tape. Helical tracks are 22um wide and 5 degrees from horizontal with a 15.25 degree azimuth. Drum is 80mm dia. HDCAM will record 4 audio tracks on the helical tracks and one longitudinal cue track. It will also record LTC and VITC. HDCAM was introduced as an alternative to film, and thus records progressive 24 fps (24P), but can be switched to a number of television based recording methods. HDCAM is expensive and exclusive, but offers very high quality recording.
Machines due for release about the time this book is published will include uncompressed high definition and machines based on the IMX mechadeck.

DVCPRO 50 (1998) Developed by Panasonic as an enhancement of the original DVCPRO format, with 50 Mbps recorded data to comply with the requirements of standards authorities. Machines can now be equipped with an IEEE1394 interface, allowing high speed transfer of 50Mbps DV data.

IMX (2000) Developed by Sony using a new design of mechadeck loosely based on the Digital Betacam mechadeck. IMX is a professional digital 4 head component helical scan format. It records 8 bit CCIR 601 video data with MPEG 4:2:2P@ML I frame only based compression at 50 Mbps. It uses cassettes containing ½” tape. Helical tracks are 22um wide and 5 degrees from horizontal with a 15.25 degree azimuth. Drum is 80mm dia. spinning at 4500rpm (5400rpm for NTSC). IMX will record 8 audio tracks on the helical tracks. It will also record LTC and VITC. Certain machines are capable of playing back Betacam, Betacam SP and Digital Betacam tapes. IMX was introduced to comply with the standards authorities’ requirement for a 50Mbps I frame only MPEG video recorder. Although recording 8 bit samples (a requirement of MPEG), IMX quality is indistinguishable from Digital Betacam. However, unlike Digital Betacam, the compressed stream is available at the output for direct transfer to other machines, computer hard disks, or video servers. A later modification, the E-VTR, allows video and audio material from tape to be packaged and sent directly out on a computer network cable.

Micro MV (2001) Developed by Sony. Micro MV is a new format intended for the domestic market. However it records true MPEG data on a tiny cassette, giving it comparable, if not better, quality than DV. Although technically superior to DV, MicroMV has a lot of work to do to gain any ground on DV and DV based formats like DVCPRO and DVCAM.
Software manufacturers have still to offer the kind of support for MicroMV that DV enjoys.
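The data rates mentioned for these formats translate directly into storage per hour, which matters once material moves onto disks and servers. A quick sketch of our own (video data only; audio, timecode and overhead are ignored):

```python
# Capacity needed per hour of video at the data rates quoted above.
def gigabytes_per_hour(mbps):
    return mbps * 1e6 * 3600 / 8 / 1e9

for name, rate in [("Betacam SX", 18), ("DV/DVCAM/DVCPRO", 25), ("DVCPRO 50/IMX", 50)]:
    print(f"{name:16s} {rate:2d} Mbps -> {gigabytes_per_hour(rate):5.1f} GB/hour")
```

Doubling the data rate from 25 to 50 Mbps doubles the tape or disk consumption, which is exactly the trade-off the standards authorities weighed against quality.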
Part 14 Betacam and varieties

Variations of the original Betacam format have dominated the broadcast industry for the last 20 years. The same basic scheme is now used in analogue and digital video recorders, high definition recorders and data recorders. The first of these broadcast machines recorded to ½” oxide tape encased in a cassette. Two cassette sizes were made available. The smaller was exactly the same size as the domestic Betamax tape, and was suitable for portable devices and short programme content. The larger was about twice the size; it had a longer record time and was more suitable for studio use.

Mechanics All Beta formats are true helical scan. The drum assembly is about 81mm diameter and consists of two halves. The lower half is static and acts simply as a support. The upper half is about 15mm thick and spins horizontally. The whole assembly leans at about 5 degrees, writing tracks that are only about 5 degrees from the tape’s direction. The tape wrap is a little over 180 degrees, with the record/playback heads fitted in pairs on opposite sides of the drum. During recording each head writes one track for 180 degrees. At the end of the track, just before the head leaves contact with the tape, the record signal is switched to the opposite head, which has just begun its 180 degrees of contact with the tape. This head then writes the next track. The tape moves slowly through the machine, so that each track sits next to the last. With the original Betacam format each helical track carries one field of video. Therefore PAL based machines have a drum that spins at 25Hz. Each complete revolution of the drum records or plays back one complete frame. Several other tracks are recorded on the tape. There are four of these and all are longitudinal. These tracks are recorded and played back by two sets of static heads placed in the tape path just before and just after the drum itself. Two tracks run along the top edge and two along the bottom.
The top two tracks are responsible for audio channels 1 and 2. Channel 1 is on the inside (bottom track). This is intentional. If a single channel is recorded it is likely to be channel 1, and it is therefore less likely to be corrupted if the edge of the tape is damaged. The bottom two longitudinal tracks are responsible for control and timecode. The top track is responsible for control, the more important of the two. If the bottom edge of the tape is damaged, timecode will be lost and the machine will switch to the control track to keep timecode counting, until a good timecode signal can be found again.
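The drum geometry above fixes the head-to-tape writing speed: the head tip travels one drum circumference per revolution. A quick sketch using the approximate figures in the text (81 mm drum, 25 Hz for PAL), ignoring the slow linear motion of the tape itself:

```python
import math

# Head-to-tape writing speed for a helical scan drum: circumference
# times revolutions per second.
DRUM_DIAMETER_M = 0.081   # ~81 mm drum assembly
DRUM_SPEED_HZ = 25        # one revolution per frame (PAL)

writing_speed = math.pi * DRUM_DIAMETER_M * DRUM_SPEED_HZ
print(f"writing speed ~ {writing_speed:.2f} m/s")
```

The answer, around 6.4 m/s, is far higher than any practical longitudinal tape speed, which is the whole point of helical scanning.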
The standard Betacam tape path Even though the internal structure of Betacam tape machines may differ from one machine to another, the basic tape path is identical in all machines. It has to be, if one is to be able to take a tape recorded in one machine and play it back in another.

Supply reel and tape cleaner Tape exits from the left hand cassette reel, commonly called the supply reel. Most machines have a tape cleaner. This is a blade, made either from steel or artificial sapphire, which cleans any debris off the tape before the machine attempts to record or play back the tape.

Supply side tension regulator The tape then passes around a tension regulator, often called the supply side tension regulator. This important device measures the tape tension on the whole supply side of the machine, including the drum, and sends a signal back to the supply side reel to either let more tape out, if the tension is too high, or hold the tape back, if the tension is too low. It is very important that the tension around the drum is correct. Too tight and both the tape and drum will wear out quickly. Too loose and head to tape contact will be broken, with a resulting loss in recording and playback quality.

Full erase and control heads Now the tape passes across a static head responsible for erasing the whole tape when a crash record is being performed. This head blasts the tape with a strong alternating magnetic field, deleting anything previously recorded on it. The tape then passes across another static head responsible for recording and playing back the control track, the control head. The exact position of the erase head is not important. As long as it is before the control head, it really does not matter. The position of the control head is exact, and the same on every Betacam machine.

The drum, entrance and exit guides Now the tape runs across an entrance guide, round the drum for at least 180 degrees, and leaves the drum to run across an exit guide.
The lower drum has a small step or ledge milled in its surface, called the rabbet. This rabbet is at the top of the lower drum at the entrance side and slopes down towards the exit side. The tape rests on the rabbet. This, and the slope of the whole drum, causes the helical motion of the drum heads. The entrance and exit guides have flanges that touch the top of the tape, holding the tape down so that it enters and exits the drum at exactly the correct point. The upper drum, spinning anti-clockwise at 25Hz (for PAL machines), draws air between the tape and the drum itself. If the tape tension is
correct this makes a cushion of air between the two, and the heads protrude from the surface of the drum, penetrating this cushion to touch the tape. Figure 85 The basic Betacam tape path
Audio/timecode head stack The tape now passes across another pair of static heads. The first is responsible for erasing the two longitudinal audio tracks and the timecode track. The second is responsible for recording and playing back the audio and timecode tracks. Some Betacam machines have a third static head that is responsible for playing back the audio and timecode signals while in record mode. This so-called confidence mode gives the operator confidence that the audio and timecode have been recorded correctly. The exact position of the audio/timecode stack is critical to ensure proper synchronisation between the video, audio and timecode.

Capstan and pinch wheel The tape now passes between the capstan and pinch wheel. The pinch wheel is a small soft rubber cylinder. The capstan is a precision motor with a shaft about 5mm in diameter sticking out of the top of it. Normally there is a gap between the pinch wheel and the capstan shaft. However during recording or playing back a solenoid pushes the pinch wheel against the capstan shaft, squeezing the tape between the two. As the capstan motor turns it pulls the tape through at a steady speed. The control track signal is passed into the machine’s computer, where it is processed and converted into control signals to adjust the speed of the capstan motor so that the tape is passing round the drum at the correct speed and position.

Take-up side tension regulator The tape then makes its journey back into the cassette and onto the take-up reel. Some machines include a take-up tension regulator, to measure the take-up tension and pass a signal back to the take-up reel motor, to ensure that the tape is reasonably loose but not sloppy.

Other guides The tape path will include a series of other guides along the tape path. Some of these touch the top of the tape, some the bottom and some neither.
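The capstan servo loop described above can be sketched as a simple proportional controller: compare the control track timing against a reference and nudge the capstan speed accordingly. All names, units and the gain below are hypothetical, purely to illustrate the idea:

```python
# Illustrative capstan servo: a proportional speed correction derived
# from the phase error between a reference and the control track
# pulses coming off tape. (Hypothetical names and gain.)
def capstan_correction(reference_phase, measured_phase, gain=0.1):
    """Speed adjustment (arbitrary units) from the phase error."""
    return gain * (reference_phase - measured_phase)

speed = 1.0                                # nominal capstan speed
for measured in [0.30, 0.18, 0.08, 0.02]:  # tape running slightly ahead
    speed += capstan_correction(0.0, measured)
    print(f"measured phase {measured:+.2f} -> capstan speed {speed:.3f}")
```

As the loop runs, the speed is pulled down until the phase error shrinks towards zero, keeping the playback heads centred on the helical tracks.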
Definition of a good tape path The important part of a ½” tape path is the distance between the supply side reel and the capstan. This is where all the heads are, and this is where the tape must be at the correct tension and in the correct position. This length of tape should be as short as possible, and should pass across as few items as possible. Any item like a guide, drum or static head changes the tape’s direction and adds friction. Spinning guides, and the drum itself, are never absolutely central and always add a slight wobble to the tape’s motion. There are therefore opportunities for the tape to stick, to be forced into the wrong position, or for the timing to be altered.
Static heads, the cleaner, the drum, entrance and exit guides, and the supply side tension regulator are vital and cannot be removed. However any other guides should only be added to the design if they are absolutely necessary. A good tape path design will therefore have a short length of tape between the supply side reel and the capstan, and as few extra guides as possible. What happens to the tape after the capstan is not at all critical. The amount of tape, and the number of guides, between the capstan and the take-up reel is not important.

Electronics The basic electronics of a Betacam VTR consist of two halves, the audio/video circuitry and the control circuitry. The audio/video circuitry can be further divided into two halves, the record circuits and the playback circuits, and finally these two halves can be subdivided into audio and video circuitry.

Control circuitry The control circuitry is responsible for taking control signals from the VTR keyboard and from any remote control ports at the back of the machine, and converting them into control signals for the mechanics. Control circuitry is also responsible for recording the control and timecode tracks. The control track has a 25Hz signal recorded to it. This signal is used during playback to ensure that the tape is sitting in the correct place relative to the spinning drum, so that the playback heads are moving directly up the centre of the helical tracks on tape.

Audio/video circuitry If the composite input is used, the video recording circuitry decodes this input into three component signals, Y, (R-Y) and (B-Y). It then combines the two colour difference signals (R-Y) and (B-Y) into one signal using a compressed time division multiplexing (CTDM) technique. The Y and CTDM signals are then emphasised and modulated onto FM carriers. Special horizontal sync signals are added before the signals are sent to the record heads for recording to tape.
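The CTDM step can be illustrated with discrete samples. In the real machine the 2:1 time compression happens in the analogue domain; in this sketch of our own we simply take every other sample of each colour difference signal and place the two compressed halves end to end in one line period:

```python
# Compressed time division multiplexing (CTDM), crudely sketched:
# each colour difference signal is time-compressed 2:1, then the two
# halves share a single line period.
def ctdm_line(r_y, b_y):
    assert len(r_y) == len(b_y)
    compress = lambda samples: samples[::2]   # crude 2:1 time compression
    return compress(r_y) + compress(b_y)      # both fit in one line

line = ctdm_line([1, 2, 3, 4], [5, 6, 7, 8])
print(line)  # [1, 3, 5, 7]
```

The combined line is the same length as either input, which is why (R-Y) and (B-Y) can share one recorded channel alongside Y.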
The video playback circuitry takes the modulated Y and CTDM signals from the tape, checks for tape drop-out and extracts a clock from the horizontal sync signals. The signals are then demodulated and de-emphasised. The clock is used to perform timebase correction on the signals before the CTDM signal is broken down into individual (R-Y) and (B-Y) signals. If needed, the resulting component signals are combined to make a composite output. If not, the Y, (R-Y) and (B-Y) signals are output as component signals. Record audio circuitry takes the two incoming analogue channels and passes them through a Dolby noise reduction system before recording them directly to the two longitudinal tracks at the top edge of the tape.
In playback the two audio signals off tape are passed through the same Dolby noise reduction system and directly out to analogue connectors.

Betacam video record techniques The normal horizontal sync is replaced by a large tri-level pulse. The CTDM colour signal has no horizontal sync pulse, so a large negative going pulse is added so that timebase correction can be performed on the CTDM signal independently. Prior to modulation the signals are emphasised. During playback the signals will be de-emphasised again. This improves the signal to noise ratio of the whole record/playback path. Betacam uses frequency modulation to record the Y and CTDM signals to tape. This is a form of channel coding. The signals themselves would not play back correctly if they were recorded to tape without any modulation. The FM modulated signal has a straight line sloped frequency response that drops to zero at about 15MHz. Again this improves the signal to noise ratio. The Y and CTDM signals are modulated onto their respective FM carriers so that the most positive and negative excursions on these two signals fit between two specific FM carrier frequencies.

Betacam video playback techniques The video playback circuitry is more complex than the record circuitry. Even with all the precise mechanical engineering of a Betacam mechadeck it is impossible to play back a perfectly timed signal. Tape speed fluctuations, drum speed fluctuations, rotary head impact and tape tension fluctuations all serve to alter the exact timing of the video signal as it comes off the tape. Dirt and debris can get between the tape and the rotary heads as the drum is spinning. The tape may also be damaged, old or just bad quality. All these factors can prevent a signal from being recorded to tape, or prevent a good signal on tape from being played back. This is called drop-out. The playback circuitry must correct any playback signal timing fluctuations and somehow hide drop-out.
The timebase corrector An important part of the playback circuitry of any Betacam VTR is the timebase corrector. This piece of electronic circuitry smooths out timing fluctuations in the video playback signals. The timebase corrector works by storing a small amount of the signal, holding it for a short while, and then releasing it smoothly. This is a little like using a bucket to provide a smooth water flow from a fluctuating water source. As with the bucket analogy, a certain amount of the signal must be stored to allow for fluctuations. Hopefully the fluctuations are not so great as to either completely fill or empty the store.
A timebase corrector normally uses a semiconductor memory as its store. Thus the analogue playback signal must be passed through an analogue to digital converter before the store, and through a digital to analogue converter afterwards.
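The bucket analogy maps naturally onto a first-in first-out store. This sketch (the names are illustrative, not Sony's) accepts samples in irregular bursts and releases one per steady output clock:

```python
from collections import deque

# A toy timebase corrector: irregular writes in, steady reads out.
class TimebaseCorrector:
    def __init__(self, depth=8):
        self.store = deque(maxlen=depth)   # the 'bucket'

    def write(self, samples):              # fluctuating input off tape
        self.store.extend(samples)

    def read(self):                        # one sample per output clock
        return self.store.popleft() if self.store else None

tbc = TimebaseCorrector()
tbc.write([10, 11])                        # a fast burst
tbc.write([12])                            # then a trickle
out = [tbc.read() for _ in range(3)]
print(out)  # [10, 11, 12]
```

If bursts outrun the steady reads the store overflows, and if reads outrun writes it runs dry (here `read()` returns `None`): exactly the full-bucket and empty-bucket failure cases mentioned above.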
Part 15 The video disk recorder

History The idea of recording video to a disk has existed for about as long as that of recording to tape. However, whereas tape technology was technically possible, with good recording times and playback quality, disk technology was too crude to allow comparable machines to be built for many years after tape recorders became popular. Ampex designed and built a prototype disk recorder in 1965. It became a commercial product in 1967. Called the HS-100, it used an open hard disk and allowed just 30 seconds of analogue video to be recorded. It was used mainly for instant and slow motion replay. However it was not until the 1990s that disk recorders started to appear that had a real practical use in broadcast. Abekas produced the A-64 component video recorder, which recorded parallel CCIR-601 digital video on two large hard disks. The total storage time was a little under 1 minute, but it was full uncompressed broadcast quality. At the time the Abekas disk recorders were one of only a few methods of manipulating full uncompressed digital video in a non linear fashion, and they became popular in post production and any high quality complex short form editing. The secret of Abekas’s success was their ability in modifying the hard disks available at the time to make them do things that they would not otherwise be able to do. Hard disks generally had integrated controllers mounted on them that acted as an interface between the outside world and the disks themselves. Abekas bypassed this to allow the video data to be recorded directly to the disk platters. Abekas built a business based to a great extent on their ability to modify standard hard disk technology, and successfully sold a variety of hard disk digital video recorders to post production facilities, advertising companies, etc. As hard disk technology improved, it became less necessary to bypass the disk controller.
There was less need for the kind of specialist techniques employed by Abekas. Companies could produce video disk recorders using standard hard disks. Hard disks also became cheaper. It became possible for companies to offset the low bandwidth and speed of standard hard disks by simply designing in more than one hard disk. Systems became available that used an array of disks to both spread the bandwidth and increase the capacity.

Present day Video disk recorders can now be grouped into two areas, although some products are able to cross the grey area between.
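Some simple arithmetic shows why an uncompressed recorder like the A-64 needed exotic disk engineering. Assuming the 601 active picture (720 × 576, 25 fps, two 8 bit samples per pixel; our assumption, not a figure from the text):

```python
# Uncompressed 8 bit 4:2:2 video: sustained rate and per-minute storage.
bytes_per_second = 720 * 576 * 25 * 2
bytes_per_minute = bytes_per_second * 60

print(f"{bytes_per_second / 1e6:.1f} MB/s sustained")  # 20.7 MB/s
print(f"{bytes_per_minute / 1e9:.2f} GB per minute")   # 1.24 GB
```

A sustained 20 MB/s and over a gigabyte per minute were far beyond an off-the-shelf drive of that era, hence the bypassed controllers and, later, the move to disk arrays.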
Transmission servers The first group are the video disk recorders intended specifically for transmission. These machines trade quality for storage capacity. The whole system may require enough storage capacity for a day, or several days, of transmission. It may be required to supply many channels of transmission. However the quality need not be supreme. These servers can employ high compression ratios, with low data rates. Reliability is very important in these servers. They need to work all the time without fail. Therefore this kind of server often employs a high degree of redundancy technology and hot swappable elements, like hard disks, controllers and power supplies. Transmission servers also do not need to be able to perform many ‘tricks’. They are intended to play, and perform some simple real-time switching. Nothing else. So while remote control will need to be accurate and fast, it will not need to be very versatile.

Production servers This kind of video disk recorder has opposite requirements to those of transmission servers. Production servers need only have capacity for the programme that is being worked on – far lower than the requirements of transmission servers. Generally the material held on them is not the master, but the work-in-progress material. Regular backups are normally performed. Absolute reliability is less important than in transmission servers. However production servers must maintain quality. As video is edited, copied from one location to another, and generally fiddled with, there must be no loss in quality. Any loss in quality would accumulate through each edit generation until it became noticeable. Production servers also need to be ‘clever’. Operators will need to be able to perform complex edits, and move, copy, and cut material on the hard disk as though they were using a word processor.
Remote control ports need to be fast, accurate and versatile. This kind of server may also need to be accessed by several users at the same time. Therefore there may need to be more than one remote control port fitted.
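The opposite requirements of the two server classes show up directly in the storage sums. The rates and durations below are illustrative assumptions, not figures from any particular product:

```python
# Illustrative capacity sums for transmission and production servers.
def gigabytes(mbps, hours):
    return mbps * 1e6 * 3600 * hours / 8 / 1e9

# Transmission: a day per channel at a high compression ratio.
print(f"24 h at 8 Mbps : {gigabytes(8, 24):.1f} GB")   # 86.4 GB
# Production: an hour of work-in-progress at low compression.
print(f"1 h at 50 Mbps : {gigabytes(50, 1):.1f} GB")   # 22.5 GB
```

A heavily compressed day of transmission needs only a few times the storage of a single lightly compressed production hour, which is why the first group trades quality for capacity and the second does the reverse.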
RAID technology An important technology for video disk recorders is RAID (redundant array of inexpensive, or independent, drives). RAID consists of an array of disks and a RAID controller. The device (normally a computer) that is accessing the array will see one logical drive. The RAID controller acts as an interface, organising and sending data backwards and forwards between the device and the array.

History In 1988 David Patterson, Garth Gibson and Randy Katz, at the University of California, Berkeley, published a paper entitled “A Case for Redundant Arrays of Inexpensive Disks (RAID)”. This paper became the model for disk array design. It specified five RAID levels, 1 to 5. These levels define how the disks are logically arranged and how the data, and any error correction codes, are spread across them. Since the paper’s publication further levels have been designed. The most important of these are levels 0 and 6, both of which have gained general adoption in the industry. Level 7 was later added by Storage Computer Corporation. Although proprietary, it has gained general acceptance because the company is a major producer of RAID solutions and level 7 offers some real benefits. Other levels have been added more recently. These are all combinations of the existing levels, and have all been added for marketing reasons.

Reasons for RAID Disk arrays have the benefit of increasing capacity. The “inexpensive” part of RAID becomes important. The RAID controller makes it appear that there is one big expensive disk drive where there are actually many smaller, cheaper drives instead. Another important reason for RAID is to increase the performance of the array. The “redundant” part of RAID is not important in this case. Indeed in some forms of RAID there is actually no redundancy at all. Using an array simply makes it look as though one very fast disk has been installed.
Redundancy The basic idea of RAID was originally designed to ensure that extra data is written to the disk array. This so-called redundant data could be used if any of the data had errors in it during read operations. The simplest form of redundancy is to make a complete copy of the data to another set of disks. However it became obvious that a complete copy did not have to be made. Instead some kind of error correction data could be written. These codes generally took up less space than the original data. There are three kinds of error correction codes used: parity codes, dual parity codes and Hamming codes.
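The redundancy idea — extra codes that let lost data be rebuilt — can be shown with the simplest scheme of all, an XOR parity block computed across data blocks. This is a sketch of the principle, not the layout of any specific RAID level:

```python
from functools import reduce
from operator import xor

# A parity block is the bytewise XOR of the data blocks. XOR of the
# parity with all surviving blocks rebuilds any single missing block.
def parity(blocks):
    return [reduce(xor, column) for column in zip(*blocks)]

data = [[0x12, 0x34], [0x56, 0x78], [0x9A, 0xBC]]
p = parity(data)                        # written to the redundant disk

# Disk 1 fails: rebuild its contents from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])  # True
```

Note that the parity block is the size of one data block, not a full copy of everything, which is exactly why error correction codes beat simple mirroring on capacity.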
Parity Parity is a simple one bit code applied to a byte, word, or block of data. There are two kinds of parity, even and odd. With both kinds the number of “1”s in the data is counted. With even parity the parity bit is chosen so that the data plus the parity bit contain an even number of “1”s. With odd parity the logic is reversed. Parity codes are not very powerful. They can only detect 1 bit errors. It is perfectly possible for more than 1 bit to be wrong, and the parity code to still be correct.

Hamming codes Hamming codes (named after their inventor) are multi-bit codes derived from the data that can be used to reconstruct the data if it is read back with errors. Specific data patterns produce specific Hamming codes. Hamming codes are more powerful than parity codes. They can detect and correct 1 bit errors, and detect 2 bit errors.

Dual parity codes Dual parity codes are an enhanced version of simple 1 bit parity codes. With these codes many bytes, words or blocks of data are grouped to give a two dimensional array. Parity codes are generated in two dimensions, for the array columns and rows. Dual parity codes are more powerful than simple parity codes because parity checks can be applied in two dimensions, detecting and correcting a greater number of possible errors. They take up more space than simple parity codes but a lot less space than Hamming codes. Dual parity codes have been incorrectly called Reed Solomon codes. However Reed Solomon codes are not single bit codes but multi-bit codes. They are also generated from a complex polynomial algorithm, giving very powerful error correction capability. Dual parity codes cannot achieve the error correction capability of Reed Solomon codes, but take up far less space.

RAID levels Level 0 (Disk striping) Data is written in blocks, in sequence, to each disk in turn. Not really RAID.
Specifically designed for high performance, with increased bandwidth and performance, and no redundancy. Advantages : High bandwidth and performance. Simple design. Disadvantages : Not a true RAID. No error correction (other than the individual drives’ own internal error correction).

Level 1 (Mirroring) An exact copy of each disk is written to another disk. Sometimes achieved within the computer software for simplicity, but this loads the computer’s resources. Best achieved within the RAID controller instead.
Advantages : Best error protection. Increased read performance. Good for multi-user environments. Disadvantages : Expensive (twice the hard disks).

Level 2 (Bit level disk striping & Hamming code disks) With this level data is striped at bit level across multiple disks. Hamming codes are generated and written to a separate disk or disks. Hamming codes are multi-bit error correction codes. Although more powerful, they take up more space than simple 1 bit parity codes. Therefore Hamming codes need more disk space, making level 2 disk requirements closer to level 1. Level 2 is a dead RAID level. None of the RAID suppliers supports this level. It is said that level 2 is not used because it requires special disks. This argument comes from the fact that standard hard disks have their own internal error correction, and that if you are using Hamming codes the disks themselves need to be non-standard, with no internal error correction. However any RAID error correction is applied before the data is written to disk. The disk’s internal error correction just adds another level of security underneath anything applied by the RAID system. In truth, level 2 is probably not used because the internal error correction provided by present day disks, with their overall level of reliability, is good enough that the Hamming code protection supplied by the RAID system would be more protection than is necessary, considering the extra capacity required. Simple parity codes are generally sufficient. Advantages : Very good error protection. Disadvantages : Dead. High ratio of ECC disks to data disks.

Level 3 (Byte level disk striping & parity disks) RAID level 3 stripes across disks at the byte level rather than at the bit level. 1 bit parity codes are written to a separate disk or disks. Similar to level 2, but parity codes are smaller than Hamming codes and take up less disk space. Advantages : Good error protection. Low ratio of ECC disks to data disks.
Good for small and scattered file read/writes. Disadvantages : Error protection not as powerful as levels 1 and 2. Inefficient handling of large sequential files. Single parity drive is a performance bottleneck.
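The 1 bit parity codes written to level 3's separate parity disk can be computed directly. This sketch follows the even/odd convention described in the parity section above:

```python
# Single-bit parity over a data word: with even parity the bit is
# chosen so that data plus parity contain an even number of 1s;
# odd parity inverts the choice.
def parity_bit(value, even=True):
    ones = bin(value).count("1")
    bit = ones % 2              # 1 when the data has an odd number of 1s
    return bit if even else bit ^ 1

print(parity_bit(0b10110010))   # 4 ones -> even parity bit 0
print(parity_bit(0b10110011))   # 5 ones -> even parity bit 1
```

A single flipped bit changes the recomputed parity and is detected; two flipped bits cancel out, which is why parity only catches 1 bit errors.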
Figure 86 RAID levels. (The figure shows, for levels 0 to 7 and 10, how the RAID controller distributes data blocks, parity codes, dual parity codes and Hamming codes across the disks of the array.)
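The round-robin striping pattern shown in Figure 86 (blocks 1, 5, 9, … on the first disk of four, and so on) reduces to simple modular arithmetic:

```python
# Block-to-disk mapping for striping across a 4 disk array,
# using 1-based block and disk numbers as in the figure.
def disk_for_block(block, n_disks=4):
    return (block - 1) % n_disks + 1

print([disk_for_block(b) for b in range(1, 9)])  # [1, 2, 3, 4, 1, 2, 3, 4]
```

Consecutive blocks land on different drives, so a large sequential transfer keeps all four spindles busy at once — the bandwidth gain that striping exists to provide.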
Part 17 –The special effects machine

Level 4 (Block level disk striping & parity disks)

RAID level 4 is similar to level 3 except that the data is striped across the disks in blocks rather than in bits. Block reads and writes tend to improve overall performance compared with level 3 for large and sequential file read and write operations.

Advantages : Good error protection. Low ratio of ECC disks to data disks. Good for large sequential file read/writes.

Disadvantages : Error protection not as powerful as levels 1 and 2. Inefficient handling of small files. Single parity drive is a performance bottleneck. Seldom used.

Level 5 (Block level & parity disk striping)

This is the most popular RAID level. It is very similar to level 4, except that the parity codes are not written to a separate disk. All the parity codes are striped across the same disks as the data. This improves performance over RAID levels 3 and 4 by removing the bottleneck associated with the separate ECC disk or disks. However, because the data and parity codes are scattered over all the disks, it is difficult to rebuild a new drive if one of the drives in the array fails.

Advantages : Higher performance than level 4. Good error protection. Low ratio of ECC codes to data. Good for large sequential file read/writes.

Disadvantages : Error protection not as powerful as levels 1 and 2. Inefficient handling of small files. Complex controller design. Complex rebuilds.

Level 6 (Block level & dual parity disk striping)

This is a similar scheme to level 5. It uses block level read and write operations, and spreads the parity across the disks rather than writing all the parity codes to a separate disk or disks. However, level 6 processes blocks of data and produces two parity codes, one set for the columns and another for the rows. This increase in the amount of parity code generation greatly increases the complexity of the RAID controller, and decreases the overall performance of the array.
Advantages : Very good error protection. Low ratio of ECC codes to data (but not as low as levels 2-5). Good for large sequential file read/writes.

Disadvantages : Poor performance due to dual parity calculations. Inefficient handling of small files. Very complex controller design.

Level 7 (Asynchronous cached data & parity striping)

RAID level 7 is a proprietary technology from Storage Computer Corporation. It borrows ideas from levels 3 and 4, but incorporates a large memory cache between the disk array and the controller. The
controller uses the cache to read and write data to the disks asynchronously. This means that each disk in the array can operate independently, greatly improving overall performance. This extra workload means that RAID level 7 controllers are complex. Because data spends time in the cache, a power failure carries a greater risk of data loss: if power is lost, the contents of the cache are lost.

Advantages : Very good performance for any file type. Very good error protection. Low ratio of ECC codes to data.

Disadvantages : Proprietary design. Very complex controller design. Expensive. Possible loss of data during power failure.

Level 10 (Striped array with mirroring)

Level 10 is a combination of level 0 and level 1. This is not a true standard and there are different definitions of exactly what level 10 is, some of which are no different from level 0+1. The most consistent definition is that part of the array is a striped set and part is a mirrored set. This combines the advantages of both sets.

Advantages : Simple design. Same error correction capability as level 1. Good performance. Low ratio of ECC codes to data.

Disadvantages : Not efficient. Not a rigid standard. No ECC data.

Level 0+1 (Mirrored array with striping)

Level 0+1 is another non-standard array definition. However, level 0+1 definitions appear to be more consistent than those for level 10. This level has two complete striped sets: it is a level 1 implementation of a level 0 array. It therefore has some of the advantages and some of the disadvantages of each level. Level 0+1 is not efficient but has good error protection, like level 1. It also has good bandwidth, like level 0.

Advantages : Simple design. Same error correction capability as level 1. Good performance. Low ratio of ECC codes to data.

Disadvantages : Not efficient. Not a rigid standard. No ECC data.

Other levels

There are a variety of other levels specified within the storage industry that attempt to combine the advantages of the established levels.
Level 30 is a level 0 array where each stripe is a level 3 array. Level 50 is a level 0 array where each stripe is a level 5 array. Level 53 is a level 5 combined with a level 0. Strictly this level should be called level 50, but that description has already been used.

JBOD

JBOD (just a bunch of disks) is a name given to a group of hard disks that have no particular array pattern. It is a name often given to disk systems in an attempt to give them the same importance as RAID levels.
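The distinction between level 10 (a striped set of mirrored pairs) and level 0+1 (a mirror of a complete striped set) can be sketched in a few lines. The disk counts and function names here are illustrative only, not any controller's real layout:

```python
# Sketch contrasting level 10 (stripe across mirrored pairs) with
# level 0+1 (two complete mirrored striped sets).

def level_10(blocks, pairs):
    """Stripe across mirrored pairs: each pair holds identical blocks."""
    array = [([], []) for _ in range(pairs)]
    for i, b in enumerate(blocks):
        d = i % pairs
        array[d][0].append(b)     # primary disk of the pair
        array[d][1].append(b)     # its mirror
    return array

def level_0_plus_1(blocks, disks):
    """Mirror a whole striped set: build one stripe set, then copy it."""
    stripe_set = [[] for _ in range(disks)]
    for i, b in enumerate(blocks):
        stripe_set[i % disks].append(b)
    return stripe_set, [list(d) for d in stripe_set]

blocks = [f"block{i}" for i in range(8)]
ten = level_10(blocks, pairs=2)
zero_one = level_0_plus_1(blocks, disks=2)
# In level 10 a failed disk is covered by its pair partner alone;
# in level 0+1 a failed disk degrades the whole striped set it belongs to.
assert ten[0][0] == ten[0][1]
assert zero_one[0] == zero_one[1]
```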
Stripe size considerations

Most RAID solutions require that files be split into small pieces called stripes. Each stripe is written to a different disk in the array. Stripe size is important in defining the performance of the array. Striping a file means that parts of the file can be written to and read from multiple disks at the same time, which greatly increases the performance of the array as a whole. However, the task of splitting files into stripes takes time and effort, so it is important that the stripe size is correct. Ideally every file would divide into exactly as many stripes as there are disks in the stripe set. This gives the highest performance of all, with the lowest splitting workload. However, the stripe size is fixed.

Working with small stripes

A RAID system that uses small stripes works well for file systems with many small files and few large files. It is easy to split the files into stripes as they are already small, and most files can still be divided, so taking advantage of the increased performance of striping. Any large files take longer to split and give many stripes that have to be handled, but there are few such files in this kind of file system.

Working with large stripes

A system that uses large stripes works well for file systems with predominantly large files and few small files. The large files split easily into a few stripes that can be stored on the disk array quickly. Any small files may well be smaller than the stripe size and cannot therefore be split. There is no performance advantage of striping for these files, but there are few of them in this file system, so overall performance is still high.

Software v hardware RAID

RAID systems are designed to perform all the RAID processing either as a software solution or in dedicated hardware. In fact both these solutions perform the same function, just in a different place and in a slightly different way.
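The splitting and parity generation described in this part can be sketched as follows. The disk count, stripe size and parity rotation rule are illustrative assumptions in the style of level 5, not any particular controller's layout:

```python
# Sketch of block-level striping with rotating parity (RAID 5 style).
from functools import reduce

def xor_parity(blocks):
    """Parity block = byte-wise XOR of the data blocks in one stripe row."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def stripe_raid5(data: bytes, disks: int, stripe: int):
    """Split data into stripe-sized blocks and lay them out over `disks`
    drives, rotating the parity block one drive per row."""
    blocks = [data[i:i + stripe].ljust(stripe, b"\x00")
              for i in range(0, len(data), stripe)]
    layout = [[] for _ in range(disks)]
    per_row = disks - 1                      # one block per row is parity
    for row in range(0, len(blocks), per_row):
        row_blocks = blocks[row:row + per_row]
        while len(row_blocks) < per_row:     # pad a short final row
            row_blocks.append(b"\x00" * stripe)
        parity_disk = (row // per_row) % disks
        data_iter = iter(row_blocks)
        for d in range(disks):
            layout[d].append(xor_parity(row_blocks) if d == parity_disk
                             else next(data_iter))
    return layout

# A failed drive's blocks are recoverable by XORing the surviving drives:
layout = stripe_raid5(b"broadcast video frame data", disks=4, stripe=8)
lost = 2                                      # pretend disk 2 failed
rebuilt = [xor_parity([layout[d][r] for d in range(4) if d != lost])
           for r in range(len(layout[0]))]
assert rebuilt == layout[lost]
```

The rebuild step at the end illustrates why level 5 rebuilds are complex: every surviving disk must be read in full to reconstruct the replacement drive.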
RAID controller processing

The RAID controller performs three basic operations during write operations. Firstly, it splits the files into stripes. Secondly, it may calculate an error correction code. This may be a simple parity code, a dual parity code or a Hamming code. Thirdly, it arbitrates and controls data write operations to each disk's interface.

Software RAID

Software RAID performs all the file splitting and error correction calculation in the computer's processor, using a small piece of software resident in the computer's memory. This processing robs processor
resource and is somewhat inefficient, but is simple to achieve and relatively simple to modify. The calculation of error correction codes is generally very resource hungry. Software RAID solutions are more popular for RAID levels 0 and 1, where no error correction codes are used.

Hardware RAID

Hardware RAID performs all the controller processing in dedicated hardware. This removes all the workload from the central processor, allowing it to perform other tasks with greater efficiency. The dedicated hardware is often actually a fast processor coupled to some dedicated integrated firmware. Hardware RAID controllers are faster than software RAID controllers, but are more expensive and dedicated to specific RAID levels and stripe sizes.

Realising RAID systems

When RAID was proposed by Randy Katz and others in 1988, the idea was to allow large storage elements to be built from lots of small cheap drives, with redundancy built in to allow for errors and disk failure.

Direct disk connection

The fastest method of writing to and reading from hard disks is to communicate directly with the disk platters. Early video disk recorders used direct access as the only way of achieving the bandwidth to and from the disks for full bandwidth broadcast video. However, this imposes extra loading on the computer that is using the hard disks: all the sector and cylinder allocation has to be done by the computer's central processor. Direct access also imposes a risk from drive retirement. Disk drive manufacturers often improve their products and alter the layout of platters and their density. This may not change the overall disk capacity, and therefore makes little difference to normal computer systems. However, it does affect video disk recorders that rely on direct access to the hard disk platters.

Bus RAID connections

All hard disks are now built with some kind of integrated controller. This handles all the sector and cylinder addressing.
The computer is presented with logical addressing that has nothing to do with the actual sector and cylinder addressing the drive itself will use. This removes the loading from the computer and also removes the risk of disk retirement. However, hard disk integrated controllers add a layer between the computer and the disk platters themselves, and slow data transfer to and from the disks. Hard disk technology had to evolve before hard disks with integrated controllers could be used in video disk recorders.
The popularity of IDE/ATA, SCSI and, later, serial SCSI, fibre and IEEE1394, coupled with the advances in hard disk technology, made it possible and easy to build RAID systems for broadcast quality video.

SCSI

SCSI (small computer systems interface) was originally designed as a method of connecting peripherals to a computer with a very fast data link. Although SCSI has been popular as a method of connecting scanners and a number of other peripherals, the most popular peripheral that uses SCSI is the hard disk. The original version of SCSI allows for 8 nodes on the whole bus. One of these must be the controller; the other 7 nodes can be hard disk drives. Later versions of SCSI allow for 16 devices (1 controller and 15 drives). The length of the SCSI bus is also important. Different versions of SCSI have different maximum bus lengths. Originally the SCSI bus could not be any longer than about 6 metres. Now it is possible to achieve a SCSI bus of about 25 metres, although this imposes other restrictions.

There are about 10 different flavours of SCSI in current use, and a few new ones starting to appear. The table on the next page shows these different types.

Name is the common name given to this type of SCSI. In some cases two names are in fact identical SCSI types.

Standard is the official SCSI standard to which this type refers.

Bus is the size of the SCSI bus, in bits. This is not the number of pins in the connector, or conductors in the cable, although there is a general relationship.

Rate is the data rate for this SCSI type. This is not the bus speed. Some SCSI types use special tricks to multiply the bus speed to increase the actual data throughput on the bus.

Connectors is a guide to the kind of connectors used in the various SCSI types. There is no hard and fast rule to the connector type, and each SCSI equipment manufacturer has their own preference.
However, certain connectors are specifically for external use, and others for internal use.

Cables is a guide to the kind of cables used in the various SCSI types.

Signal is the kind of electrical signal this type of SCSI uses. All SCSI types use a pair of wires on the bus for each bit. The original electrical signal was single ended (S), in which case one of the wires in each pair carried the data itself while the other was connected to ground. Later types used differential connections to increase the possible bus length: each pair carried a positive and a negative version of the signal. The original differential scheme used relatively high voltages for the signals and was therefore called high voltage differential (H). The latest differential type is low voltage differential (L), which allows for longer bus lengths and higher data rates.

Max devices is the maximum number of devices allowed on the SCSI bus, including the controller. In some cases the maximum number of devices allowed will depend on the signal type, and will also have an
effect on the maximum length of the bus. For example Ultra 2 SCSI will allow high voltage or low voltage differential connection, with low voltage differential connections allowing either 2 or 8 devices depending on the bus length.

Max cable is the maximum length of the whole SCSI bus, including the cable itself, any internal connections and any circuit board tracks. Different SCSI types allow for different bus lengths. For instance Ultra SCSI allows for single ended or high voltage differential bus connections. If 3 metres of single ended connection are used, only 4 devices can be connected. If the bus length is halved to 1.5 metres, the number of devices doubles to 8.
Name                 Standard  Bus (bits)  Rate (MB/s)  Connectors  Cables  Signal  Max devices      Max cable (m)
SCSI-1               SCSI-1    8           5            1,2,3,4,5   1,2     S,H     8                S:6 H:25
Narrow SCSI          SCSI-1    8           5            1,2,3,4,5   1,2     S,H     8                S:6 H:25
Fast SCSI            SCSI-2    8           10           2,3,7       1,2,3   S,H     8                S:3 H:25
Fast Narrow SCSI     SCSI-2    8           10           2,3,7       1,2,3   S,H     8                S:3 H:25
Wide SCSI            SCSI-2    16          10           6,11        4       S,H     16               S:6 H:25
Fast & Wide SCSI     SCSI-2    16          20           6,11        4       S,H     16               S:3 H:25
Ultra SCSI           SCSI-3    8           20           2,3,7       1,2,3   S,H     S:4 or 8, H:8    S4:3 S8:1.5 H:25
Narrow Ultra SCSI    SCSI-3    8           20           2,3,7       1,2,3   S,H     S:4 or 8, H:8    S4:3 S8:1.5 H:25
Wide Ultra SCSI      SCSI-3    16          40           6,11        4       S,H     S:4 or 8, H:8    S4:3 S8:1.5 H:25
Ultra 2 SCSI         SCSI-3    8           40           2,3,7       1,2,3   H,L     H:8 L:2 or 8     H:25 L2:25 L8:12
Narrow Ultra 2 SCSI  SCSI-3    8           40           2,3,7       1,2,3   H,L     H:8 L:2 or 8     H:25 L2:25 L8:12
Wide Ultra 2 SCSI    SCSI-3    16          80           6,11        4       H,L     H:16 L:2 or 16   H:25 L2:25 L16:12
Ultra 3 SCSI         SCSI-3    16          160          6,11        4       L       2 or 16          L2:25 L16:12
Ultra 160 SCSI       SCSI-3    16          160          6,11        4       L       2 or 16          L2:25 L16:12
Ultra 160+ SCSI      SCSI-3    16          160          6,11        4       L       2 or 16          L2:25 L16:12
Ultra 320 SCSI       SCSI-3    16          320          6,11        4       L       2 or 16          L2:25 L16:12
Ultra 640 SCSI       SCSI-3    16          640          ?           ?       L       ?                ?
SCSI connectors

1 : 25 pin D25 connector.
2 : 50 pin Centronics connector.
3 : 50 pin IDC connector.
4 : 50 pin D50 connector.
5 : 37 pin D37 connector.
6 : 68 pin HD68 connector.
7 : 50 pin HD50 connector.
8 : 30 pin HDI30 connector (Apple).
9 : 50 pin HPCN50 connector.
10 : 60 pin HDCN60 connector.
11 : 68 pin VHDCI connector (Ultra SCSI 2 & 3).

SCSI cables

1 : 50 conductor Centronics C50 cable.
2 : 50 conductor ribbon cable.
3 : 50 conductor high density D50M cable.
4 : 68 conductor high density D68 cable.

IDE/ATA

Early PC designs placed the hard disk on a card, integrating it with the controller and providing a simple connection through one of the ISA connectors into the motherboard. However, this was awkward because it made the card large, heavy and cumbersome. Western Digital produced a card that provided an interface between the 16 bit ISA bus connector on the motherboard and the drive. Controller electronics were placed on the drive, providing a simple interface without having to communicate directly with the disk platters, just as SCSI does. This was called integrated drive electronics (IDE). Because the PC design this was first used in was called the PC/AT, the adaptor was called the AT adaptor, or ATA.

Several other manufacturers saw the simplicity of the IDE/ATA design for PCs. These computers did not need any of the complexity or performance of SCSI, and IDE/ATA became the de-facto standard for fitting hard disks into PCs. Every PC required a hard disk, some more than one. It became obvious that the ATA controller should be fitted to the PC motherboard, rather than wasting one of the ISA slots. In the early 1990s ATA packet interface (ATAPI) was introduced. This enhancement allowed CD-ROM and tape drives to be integrated into
the same bus connection as the hard disks, rather than connecting them to some other proprietary interface.

Later versions of the IDE/ATA interface allowed direct memory access (DMA) modes and, later, faster DMA modes called Ultra DMA (UDMA). UDMA mode 2 allowed for a data transfer rate of 33MB/s and was often called Ultra DMA-33 or Ultra ATA-33, or simply UDMA-33 or ATA-33. Later improvements to the bus appeared as UDMA-66 (ATA-66) and UDMA-100 (ATA-100). The performance of the whole drive/controller configuration will drop to that of the item with the slowest speed. Therefore it is important to ensure that both the ATA controller and the drives are designed to operate at the correct bus speed.

Most IDE drives can be connected via a standard 40 way ribbon cable. However, any ATA controller and drive faster than UDMA-33 must use a special 80 way cable. This cable is exactly the same overall size as the 40 way cable, and has the same number of signal connections as the 40 way cable. However, every other wire in this 80 way ribbon cable is connected to ground and separates the signal wires, improving performance.

Modern PCs integrate the ATA controller into the motherboard's chipset; Intel's PCI chipsets now integrate the entire ATA controller. All motherboards now include two 40 pin connectors on the motherboard. Each connector provides one IDE/ATA bus, and each bus allows for one master and one slave drive. Thus four IDE drives can be fitted. It should be remembered that ATAPI allows all drives, including CD-ROM and DVD drives, to be connected to these IDE connectors. Most PCs have this connection, with very few drive connections left available to build a RAID from.

IDE/ATA RAID

Manufacturers have now produced plug-in cards that have multiple ATA connections and an interface controller. These cards allow small RAID systems to be built into the PC. Some of these cards rely on software to perform the RAID control.
These are little more than the standard ATA controllers found integrated on motherboards. They generally only have two 40 way connectors, allowing four drives to be fitted. These RAID solutions are somewhat restricted, and slow. Other cards offer hardware RAID. Free from the constraints of the normal two connector scheme, these cards often include four or more 40 way connectors, allowing more drives to be connected. They include coprocessors and memory to perform proper interfacing and control. Some motherboards include more ATA connectors than the normal two. These are specifically designed to allow small RAID systems to be added to the PC without the need for any plug-in card. Just as with the plug-in solutions, motherboard integrated solutions can offer either software or hardware based RAID.
Hardware based plug-in IDE RAID cards and motherboard integrated IDE RAID controllers tend to use hardware based RAID levels 0, 1 and 5. These are by far the most popular RAID levels for PC RAID designs.

Serial SCSI and Fibre

The argument for serial SCSI

To an engineer it may appear that a parallel interface should be faster than a serial one. After all, if you can send data down a parallel bus 8, 10, 16, 20, 32, or even 64 bits at a time, as one big chunk each clock cycle, this must surely be faster than sending it one bit at a time. Surely, for instance, a serial bus would have to be clocked 16 times faster to achieve the same data rate as a 16 bit parallel bus connection. However, various transmission effects conspire to ensure that serial bus connections in fact offer greater performance than most parallel connections.

As the data rate increases there is an increase in cross-talk from one conductor in a parallel bus to another: each data bit becomes more corrupted as the data rate is stepped up. As the cable length is increased there is an increase in bit slippage. This is where the data in one conductor gets to its destination before that in another, simply because of the data pattern in each wire and its corresponding delay. All the bits from one word of data arrive at the far end of the bus at slightly different times and become more difficult to read. And with many pins in a parallel connector there is a greater chance that any one pin will not make proper contact, which may invalidate transmitted data.

Some parallel SCSI connections are still the fastest connection method. Ultra 320 and the upcoming Ultra 640 versions of SCSI are still parallel connections. However, most SCSI installations are based around 10, 20 or 40MB/s buses, where serial connections are better.

Which flavour?
With the introduction of SCSI-3 the whole structure of the format was altered, giving it more of a layered and modular structure, with each module communicating with others in the structure. Any one implementation need not use all the modules, just enough to ensure messages and data are properly transferred. Although it appears complex, the new approach allowed elements of the interface to be altered while still maintaining the overall format. An important concept for SCSI-3 is that the physical elements are now separated from the command definitions. A range of different physical modules exist, for traditional parallel connections as well as some serial connections and network connections.
The important serial connections are Serial Storage Architecture (SSA), Fibre Channel (FC) and IEEE1394. IEEE1394 is more of a multimedia interface; although popular as a media interconnection, it is not popular in RAID design. There is plenty of discussion on the relative merits of SSA and FC, with the supporters of each backing their own preference and decrying the other. It appears that FC has more universal backing irrespective of any technical merits (although it seems FC is also technically better).

Fibre Channel

The main advantages of using FC in a RAID environment are :-

1 : FC is a network protocol; SCSI and ATA/IDE are not. This allows drives to be addressed just like anything else on a computer network.
2 : Different connection topologies are possible: point to point (the nearest to parallel SCSI and ATA/IDE), arbitrated loop and fabric.
3 : Huge connection distances compared to parallel SCSI or ATA/IDE.
4 : Each computer can access a huge number of disks, not the 2 per bus of ATA/IDE or 16 of SCSI.
5 : Each disk can be accessed by a huge number of computers. This is not possible with either ATA/IDE or SCSI, and allows easy file and directory sharing.
6 : Easy connection.
7 : Hot swappable connection.
8 : Fast. As already discussed there are some versions of SCSI that are faster, but these are exotic and not universally supported at present. FC is as fast as or faster than most common SCSI types and every form of ATA/IDE.
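Whether a given bus is "fast" enough for broadcast video can be estimated from simple frame arithmetic. A minimal sketch, assuming uncompressed 625-line 8-bit 4:2:2 active picture figures (as per ITU-R BT.601) and the round-number bus rates quoted in the SCSI and ATA sections:

```python
# Rough data-rate arithmetic for uncompressed 625/50 video, 8-bit 4:2:2.
active_width, active_height = 720, 576   # active pixels per frame
frames_per_second = 25
bytes_per_pixel = 2                      # 4:2:2 averages 2 bytes/pixel (Y plus alternating Cb/Cr)

video_rate = active_width * active_height * bytes_per_pixel * frames_per_second
print(f"uncompressed video: {video_rate / 1e6:.1f} MB/s")   # about 20.7 MB/s

# Compare with some of the bus rates quoted earlier (MB/s):
for name, rate in [("Ultra DMA-33", 33), ("Ultra SCSI", 20), ("Wide Ultra SCSI", 40)]:
    ok = rate > video_rate / 1e6
    print(f"{name:>16}: {'sustains' if ok else 'too slow for'} one stream")
```

The margin matters: Ultra SCSI's nominal 20MB/s falls just short of one uncompressed stream, which is one reason RAID striping across several buses or drives is attractive for video disk recorders.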
Part 16 Television receivers & monitors

The basic principle

A television receiver's task is to turn the electrical video signal that is connected to it back into a moving image. In most cases television receivers include a tuner to accept signals from an aerial, cable or satellite feed. These signals also include audio information, which the television receiver will turn back into a recognisable audio signal. In the domestic arena the television receiver is often referred to simply as a "television".

Monitors are designed for use in professional and broadcast situations. They use the same technology as a television, although the input signal possibilities are normally restricted to those used in professional and broadcast situations. They normally have no tuner and cannot be connected to an aerial, cable or satellite feed. Monitors sometimes have provision for audio, but its quality is not normally very good.

Monitors span a far greater range of video quality than television receivers. At the lowest end of the quality scale are mini-monitors intended for CCTV and surveillance. Compact design, robustness and price tend to be the defining factors in these monitors. Picture quality is not so important and is often poorer than for domestic televisions. Broadcast monitors have very good quality because they are used as a reference in the broadcast station. They are expensive and often need periodic alignment checks to retain their quality. Studio monitors are graded according to their quality. A grade 1 monitor is the best quality: the tube is selected for its definition and colourimetry, and the circuitry is designed with no compromise to picture quality.

Input signals

Analogue inputs

Terrestrial

By far the most popular input signal to a television is the UHF signal. It is called terrestrial because the signal is sent over land as a radio signal. It uses a transmitter mast at the broadcast station and an aerial at the receiver.
The radio frequency carrier holds a composite video signal and its associated audio, in a bandwidth of about 6MHz. Televisions include radio frequency tuners that can tune into one of these terrestrial signals and demodulate the video and audio signals, turning them back into baseband analogue signals.

Composite

Composite video is a baseband signal, and does not include audio. Audio must be input separately. This presents a difficulty in a domestic environment where simple installation is very important. The most
popular method of connecting composite video to a domestic television in Europe is through a Scart connector. The Scart connector is a multi-pin connector providing component, composite and audio connections in one connector. Much European domestic television peripheral equipment, like video tape players, DVD players and video games machines, has Scart connectors fitted, providing a simple way of connecting to domestic television receivers.

Composite is common in broadcast stations and post-production. Generally regarded as a low quality connection for monitors, compared to analogue component and digital connections, composite is easy to connect and still provides a reasonable monitoring image.

Component

Component video connections are normally more difficult to connect because they involve three connectors. Add to this the fact that, like composite, component is a baseband video signal with no audio, and component is not an easy option for domestic use where simplicity is paramount. However, analogue component provides a very high quality connection for domestic purposes. An increasing amount of peripheral equipment, like video tape players, DVD players and video games machines, is being designed to output component signals using the Scart connector, providing the home viewer with a relatively high quality image.

Component is the preferred analogue video connection within the broadcast station, and most studio monitors provide for component analogue input, using three separate, usually BNC, connectors.

Digital inputs

Satellite/cable

Although there is an increasing number of people subscribing to satellite and cable channels, it is still rare to find a television receiver with a built-in satellite decoder. Most receivers use an external decoder and the television takes a decoded signal from the decoder.
This is often a UHF signal, similar to a terrestrial signal, but could be either a baseband composite or component analogue signal.

Digital terrestrial

Like satellite and cable, digital terrestrial is still a rare option as an input for television receivers. Most people subscribing to domestic digital terrestrial broadcast services use an external decoder box, with the television taking a UHF, composite or component analogue signal.
Part 17 Timecode

A short history

Splicing tape

Ever since video was first recorded there has been a need to edit video material. At first this process consisted of little more than removing errors and any material not required in the final recording. This was done by simply cutting the video tape. As technology progressed, efforts were made to perform the same editing tasks that were already in common use in film, i.e. making up a complete program from bits and pieces of video joined together. As with film, this was done by simply cutting the required sections of video and splicing them together. Edits were often badly made, causing picture breakup and rolls at the edit points, and once the edit was made there was no turning back.

Electronic editing

A little later electronic editing was introduced. Rather than cutting the video tape up into pieces, a copy would be made from the original tape onto a new tape. By electronically organising how the various bits of video material were copied from the original tape to the new one, it became possible to edit a complete program together without affecting the original video tape. It soon became necessary to index the tape in some way so that particular edit points could easily be found. In the early 60's Ampex introduced a system called Editek. This system allowed the editor to insert an audio tone into the audio channel of the video tape at the chosen edit point. The recorder and player VTRs would then use the tone to switch at the edit point and perform the edit electronically. Although providing editors with a technical advantage over anything that had gone before, Editek was still slow and not as easy to use as it could be. Furthermore, Editek was not frame accurate.

Frame accuracy

Film uses sprocket holes to mechanically move it through the projector.
By linking the mechanics of the projector to a counter it was therefore easy to get an accurate frame count as the film progressed through the projector. Early efforts were made to do the same thing with video tape by counting capstan rotations. This was however very inaccurate due to slippage. A little later the control track was used. As video uses a control track to lock the player's mechanics to the helical video tracks recorded on tape, a simple counter could be attached to the control track servo system to count frames in much the same way as one would count sprocket holes in film.
However, if the film was damaged sprocket holes could be missed and the overall count would slip. In much the same way, if video tape was stopped and started, and wound backwards and forwards repeatedly, control track pulses could be missed and the count would slip. Also, if the film or video tape was loaded somewhere in the middle, one would have no idea how far from the beginning one was. What was needed, not only for video tape editing but also for film editing, was a method of individually "marking" each frame with a unique number.

Timecode

In the late 60's timecode was introduced. Simply called 'timecode', this coding method would later be called 'longitudinal timecode' when the alternative 'vertical interval timecode' was introduced some ten years later. Timecode provided editors with the system they had been waiting for: a coding system designed to be read both in the forward and reverse directions, at a wide range of tape speeds, with a numbering system related to real time and a unique code for each and every video frame. Computer based editing systems soon became popular, allowing edits to be programmed as a number of timecode related points. The idea of an edit list came about, and it became common to carry an edit list, either in paper form or on disk, with video tapes when moving from one edit suite to another. Future uses of timecode include very sophisticated computer controlled equipment using timecode and related video clips or snapshots for versatile off-line editing suites.

Timecode's basic structure

Timecode is represented as 8 digits split into 4 pairs of 2 digits each, separated by colons, as shown in figure 87. Each digit pair conveys hours, minutes, seconds and frames, reading from the left.

Figure 87 Timecode's basic structure
Timecode gives a 24 hour count. This is considered longer than any single piece of video would last, and the count can also be set to the time of day, allowing the time a video recording was made to be recorded on tape as well.

Both LTC and VITC are conveyed and recorded on tape as a serial data stream. This data stream is a collection of binary bits, 80 bits for LTC and 90 bits for VITC. Groups of these bits define various elements of the timecode.

Timecode address bits (BCD)

Figure 88 Binary coded decimal

There are 26 address bits separated into 8 BCD (binary coded decimal) groups of either 2, 3 or 4 bits each. Each BCD group defines either the tens or units digit of the hours, minutes, seconds or frames count.

User bits (binary groups)

There are 32 user bits separated into 8 groups of 4 bits each. These groups can be used in any way the user sees fit, or can be specified to comply with the 7 and 8 bit standard ISO character sets, or can define another timecode value which can be the same as, or different from, the one defined by the timecode address bits. Thus, by using the user bits as timecode, LTC or VITC can hold 2 unrelated timecode counts.

Sync bits

In LTC there are 16 sync bits placed as one group (word) at the end of each code. They define the end of each code, so that an LTC timecode reader can find the beginning of the next one. They also define the tape direction, because the sync word reads differently in the forward direction than in reverse. In VITC there are 18 sync bits separated into 9 pairs distributed throughout the code.
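The mapping of the eight address digits onto their BCD groups can be sketched as follows (the group widths follow the LTC bit table later in this part; the function name is our own):

```python
# Width in bits of each BCD group. The tens digits need fewer bits:
# frames tens (0-2) and hours tens (0-2) fit in 2 bits; seconds and
# minutes tens (0-5) fit in 3 bits. Total: 26 address bits.
GROUP_WIDTHS = {
    "frames_units": 4, "frames_tens": 2,
    "seconds_units": 4, "seconds_tens": 3,
    "minutes_units": 4, "minutes_tens": 3,
    "hours_units": 4, "hours_tens": 2,
}

def to_bcd_groups(hours, minutes, seconds, frames):
    """Return each tens/units digit as a little-endian (LSB first) list of bits."""
    digits = {
        "hours_tens": hours // 10, "hours_units": hours % 10,
        "minutes_tens": minutes // 10, "minutes_units": minutes % 10,
        "seconds_tens": seconds // 10, "seconds_units": seconds % 10,
        "frames_tens": frames // 10, "frames_units": frames % 10,
    }
    return {name: [(value >> i) & 1 for i in range(GROUP_WIDTHS[name])]
            for name, value in digits.items()}
```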
Flags

There are 6 special single bit flags. They are placed in the bits 'saved' by the 2 and 3 bit BCD timecode address groups, as explained under the timecode address bits above. The flags are defined as follows:

Drop frame flag
Used to identify whether the timecode increments according to the NTSC drop frame counting method, described later in this part.

Colour frame flag
Used to identify whether the timecode is related to composite video material, or to component video material that has been decoded from composite video material. The exact field in the colour frame sequence is found mathematically, on the basis that timecode 00:00:00:01 is the first field of the colour frame sequence.

Phase correction flag (LTC only)
Used to 'switch' the LTC phase if there is an odd number of 1's in the complete code. This makes sure that each code starts low at the beginning of each video frame. (In VITC this flag is used as a field mark flag.)

Binary group flags
Used to define how the user binary group bits are to be used, as shown in the table below.

Binary group flags     Function
  2 1 0
  0 0 0                Character set not specified
  0 0 1                Eight bit character set
  0 1 0                Unassigned
  0 1 1                Unassigned
  1 0 0                Page/Line
  1 0 1                Unassigned
  1 1 0                Unassigned
  1 1 1                Unassigned

The first state, with all bits at '0', specifies that all the user bit groups are undefined and can be used in any way the user sees fit. The second state specifies that the user bit binary groups are taken in pairs, giving either 7 or 8 bit groups that are used to specify an ISO character.
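The table above amounts to a simple lookup, which can be sketched as (helper name is our own):

```python
# Meaning of the three binary group flags, keyed as (flag 2, flag 1, flag 0),
# per the table above. Missing combinations are unassigned.
BINARY_GROUP_FLAG_TABLE = {
    (0, 0, 0): "Character set not specified",
    (0, 0, 1): "Eight bit character set",
    (1, 0, 0): "Page/Line",
}

def decode_binary_group_flags(bgf2, bgf1, bgf0):
    """Return the defined function of the user bit groups, or 'Unassigned'."""
    return BINARY_GROUP_FLAG_TABLE.get((bgf2, bgf1, bgf0), "Unassigned")
```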
Field mark flag (VITC only)

VITC is field sensitive, unlike LTC, which is only frame sensitive. This flag is used to define the field: for field 1 it is '0', for field 2 it is '1'. For NTSC and PAL based video material this means the flag is '0' for odd fields and '1' for even fields. (In LTC this flag is used as a phase correction flag.)

Unassigned timecode flags

There are two unassigned flags in timecode. These have been left for future expansion, should a new technology be devised that requires more of timecode than is presently provided.
Longitudinal timecode

Figure 89 The longitudinal timecode head

LTC (longitudinal timecode) was the first timecode to be proposed and extensively used, during the latter part of the 1960s. It uses a linear, or longitudinal, track running along the edge of the video tape. The actual position on tape varies from one standard to another. C format, for instance, 'borrows' audio track 3 along the bottom edge of the tape for timecode. U-Matic machines have an extra track placed at the bottom end of the helical track for timecode. ½" tape formats like Betacam, Betacam SX, Digital Betacam, IMX and HDCAM place LTC along the bottom edge of the tape, below the control track.

LTC has the advantage that it can be read at high tape speeds, where VITC cannot be read. However, it has the disadvantage that it cannot be read at zero tape speed (stop mode), because the tape is no longer moving past the longitudinal timecode head.

LTC signal structure

LTC consists of 80 bits of data recorded serially on tape, beginning at the same time as line 5 of either the 525 or 625 line sequence is being written to the helical tracks on tape.
(There is an obvious physical displacement between these two points on tape, but this displacement is the same within each tape format, and is therefore not a problem.)
Figure 90 The longitudinal timecode signal (bits 0-79: frames units count, user binary group 1, frames tens count, user binary group 2, seconds units count, user binary group 3, seconds tens count, user binary group 4, minutes units count, user binary group 5, minutes tens count, user binary group 6, hours units count, user binary group 7, hours tens count, user binary group 8, synchronisation word)
Bit(s)    Use (525)                  Use (625)
0-3       Frame units bits 0-3
4-7       User group 1
8-9       Frame tens bits 0-1
10        Drop frame flag            Unassigned (set to 0)
11        Colour frame flag
12-15     User group 2
16-19     Seconds units bits 0-3
20-23     User group 3
24-26     Seconds tens bits 0-2
27        Phase correction flag      Binary group flag 0
28-31     User group 4
32-35     Minutes units bits 0-3
36-39     User group 5
40-42     Minutes tens bits 0-2
43        Binary group flag 0        Binary group flag 1
44-47     User group 6
48-51     Hours units bits 0-3
52-55     User group 7
56-57     Hours tens bits 0-1
58        Binary group flag 1        Binary group flag 2
59        Binary group flag 2        Phase correction flag
60-63     User group 8
64-65     Sync word (set to 0)
66-77     Sync word (set to 1)
78        Sync word (set to 0)
79        Sync word (set to 1)

(Where only one entry is shown it applies to both 525 and 625 line systems.)

Figure 91 LTC bits
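The bit layout above can be exercised with a short sketch that packs a timecode into the 80-bit frame (flags and user bits are left at zero for simplicity; the helper names are our own):

```python
# Start bit and width of each BCD group in the 80-bit LTC frame.
# Bits within a group are LSB first.
LTC_FIELDS = {
    "frames_units": (0, 4), "frames_tens": (8, 2),
    "seconds_units": (16, 4), "seconds_tens": (24, 3),
    "minutes_units": (32, 4), "minutes_tens": (40, 3),
    "hours_units": (48, 4), "hours_tens": (56, 2),
}
SYNC_WORD = [0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1]  # bits 64-79

def pack_ltc(hours, minutes, seconds, frames):
    """Pack a timecode into an 80-element bit list (flags/user bits zeroed)."""
    bits = [0] * 80
    values = {"hours": hours, "minutes": minutes,
              "seconds": seconds, "frames": frames}
    for unit, value in values.items():
        for part, digit in (("units", value % 10), ("tens", value // 10)):
            start, width = LTC_FIELDS[f"{unit}_{part}"]
            for i in range(width):
                bits[start + i] = (digit >> i) & 1
    bits[64:80] = SYNC_WORD
    return bits
```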
LTC is recorded as simple polarised regions on tape according to the bi-phase mark channel coding method, explained in the next section. The bi-phase mark signal must obey certain criteria, which are outlined in Figure 92.

Figure 92 Longitudinal timecode signal detail

The 80 bits are evenly spaced over the whole frame. They are separated into groups of bits responsible for the timecode itself, user binary groups, flags and syncs. The usage of these groups is described above, and the LTC signal structure is shown in Figures 90 and 91.

LTC sync bits

LTC contains 16 sync bits at the end of each code. These bits have a particular sequence, '0011111111111101'. The 12 bits in the middle of the sync bits are all '1's. Because the timecode groups are all BCD, this particular pattern of 12 consecutive '1's cannot occur anywhere else in the LTC code. Thus an LTC reader can determine where the end of the LTC code is, and therefore knows where to begin looking for the start of the next code.

The bi-phase mark signal structure is direction independent (see the next section). In the reverse direction the sync word reads '1011111111111100'. Because the reader finds '10' before the 12 '1's in the middle of the sync bits, and '00' at the end, it knows the tape is running backwards. It therefore knows that the code will occur after the sync, not before it, and that it will be backwards, and that the appropriate adjustments have to be made to read the code correctly.
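The direction sense carried by the sync word, and the polarity independence of bi-phase mark, can be sketched together (the bi-phase mark rule itself is detailed in the next section; the helper names are our own):

```python
FORWARD_SYNC = "0011111111111101"
REVERSE_SYNC = FORWARD_SYNC[::-1]   # what the reader sees with the tape reversed

def biphase_mark_encode(bits, level=0):
    """Bi-phase mark: a transition at every bit boundary, plus a mid-bit
    transition for each '1'. Each bit becomes two half-period levels."""
    cells = []
    for bit in bits:
        level ^= 1                   # boundary transition
        first_half = level
        if bit:
            level ^= 1               # mid-bit transition marks a '1'
        cells.append((first_half, level))
    return cells

def biphase_mark_decode(cells):
    """Only the presence of a mid-bit transition matters, not its polarity."""
    return [1 if a != b else 0 for a, b in cells]

def tape_direction(bit_string):
    """Search a recovered bit string for the sync word and report direction."""
    if FORWARD_SYNC in bit_string:
        return "forward"
    if REVERSE_SYNC in bit_string:
        return "reverse"
    return None
```

Note that decoding an inverted stream of cells returns the same data, which is exactly the polarity independence the text describes.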
Bi-phase mark coding

LTC uses bi-phase mark as a channel coding method, otherwise known as the Manchester code. Bi-phase mark places a transition at every bit boundary, and a transition in the middle of each bit period for each '1' bit. This makes it polarity independent, i.e. anything reading bi-phase mark is only concerned with the transitions, not whether the transitions are going high or low.

Figure 93 Bi-phase mark coding

It is also direction independent: the original data can always be decoded from a bi-phase mark signal even though it may be read backwards. It is also self clocking, i.e. no matter what speed the tape is going, all an LTC reader has to do is look for the regular transitions corresponding to the bit boundaries and, once locked to them, search for any bit periods with a transition in the middle. Bit periods with no mid-bit transition are '0's; those with one are '1's.

Adjusting the LTC head

The LTC head is a static head, as shown in Figure 94.
Head to tape contact

The head itself hangs off a bracket. The head to tape contact can be adjusted by loosening the fixing screws between the bracket and the head and rotating the head about the vertical axis. The head gap must be in direct contact with the tape if it is to record and play back timecode properly.

Head height

The bracket is held on a plate by a large spring underneath the whole assembly. The spring pulls the whole head downwards, so by adjusting a small screw between the bracket and the plate you can adjust the head height.

Figure 94 LTC head adjustments

The head assembly must be at the correct height if each head is to cover its track properly.
Head zenith

The plate is fixed to the base plate by a screw and spring arrangement. There is a small pivot at the back of the head which keeps the plate and base plate apart. Thus, by adjusting a small screw at the front which also separates the plate and base plate, you can adjust the head's zenith, i.e. the amount of lean forwards or backwards.

If the head zenith is incorrect, either the timecode or audio portions of the head will not be in good contact with the tape, and both recording and playback will be bad. Furthermore, as the tape moves across the head it will be forced either upwards or downwards by the head. This may make video tracking difficult, and may force the tape against the tape guides and damage the edge of the tape.

Head azimuth

The plate is also held from the base plate by another screw at the side of the assembly. This screw can be used to adjust the head azimuth, i.e. the sideways lean. Incorrect head azimuth will result in incorrect audio phase and an incorrect relative position between the audio heads and the timecode head.

Head position

The base plate is fixed to the mechadeck with a number of screws. The fixing holes in the base plate are actually slots, so by loosening the screws the head position on the tape path can be adjusted. If the head position is incorrect, the relative timing of the timecode and audio signals compared to the control and video signals will be wrong. Lip sync will be incorrect and timecode may be incorrectly read, resulting in bad edits.
Vertical Interval Timecode

The basis for VITC

A form of VITC was proposed at the same time as LTC. However, machines at the time generally found it difficult to maintain a good video signal at anything other than normal play speed, so VITC offered no advantage over LTC. As VTR technology progressed, video playback heads were designed that could move to follow the helical tracks on tape at other than normal play speed. As designs improved it soon became possible for the video heads to follow the helical tracks at very slow speeds, and even in still mode, while maintaining a steady picture.

Figure 95 LTC and VITC speed comparison

At slow and still speeds LTC becomes unreadable, and editing becomes difficult. Eventually a workable VITC was proposed, about ten years after LTC, to get over this problem. It uses two lines during the field blanking
period (otherwise known as vertical blanking) to store serial data with much the same format and content as LTC.

Because VITC is written into the video signal itself, as part of the helical tracks, it has the advantage over LTC that it can be read at zero tape speed (still mode): even at this speed the flying heads on the scanner are still moving over the tape. Another advantage of VITC over LTC is that it is field accurate, because a complete code is placed in the vertical interval of each field, whereas LTC requires a whole frame to convey one code. Because VITC is included in the video signal itself, it also has the advantage that cabling can be made simpler, with no extra cable required specifically for timecode.

However, at high tape speeds, no matter how good the heads are at following the helical tracks, they eventually lose their position on the helical tracks, and consequently lose the VITC signal. Another disadvantage of VITC is that there is no agreed standard for which vertical blanking interval lines should carry the VITC signal. The proposal was simply put to the industry too late, and other uses had already been found for the vertical interval, teletext and vertical interval test signals being two examples. Therefore VITC can occur on just one line of field 1, anywhere between lines 9 and 22, and on one line of field 2, anywhere between lines 322 and 335.

VITC signal structure

Because VITC is stored in the vertical interval, it conforms to the same basic rules as the video signal itself. In fact VITC conforms to the same criteria as a monochrome video signal, i.e. the same bandwidth limitations, maximum slew rate, maximum and minimum voltage levels, and so on. Peak white level represents a '1' and black level represents a '0'. VITC consists of 90 bits of data recorded serially during the chosen vertical interval line.
The first bit must occur between 10 µs and 11 µs after the leading edge of the line sync pulse. It usually occurs at about 10.5 µs.
Figure 96 VITC signal details (rise and fall times 200 ± 50 ns; a '1' is 80 ± 10 IRE and a '0' is 0 ± 10 IRE, with less than 5% overshoot)

The whole VITC code normally takes up most of the chosen line, and the last bit, bit 89, cannot occur less than 2.1 µs before the leading edge of the next line sync pulse. The VITC signal must also obey certain criteria, which are outlined in Figure 96.

VITC sync bits

VITC has a '10' sequence on bits 0 and 1 and every 10 bits after. These pairs allow the VITC reader to overcome timing jitter when reading the signal.
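This placement of the sync pairs can be sketched as a simple validity check over the 90-bit code (the helper name is our own):

```python
def vitc_sync_ok(bits):
    """Check the '10' synchronisation pair at bits 0-1 and every 10 bits
    after (i.e. at bits 0, 10, 20, ..., 80) of a 90-bit VITC code."""
    if len(bits) != 90:
        return False
    return all(bits[i] == 1 and bits[i + 1] == 0 for i in range(0, 90, 10))
```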
Figure 97 The vertical interval timecode signal (bits 0-89: synchronisation bit pairs every ten bits, with the frames, seconds, minutes and hours counts interleaved with user binary groups 1-8, ending with the CRCC bits)
Bit(s)    Use (625)                  Use (525)
0-1       Sync '10'
2-5       Frame units bits 0-3
6-9       User group 1
10-11     Sync '10'
12-13     Frame tens bits 0-1
14        Unassigned                 Drop frame flag
15        Colour frame flag
16-19     User group 2
20-21     Sync '10'
22-25     Seconds units bits 0-3
26-29     User group 3
30-31     Sync '10'
32-34     Seconds tens bits 0-2
35        Field mark flag            Binary group flag 0
36-39     User group 4
40-41     Sync '10'
42-45     Minutes units bits 0-3
46-49     User group 5
50-51     Sync '10'
52-54     Minutes tens bits 0-2
55        Binary group flag 0        Binary group flag 1
56-59     User group 6
60-61     Sync '10'
62-65     Hours units bits 0-3
66-69     User group 7
70-71     Sync '10'
72-73     Hours tens bits 0-1
74        Binary group flag 1        Binary group flag 2
75        Binary group flag 2        Field mark flag
76-79     User group 8
80-81     Sync '10'
82-89     CRCC

(Where only one entry is shown it applies to both 625 and 525 line systems.)

Figure 98 VITC bits
Drop frame timecode

Drop frame timecode is only applicable to 525 line, NTSC based systems. It arises from a basic problem: NTSC systems do not count an exact number of frames per second, but instead run at a rate of 29.97 frames per second. This means that if timecode were to count at a rate of 30 frames per second continuously, it would eventually count an extra 108 frames per hour, which amounts to about 3.6 seconds. Over a 24 hour period, the maximum possible with timecode, this extra amounts to almost a minute and a half!

Drop frame timecode was devised to allow for this problem by jumping the timecode generator's counter at certain specific times. Frames are therefore dropped from the count. Losing 108 frames from the timecode count is performed by first jumping the timecode generator two frames at the beginning of every minute. Therefore when the timecode generator reaches 09:42:59:29, for instance, it will increment to 09:43:00:02 instead of 09:43:00:00, missing out frames 09:43:00:00 and 09:43:00:01. This effectively loses 120 frames per hour. That is too much, so a second counting rule is added whereby the timecode generator is not jumped at the beginning of every tenth minute, i.e. at minutes 00, 10, 20, 30, 40 and 50. Thus when the timecode generator reaches 09:49:59:29 it will increment to 09:50:00:00 as normal, instead of jumping to 09:50:00:02. This puts back 12 of the 120 frames per hour that the first rule alone would lose, leaving 108 frames lost in total per hour, just the number required!

Which timecode am I using?

The generally accepted standard timecode is LTC. There are a number of reasons why. Firstly, it was the first timecode to be proposed and used extensively, and therefore had a head start over VITC in general acceptance.
Secondly, as explained in the description of VITC, it has not been possible to standardise which line of field 1 or field 2 carries VITC. VITC was proposed later, and other uses had already been found for the vertical interval lines before VITC could 'grab' any particular vertical interval line for its own exclusive use. Thus the VITC standard allows VITC to be put on any one of a number of vertical interval lines, and VITC timecode reader/generators often have to be set up to operate with the particular lines chosen, having first made sure that each is free from use by anything else.

Thirdly, tapes used in an edit suite are often striped with continuous LTC timecode. This then becomes the timing reference for the tape.
As editing takes place using insert edits, the LTC track is not re-recorded: insert edits are made to the video and audio tracks only. Thus LTC became accepted as the timecode that is guaranteed not to change during editing. This third reason is a little difficult to justify with modern VTRs, which are capable of replacing VITC during video insert edits and guaranteeing that the same code is replaced after the edit. However, editors' habits soon tended to regard LTC as the timecode to depend on. Habits account for a lot, to the extent that the Sony DVW-A500P series Digital Betacam machines, which could only play back analogue Betacam SP tapes, were modified so that the LTC track could be re-recorded to analogue tapes even though nothing else could.

Timecode use in video recorders

Most modern professional tape recorders have the capacity to read, generate, record and play back both LTC and VITC. When recording, timecode is used in two distinctive ways. The first is to record the time of day the recording was made. Camcorders and portable machines are often set to record the time of day, as they are often used to record news or sport, and this means the time of the event is also recorded. The second is to provide continuous timecode throughout a tape. Studio machines are often set to record continuous timecode. This means that although an edit may take days or weeks to complete, and is made up from many bits and pieces of video put together, the final master tape will have a continuous, seamless timecode, starting from zero at the beginning of the tape.

Typical VTR timecode controls

There are a number of controls that can be found on a typical modern professional video tape recorder. It is not possible to mention all of them here, or the different names that might be given to each control on a particular machine. However, a few are considered here to give a rough idea of the kind of things to look for.
Rec Run / Free Run switch

This switch allows a VTR operator to select either continuous timecode recording (Rec Run) or time of day recording (Free Run). With the switch set to Rec Run, the machine's internal timecode generator increments only when the machine is recording. With the switch set to Free Run, the timecode generator continues to increment all the time. If the time of day is to be recorded, the timecode generator then needs to be set to the time of day.

VITC On/Off switch

As LTC, rather than VITC, is the industry accepted timecode, it is often possible to switch the internal VITC reader/generator off if it isn't in use.
VITC/AUTO/LTC switch

This switch is often included on a machine. It allows the user either to force the machine to operate with VITC or with LTC, or to let it automatically select whichever timecode signal it is able to find. If both VITC and LTC have been recorded to tape and played back at a variety of speeds, ranging from stop to fast forward at 50 times play speed, the machine's capacity to pick up both LTC and VITC is similar to Figure 95 earlier. With the machine stopped, LTC cannot be read off the tape. At play speed both LTC and VITC can be read. As the machine's speed is increased, eventually VITC becomes unreadable. (If the machine's speed is increased further, eventually it becomes difficult to pick up even LTC, but that situation is beyond this discussion.) Most machines will default to LTC, the preferred industry standard, at any speed where both LTC and VITC are detectable off tape.

Drop Frame On/Off switch

This switch is only found on 525 line (NTSC) machines, to allow the operator to work with either continuous timecode, which ironically would not be related to real time at all, or drop frame timecode, which keeps to time by dropping frames at certain specific points. 625 line (PAL) machines do not include this switch, and a blank space will often be found where one would be fitted.

Real Time to LTC/VITC User Bits switch

As explained earlier, timecode includes groups of bits in amongst the time address bit groups that can be used to store anything the user might wish. This switch allows the user to store the real time (time of day) in the user bits of either LTC or VITC on tape.

Timecode Reset, Advance or Set buttons

Machines generally have one or more buttons which allow the user either to reset the machine's internal timecode generator to 00:00:00:00, or to set it to any predetermined count. If running in Rec Run mode the user might reset to 00:00:00:00 before beginning an edit session.
Alternatively, in Free Run mode the user might preset the timecode generator to some time count, say just one minute in the future, using something like the Advance button, and press Set the moment that time comes up, to 'synchronise' the machine to the time of day.

External User Bits switch

Some machines include a switch to allow the user either to record the user bit code input to the machine, or to use the user bit code set within the internal timecode generator, irrespective of what is happening to the time address bits of the timecode.
VITC line selector

This selector may be a switch or a menu item. It allows the user to select which of the vertical interval lines will be used by VITC.

LTC phase correction switch

A bi-phase mark signal can be read no matter which way round it is (see the description of bi-phase mark coding earlier). However, the LTC specification includes a polarity bit (bit 59 in 625 line, and bit 27 in 525 line systems). This bit ensures that each LTC code begins low. Normally this makes no difference to the code, and many LTC readers do not care if the code has been inverted. However, if two signals are to be edited together and one of the LTC signals has somehow been inverted, there will be an error at the edit point. This switch inverts the incoming LTC timecode signal if there appears to be a problem with the edit.

The future

Most modern machines that can store video can also store timecode. LTC was designed specifically for tape recorders, where the timecode signal is recorded on a longitudinal track somewhere near the edge of the tape. VITC, however, was not designed specifically for tape recorders, but purely for the video signal itself, regardless of where that signal might be stored. As technology progresses, different types of video recorder are starting to appear. The two types that should be considered are hard disk recorders and RAM recorders. Hard disk recorders store video on a hard disk or, more commonly, on a hard disk array. Hard disk recorders generally have the disadvantage that they cannot store anything like the amount of video that a tape recorder can. RAM recorders are even worse in this respect: although very fast and flexible, they generally store only a fraction of the amount of video a tape recorder can. However, both hard disk and RAM recorders have no longitudinal track, and must therefore 'fake' an LTC from VITC. If tape recorders are ever universally replaced by more advanced hard disk or RAM recorders, the difference between LTC and VITC may become more confused.
Alternatively, VITC may become the industry standard instead of LTC, which may eventually be dropped altogether.
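Before leaving timecode, the drop frame counting scheme described earlier can be checked with a short sketch that converts a running frame count (at 29.97 fps) into a drop frame timecode string. The constants 1798 and 17982 are the frame numbers actually counted per minute and per ten minutes; the function name is our own:

```python
def frames_to_drop_frame(frame_number):
    """Convert a real frame count at 29.97 fps to drop frame timecode."""
    frames_per_min = 60 * 30 - 2                 # 1798: two numbers skipped each minute
    frames_per_10min = 10 * frames_per_min + 2   # 17982: minute 0 of each ten is not skipped
    tens, rem = divmod(frame_number, frames_per_10min)
    # 18 numbers are skipped per ten minutes, plus 2 per completed drop minute
    dropped = 18 * tens + (2 * ((rem - 2) // frames_per_min) if rem >= 2 else 0)
    n = frame_number + dropped                   # renumber as if counting a full 30 fps
    frames = n % 30
    seconds = (n // 30) % 60
    minutes = (n // 1800) % 60
    hours = (n // 108000) % 24
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"
```

Note how the count skips straight from ...:00:59:29 to ...:01:00:02, exactly as in the 09:42:59:29 example above, while every tenth minute increments normally.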
Part 20 SDI (serial digital interface)

Parallel digital television

During the late 70s the television industry began to investigate the idea of digitising television signals. At first such attempts were confined to pieces of equipment such as standards converters, where the analogue video signal was input to the unit, converted to a digital signal, standards converted in the digital domain, converted back to an analogue signal, and output from the unit. The conversion to a digital video signal and back again was confined to the unit itself, and thus the methods of conversion tended to vary from one manufacturer to another.

It soon became obvious that by maintaining the video signal in digital form as it passed from one piece of video equipment to another, the quality could be maintained at a higher level than was normal for conventional analogue video signals. A universal standard for digitising analogue video signals would thus be very useful to the video industry.

In 1982 the CCIR met in Geneva. During their proceedings a recommendation was made for digitising analogue component video signals. The recommendation was CCIR 601 and, although only a recommendation, it very quickly became a de facto standard throughout the video industry. CCIR 601 was a very basic document when it was published in 1982, but it did promote thought throughout the video industry, and a few manufacturers started to design equipment that conformed to it. When the CCIR met again in 1986, CCIR 601 had been rewritten and much improved: it had been altered in some places and added to in others. A second recommendation was also drafted, called CCIR 656. Although not a strict and absolute division, CCIR 601 described the digital video signal itself and its conversion from an analogue signal, and CCIR 656 described the physical interface, including the type of connectors and cable.
CCIR 601/656

CCIR 601/656 involves the digitising of component analogue video signals, and is commonly referred to as D1. (A standard involving the digitising of composite video signals is commonly referred to as D2. SDI is not based on composite video signals or D2.) There are two elements of CCIR 601/656 that need to be explained: the quantisation levels and the sample structure.

Quantisation levels

The CCIR decided that component signals would be described as a series of 8 bit binary numbers (words). This gives 256 possible quantisation levels, 0 to 255.

Sony Broadcast & Professional Europe
Figure 99 CCIR-601 digitisation

  Y signal                    Decimal   Hex   Binary
    Not used                    255      FF   11111111
    Peak white                  235      EB   11101011   } 220 quantisation
    Black                        16      10   00010000   } levels
    Not used (syncs
    not digitised)                0      00   00000000

  R-Y / B-Y signals           Decimal   Hex   Binary
    Not used                    255      FF   11111111
    Peak +ve colour level       240      F0   11110000   } 225 quantisation
    'Black' level               128      80   10000000   } levels
    Peak -ve colour level        16      10   00010000
    Not used                      0      00   00000000

In the case of the analogue Y signal, only the brightness part of the signal is digitised, i.e. any sync pulses added to the Y signal are ignored. Black level of the Y signal is set at 16 (10 hexadecimal or 00010000 binary). Peak white level is set at 235 (EB hexadecimal or 11101011 binary). The area between 0 and 15 is not used for digitising the Y signal, and although CCIR 601 actually specifies that the Y signal may occasionally be digitised beyond 235 for ‘super white’ signals, in almost all cases the area between 236 and 255 is not used. The samples resulting from the Y analogue component signal are referred to as Y samples.
In the case of the analogue (R-Y) and (B-Y) signals, ‘black level’ or the zero point is set at 128 (80 hexadecimal or 10000000 binary). The peak positive excursion of each colour difference signal is set at 240 (F0 hexadecimal or 11110000 binary), and the peak negative excursion is set at 16 (10 hexadecimal or 00010000 binary). As with the Y signal, the area between 0 and 15 is not used for digitising the colour difference signals, and the area between 241 and 255 is also not used. The samples resulting from the (R-Y) analogue component colour difference signal are referred to as Cr samples, and the samples resulting from the (B-Y) analogue component colour difference signal are referred to as Cb samples.

Sample (word) structure

Samples are taken from the three analogue component signals at a rate of 13.5 MHz. This frequency was chosen to give the greatest degree of commonality between 625 and 525 line television standards.

Figure 100 CCIR-601 sample structure

  Original 13.5 MHz samples:    Y    Y    Y    Y    Y    Y  ...
                               B-Y  B-Y  B-Y  B-Y  B-Y  B-Y ...
                               R-Y  R-Y  R-Y  R-Y  R-Y  R-Y ...
  Final 27 MHz samples:        Cb Y Cr | Y | Cb Y Cr | Y | ...
                               (co-sited triplet, single Y, repeated)

Each 13.5 MHz ‘position’ therefore has a Y sample, a (B-Y) sample and a (R-Y) sample. The first sample position of each line gives one Cb, one Y and one Cr word. The Cb word is derived from the analogue (B-Y) component signal; likewise the Cr word is derived from the (R-Y) signal. Although these three words occur sequentially in the CCIR 601/656 data stream, it is important to remember that they originate from the same point on the original image. They are therefore referred to as co-sited samples, or as a co-sited triplet.
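As a sketch, the quantisation ranges above can be expressed in code. The function names are illustrative, and the analogue levels are assumed to be normalised (0.0 to 1.0 for Y, -0.5 to +0.5 for each colour difference signal):

```python
def quantise_y(y: float) -> int:
    """Map a normalised analogue luminance value (0.0 = black,
    1.0 = peak white) onto the CCIR 601 8-bit range: 16..235."""
    code = round(16 + y * (235 - 16))
    return max(16, min(235, code))      # 0-15 and 236-255 are not used

def quantise_c(c: float) -> int:
    """Map a normalised analogue colour difference value (-0.5..+0.5,
    0 = 'black') onto the CCIR 601 8-bit range: 16..240 about 128."""
    code = round(128 + c * 2 * (240 - 128))
    return max(16, min(240, code))      # 0-15 and 241-255 are not used
```

Note that both colour excursions span 112 levels (128 - 16 and 240 - 128), so the zero point of 128 sits exactly in the middle of the usable range.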
Figure 101 Component analogue video and CCIR-601 sample comparison
  (the analogue Y, (B-Y) and (R-Y) component signals sampled at the same
  13.5 MHz positions, giving the co-sited triplet / single Y structure
  of Figure 100)

The colour content of the second sample position is ignored, leaving a single Y word. The third sample position is treated like the first, the fourth like the second, and so on, giving a co-sited triplet, single Y, co-sited triplet, single Y, etc. structure. Thus CCIR 601/656 words occur at 27 MHz, with the Y words at 13.5 MHz and the Cr and Cb words each at 6.75 MHz.

CCIR 601/656 syncs

Up to this point the digital data stream does not contain any syncs. As mentioned earlier, the digital Y signal does not contain any sync information, because the sync pulses of the original analogue Y component signal are not digitised. Syncs need to be added somewhere to the CCIR 601/656 27 MHz data stream so that the receiver can lock to the incoming digital signal.
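Before turning to syncs, the 4:2:2 multiplexing just described can be sketched as follows (the function name is illustrative and samples are plain integers):

```python
def multiplex_422(y, cb, cr):
    """Interleave co-sited Y/Cb/Cr samples into the CCIR 601/656
    27 MHz word order: Cb Y Cr Y Cb Y Cr Y ...
    y has one sample per 13.5 MHz position; cb and cr have one
    sample per *pair* of positions (colour kept at odd positions only)."""
    assert len(y) == 2 * len(cb) == 2 * len(cr)
    stream = []
    for i, (b, r) in enumerate(zip(cb, cr)):
        # co-sited triplet (Cb, Y, Cr) followed by the single Y
        stream += [b, y[2 * i], r, y[2 * i + 1]]
    return stream
```

Four Y samples with two Cb and two Cr samples thus become eight words, i.e. twice the Y rate, which is where the 27 MHz word rate comes from.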
The CCIR recommended that a special sync signal be added to the beginning and end of every video line, even during the vertical interval lines.

Figure 102 CCIR-601 syncs
  (the CCIR 601/656 data stream of co-sited triplets and single Y words,
  with a Timing Reference Code — a three word preamble followed by a
  Timing Reference Signal — inserted at each end of every line)

This signal is referred to as a Timing Reference Code (TRC). It consists of four data words. The first three words are a preamble, a particular sequence of data that will not occur anywhere else in the digital video data stream. The fourth word is referred to as a Timing Reference Signal (TRS). This code enables the receiver to ‘find its place’ in the digital video signal. As mentioned before, the analogue Y signal is digitised between 16 and 235 and the colour signals are both digitised between 16 and 240. None of the samples will ever reach 0 or 255 (00 or FF hex.). These values are reserved for the sync preamble words. The first preamble word is 255 (FF hex.), and the second and third preamble words are both 0 (00 hex.).

Timing Reference Signal structure

The TRS consists of 8 bits, like all other CCIR 601/656 words. The 8 bits are specified as shown in the picture. The most significant bit, bit 7, is always a ‘1’. Bit 6 is the F bit. It defines which video field this particular TRS is in: a ‘0’ signifies field 1, and a ‘1’ signifies field 2. Bit 5 is the V bit. It defines whether the TRS is in the active part of the video field or in the vertical blanking interval: a ‘0’ signifies the active portion, and a ‘1’ signifies the vertical blanking interval.
Bit 4 is the H bit. It defines whether the TRS is at the beginning or the end of the line. A ‘0’ signifies the start of active video (SAV), and a ‘1’ the end of active video (EAV).

Figure 103 CCIR-601 timing reference structure

  Timing Reference Signal (the fourth word, after the FF 00 00 preamble):
    bit 7 (M.S.B.)  always 1
    bit 6           F   0: field 1                   1: field 2
    bit 5           V   0: active field portion      1: vertical blanking portion
    bit 4           H   0: start of active video (SAV)   1: end of active video (EAV)
    bits 3-0        P3 P2 P1 P0   protection bits

Bits 3 to 0 are Hamming code protection bits P3 to P0. A different combination of these four bits occurs for each combination of the F, V and H bits. The Hamming distance between each combination means that it is possible to detect and correct one bit errors, and simply detect two bit errors, in any TRS. The allocation of protection bits is shown in the picture.

Figure 104 CCIR-601 TRS protection bits

  F  V  H  |  P3  P2  P1  P0      (the M.S.B. is always 1)
  0  0  0  |  0   0   0   0
  0  0  1  |  1   1   0   1
  0  1  0  |  1   0   1   1
  0  1  1  |  0   1   1   0
  1  0  0  |  0   1   1   1
  1  0  1  |  1   0   1   0
  1  1  0  |  1   1   0   0
  1  1  1  |  0   0   0   1

How a D1 receiver locks to an incoming D1 signal

Imagine a receiver is switched on and a D1 source is connected. The first thing it will do is search for any word that has the value 255, i.e. all 8 bits are ‘1’. When the receiver finds this word it checks that the next two words are 0, i.e. all 8 bits of each word are ‘0’.
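As a sketch, building and checking a TRS word according to Figures 103 and 104 might look like this. The helpers are hypothetical, and the XOR expressions are simply a compact way of reproducing the Figure 104 table:

```python
def trs_byte(f: int, v: int, h: int) -> int:
    """Build the Timing Reference Signal word (the fourth word of the
    TRC).  Bit 7 is always 1; bits 6..4 are F, V, H; bits 3..0 are the
    protection bits (P3 = V^H, P2 = F^H, P1 = F^V, P0 = F^V^H, which
    reproduces the table of Figure 104)."""
    p3, p2, p1, p0 = v ^ h, f ^ h, f ^ v, f ^ v ^ h
    return 0x80 | f << 6 | v << 5 | h << 4 | p3 << 3 | p2 << 2 | p1 << 1 | p0

def trs_valid(word: int) -> bool:
    """Check a received TRS word the way a receiver does: bit 7 must be
    1 and the protection bits must match the F, V and H bits."""
    f, v, h = word >> 6 & 1, word >> 5 & 1, word >> 4 & 1
    return word >> 7 == 1 and word == trs_byte(f, v, h)
```

For example, `trs_byte(0, 0, 1)` gives the EAV word for the active portion of field 1, and a corrupted word such as 9C hex. fails the protection check.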
If these three words check out, the fourth word is placed into a special register which examines each bit individually. Bit 7 must be a ‘1’, and the combination of the F, V and H bits must correspond to the combination of the four protection bits. If any part of this process fails, the receiver disregards these words and starts the search again.

If everything checks out, the receiver looks at the H bit. If it is a ‘0’ the receiver knows it is at the beginning of a video line; if it is a ‘1’, it is at the end of a line. The receiver is now said to be line locked.

The receiver now knows where each TRS will occur, because it knows how many clock cycles occur between each one. It now looks at the F bit of each TRS until this bit changes state, i.e. either changes from a ‘0’ to a ‘1’ or from a ‘1’ to a ‘0’. A change from ‘1’ to ‘0’ signifies the beginning of field 1; a change from ‘0’ to ‘1’ signifies the beginning of field 2. The receiver is now frame (or field) locked, and the locking process is complete. The V bit is not actually required for locking, but is used by some systems to check the actual position of vertical blanking.

The receiver then simply checks that each TRS after this is valid. If there is an error, some systems ignore the complete TRC and check the next one. Other systems are not so ‘clever’ and immediately fall out of lock.

The CCIR 601/656 interface driver

As already shown, CCIR 601/656 signals are transmitted at 27 MHz. This kind of frequency is far too high to transmit over any great distance with ‘normal’ logic circuits like TTL and CMOS drivers.

Figure 105 CCIR601/656 ECL driver
  (a differential transmitter driving a screened twisted pair into a
  differential receiver; system ground and cable screen shown)

ECL (emitter coupled logic) is capable of operating at higher frequencies than TTL or CMOS circuits.
Differential ECL is capable of transmitting over long distances with the proper shielded cable. Thus CCIR 601/656 signals require one differential ECL driver for each bit and a further driver for the clock. Each driver has two output wires,
giving a total of 18 wires for 8 bit CCIR 601/656 video (16 for video and 2 for clock).

The CCIR 601/656 connector

CCIR 656 recommends that a D25 connector be used to connect parallel digital video signals. (Although the CCIR recommended slide locks, most users prefer screw fitting D25 connectors, because they tend to be more secure and easier to fix in place than slide locks.) The pinout for this connector is shown below.

Figure 106 CCIR601-656 D25 parallel connector

  Pin  Function            Pin  Function
  1    Clock +             14   Clock -
  2    System ground       15   System ground
  3    Data bit 7 +        16   Data bit 7 -
  4    Data bit 6 +        17   Data bit 6 -
  5    Data bit 5 +        18   Data bit 5 -
  6    Data bit 4 +        19   Data bit 4 -
  7    Data bit 3 +        20   Data bit 3 -
  8    Data bit 2 +        21   Data bit 2 -
  9    Data bit 1 +        22   Data bit 1 -
  10   Data bit 0 +        23   Data bit 0 -
  11   Spare bit A +       24   Spare bit A -
  12   Spare bit B +       25   Spare bit B -
  13   Chassis ground (shield)

Pins 1 and 14 are allocated to the 27 MHz clock, positive and negative differential ECL respectively. The clock is separated from the data pins by two system ground pins, 2 and 15. There are eight data pin pairs, starting with pins 3 and 16 for data
bit 7, and finishing with pins 10 and 23 for data bit 0, positive and negative differential ECL respectively. Pin 13 is allocated as a chassis ground and can be connected to the connector shell itself.

The increase to 10 bits

Pins 11 and 24, and pins 12 and 25, were originally allocated by the CCIR as two spare differential ECL pairs. It took very little time for the video industry to start using these two extra bits to extend the original 8 bit samples specified by the CCIR to 10 bits. However, it is important to remember that these two bits are used as half and quarter resolution bits, i.e. they represent values below the binary point. Thus Y samples now extend from 16.0 to 235.75 in 880 increments of 0.25 each, and Cr and Cb samples now extend from 16.0 to 240.75 in 900 increments of 0.25 each.
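A sketch of this interpretation, assuming the two extra bits are the least significant bits of the 10-bit word (the function name is illustrative):

```python
def ten_bit_value(word10: int) -> float:
    """Interpret a 10-bit CCIR 601 sample: the top 8 bits are the
    original integer code, and the two extra bits add half and quarter
    steps, i.e. values below the binary point."""
    return (word10 >> 2) + (word10 & 0b11) / 4
```

So the 10-bit word 1110101111 decodes as 235.75, the top of the extended Y range.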
Serial digital television

Parallel D1, based on CCIR 601/656, became widely used by the professional video industry in the latter part of the 80’s. However, there was resistance to using D25 connectors. Compared to the BNC connectors and co-ax cables used for analogue video, these connectors were more expensive and less reliable, and the cable required was expensive and heavy. It was also found that parallel D1 connections could not be made reliably over long distances.

CCIR 656 specified a serial interface, but this was based on 8 bit samples, and the industry had already moved ahead to 10 bit samples; the CCIR 656 serial interface could not be used. In the latter part of the 80’s Sony introduced two devices that helped set an industry de-facto standard for serial D1 digital video: the SBX1601A and the SBX1602A. The SBX1601A is a parallel to serial converter for 10 bit D1 signals, and the SBX1602A is a serial to parallel converter.
Serial digital audio

At the same time as digital video was developing, advances were being made in audio as well. However, because the frequencies of audio are basically lower than those of video, development was more rapid and a number of parallel digital audio standards became popular comparatively quickly. In the professional world a sampling frequency of 48 kHz became popular, for audio CDs a sampling frequency of 44.1 kHz is used, and 32 kHz was also used. Sample widths of 8 bits were used on cheaper, older systems, but in the professional arena 18, 20 or 24 bit samples were becoming popular.

The AES and EBU collaborated to draw up a standard for transmitting audio signals through a serial digital channel. With so many different parallel audio standards already in use, any serial standard had to somehow encompass all the popular professional parallel standards, and have some method of informing the receiver which parallel standard was being transmitted. The standard, known as the AES/EBU/IEC 958 standard, or simply as AES/EBU audio, contains two channels of audio, channels A & B, with a maximum sample size of 24 bits and a maximum sampling frequency of 48 kHz, neatly covering the most demanding of the popular professional parallel standards.

Channel coding

The channel coding method chosen for AES/EBU audio was bi-phase mark, otherwise known as the Manchester 1 code. This is the same channel coding method as is used for longitudinal timecode and for Ethernet in computer networks. As shown in the picture, bi-phase mark places a transition at every bit boundary, and a transition in the middle of each bit period for each ‘1’ bit.

Figure 107 Bi-phase Mark signal structure

Thus bi-phase mark is not only polarity and direction independent, but is also self clocking:
even if there are phase changes during transmission, all an AES/EBU receiver has to do is look for the regular transitions corresponding to the bit boundaries and, once locked to them, search for any bit periods with a transition in the middle. Bit periods with no mid-bit transition are ‘0’s, and those with one are ‘1’s.
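A minimal sketch of bi-phase mark coding, representing the signal as a list of half-bit levels (the function names are illustrative):

```python
def biphase_mark_encode(bits, start_level=0):
    """Encode bits as bi-phase mark: each bit becomes two half-bit
    levels, with a transition at every bit boundary and an extra
    mid-bit transition for every '1'."""
    level, halves = start_level, []
    for bit in bits:
        level ^= 1              # transition at the bit boundary
        halves.append(level)
        if bit:
            level ^= 1          # mid-bit transition marks a '1'
        halves.append(level)
    return halves

def biphase_mark_decode(halves):
    """A bit is '1' exactly when its two half-bit levels differ."""
    return [int(halves[i] != halves[i + 1]) for i in range(0, len(halves), 2)]
```

Decoding compares the two halves of each bit period, so inverting every level (a polarity reversal on the cable) decodes to exactly the same bits.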
Signal structure

The standard organises data into serial blocks. Each block contains 192 frames. Each frame contains two sub-frames, one for channel A and one for channel B. Every sub-frame contains one audio sample. This is shown in the following picture.

Figure 108 AES/EBU digital audio signal structure

Sync bits (4 bits)

These first four bits are used to define the beginning of a sub-frame. They are different from normal data because they violate the rules of the bi-phase mark channel coding system. Normally in bi-phase mark there is always a transition at every clock cycle, i.e. between one bit and the next. A sync, however, drops two of these clock transitions at specific points, as shown in the picture. There are three forms of sync. Form X defines the start of sub-frame A, form Y defines the start of sub-frame B, and form Z defines the start of the block (which is also a sub-frame A). Thus the receiver searches for these ‘illegal’ portions of the signal, and decodes them to find out whether each is a form X, Y or Z sync. From that it is able to determine where it is in the AES/EBU signal structure.
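The 32-bit sub-frame layout described in the following paragraphs can be sketched as below. This is a hypothetical helper, not code from the standard; the four sync slots are left empty because syncs are coding violations rather than ordinary data bits:

```python
def build_subframe(sample_20bit, v=0, u=0, c=0):
    """Lay out one AES/EBU sub-frame as a list of 32 bit slots:
    4 sync slots (None - filled by a coding violation, not data),
    4 auxiliary bits, 20 audio bits LSB first, then the V, U, C and
    P flags.  P is chosen so bits 4..31 have even parity."""
    aux = [0, 0, 0, 0]                                   # unused here
    audio = [sample_20bit >> i & 1 for i in range(20)]   # LSB first
    body = aux + audio + [v, u, c]
    p = sum(body) % 2                                    # even parity
    return [None] * 4 + body + [p]
```

With the auxiliary bits unused and the flags clear, the LSB of the audio sample lands in slot 8, just after the last auxiliary bit, matching the layout in the text.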
Figure 109 AES/EBU audio syncs

Auxiliary bits (4 bits)

These four bits serve two main purposes. They can be used to extend the twenty bits of the audio sample to 24 bits, giving greater resolution to the audio samples. Alternatively they can be used to provide an extra channel of audio: by combining three consecutive groups of auxiliary bits you can make an extra audio channel with only twelve bit resolution at a third of the sampling frequency. Using the auxiliary bits for an extra audio channel in both sub-frames gives a four channel audio transmission system with two high quality channels and two low quality channels.

Audio data (20 bits)

The next twenty bits are the audio sample itself. The LSB is just after the last auxiliary bit and the MSB is just before the V flag bit.

Flags (4 bits)

V flag - Validity - This flag indicates that the audio sample data is error free.
U bit - User - This bit is not defined and can be used for any purpose.
C bit - Channel status - See the section below.
P flag - Parity - Parity for all bits in a sub-frame except the four sync bits.

Channel status bits

The third bit of the flags in each sub-frame is the channel status bit. Thus, with 192 channel A sub-frames in a block, there are 192 channel A status bits. Channel B also has 192 channel status bits, one for each sub-frame throughout the entire block. The channel status bits define the type of audio being transmitted. The table in the picture shows the allocation of the 192 channel status bits. The same table can be used for either channel A or channel B.
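As a sketch, collecting the 192 C bits of one channel into 24 channel status bytes might look like this (the helper and the LSB-first bit order within each byte are assumptions for illustration):

```python
def channel_status_bytes(c_bits):
    """Collect the 192 channel status bits of one channel (one C bit
    per sub-frame, in block order) into 24 bytes, with the first bit
    of the block taken as bit 0 of byte 0."""
    assert len(c_bits) == 192
    return bytes(
        sum(c_bits[byte * 8 + bit] << bit for bit in range(8))
        for byte in range(24)
    )
```

For example, if the very first C bit of the block is ‘1’ (professional use), bit 0 of byte 0 of the result is set.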
Figure 110 AES/EBU channel status bits

Channel status for Digital Betacam

Digital Betacam uses a very specific type of AES/EBU. Starting at the top of the table, Digital Betacam uses the professional form of the channel status block, thus bit 0 of byte 0 is ‘1’. Bits 2, 3 & 4 of byte 0 define whether the analogue signal from which the digital signal was sampled was emphasised, so that de-emphasis can be applied when the signal is converted back to analogue. Digital Betacam uses either no emphasis or CD emphasis; it does not use CCITT emphasis. Bits 6 & 7 of byte 0 define the sampling frequency of the AES/EBU audio signal. This can be 32, 44.1 or 48 kHz. Digital Betacam uses a digital audio sampling frequency of 48 kHz only.
Bits 0, 1, 2 & 3 define how the two channels are related to one another. Two channel mode means the two channels are not related to one another at all and are two independent channels. Single channel mode means channel B is not used at all, i.e. even though the sync and flag bits are as they should be, all the audio sample data bits are ‘0’. Primary/secondary channel mode means that channel A is copied to channel B, to increase the chance that the signal will get through without error. Stereophonic mode means the two channels are related and should not be separated: any processing that is applied to channel A should also be applied to channel B. Channel A is taken as the left channel. Digital Betacam uses two channel mode; if the user chooses to make the two channels a stereo pair, Digital Betacam will still treat them as two independent channels.

The last useful part of the table is bits 0, 1 & 2 of byte 2. These define the maximum sample length. The sample can be defined as 20 bits, with the auxiliary bits simply not used and left as ‘0’. The sample can be up to 24 bits, with the auxiliary bits being used to extend the normal 20 bits to 24 bits. The sample can also be 20 bits but with the auxiliary bits used as the coordination channel. Digital Betacam uses AES/EBU with 20 bit samples and the auxiliary bits not used.

Relationship between the two channels

Both channels in AES/EBU audio must have the same sampling frequency. This is a basic requirement. If one channel is defined as a stereo pair channel then the other channel must also be. Furthermore, if the two channels are a stereo pair, every other aspect of the two channels must be the same, i.e. their channel status bits must be identical. If one channel is defined as a primary/secondary channel, the other channel must also be, just as for stereo. However, in this case the audio data itself must also be identical.
If both channels are in two channel mode, other aspects of the channel status bits may differ. For instance channel A may be emphasised and channel B not. The sample lengths may also differ.

AES/EBU audio bit rate

The final bit rate for AES/EBU audio depends on the original sampling frequency. Taking Digital Betacam’s particular use of AES/EBU audio, the sampling frequency is 48 kHz, there are 32 bits in each sample sub-frame, and there are two channels. Thus the bit rate can be calculated from the following simple equation:

48 000 × 32 × 2 = 3 072 000 bits per second = 3.072 Mbps
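The same calculation in code:

```python
SAMPLE_RATE = 48_000    # Hz - Digital Betacam's AES/EBU sampling frequency
SUBFRAME_BITS = 32      # 4 sync + 4 auxiliary + 20 audio + 4 flag bits
CHANNELS = 2            # sub-frames A and B in every frame

bit_rate = SAMPLE_RATE * SUBFRAME_BITS * CHANNELS
print(bit_rate)         # 3072000 bits per second, i.e. 3.072 Mbps
```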
SDI

SDI (Serial Digital Interface) is a particular implementation of serial digital video based on CCIR 601 and CCIR 656, which also incorporates two channels of AES/EBU digital audio (four audio channels altogether). SDI has become very popular within Sony broadcast, professional and industrial equipment as a method of carrying video and audio from one piece of equipment to another.

As shown before, CCIR 601/656 digitises the whole analogue video signal. This includes not only the active part of the signal, i.e. the picture itself, but the vertical and horizontal blanking intervals as well. This part is always black and the digital data during this part of the signal is always the same. This represents a waste in terms of information density; the blanking intervals would be better used to carry useful information. This is where embedded audio comes in.

Embedded audio

Given the bit rate of AES/EBU audio as used in Digital Betacam, 3.072 Mbps, and the bit rate of serial digital video, 270 Mbps, there is actually enough space during the horizontal blanking interval of a video signal to pack nearly twenty channels of audio. In fact SDI uses only a portion of the available space during the horizontal blanking interval to embed two AES/EBU audio signals, i.e. four audio channels.

Video index

The final addition to the SDI signal as far as Digital Betacam is concerned is video index. This system replaces the vertical interval subcarrier (VISC) and colour frame identification (CFID) used in Betacam SP. Video index uses lines 11 and 324 in 625 line video (PAL). These two lines are within the vertical interval of the video signal. Bit 2 of each colour sample is used, so that video index is still transmitted even in 8 bit digital video.
Part 19 Video compression

The reason for compression is space, or the lack of it. In a transmission system like an aerial transmitter, satellite link or cable television link, there is a constant fight to get as much use out of the link as possible. Television companies want to put as many different television channels into one link as they can.

Traditional analogue signals

In traditional analogue transmission systems there is a severe limit to the number of channels that can be squeezed into each link. A traditional analogue satellite link can take just one channel, and it is much the same for cable. Aerial transmission systems can take a few analogue channels, but each one takes up a lot of the aerial’s capacity.

Compressing analogue signals

There are in fact various compression techniques used to compress analogue signals and save bandwidth. The analogue colour difference signals are compressed from the original (R-Y) and (B-Y) signals to U and V signals before being combined with the luminance (Y) signal to make the final composite signal. This is a form of compression.

The problem of analogue compression

An analogue signal is always susceptible to change and interference by crosstalk, noise and other unwanted additions. In analogue transmission you try to make the analogue signal big enough, or different enough from the noise or crosstalk, to make it easy to separate them at the receiving end. Analogue compression tends to push the original signal down into the noise, making it more difficult to separate later on.

Analogue to digital conversion

Analogue to digital conversion involves converting the television signal (video and audio) from a continuously changing signal to a signal comprising a series of defined numerical values. Converting a television signal into a digital signal does not save any space in the transmission system; in fact digital signals demand greater bandwidth than analogue signals.
As digital signals use an entirely different method of encoding, they are less susceptible to interference from the kind of unwanted additions that plague analogue signals.

Compressing digital signals

Compressing digital signals involves replacing the digital information with a smaller amount of different digital information. The essential point is that the original data and the compressed data are both digital data,
and therefore just as resistant as each other to interference from analogue noise.

Digital errors in transmission

Digital data is very resilient to the kind of interference that affects analogue signals. However, if analogue noise is excessive it can alter digital data as well, and a break in the transmission path can also cause digital data to be corrupted.

Compensating for digital errors

There are techniques for removing errors from transmitted digital data. You can either use error correction techniques to replace the errored data with the original data, or, failing that, use concealment techniques to replace the errored data with data calculated to be similar to what the original should have been.

The advantage of digital compression

Analogue compression tends to reduce the signal’s resolution and force it into the noise, which reduces its quality. With digital compression, the signal can be compressed as much as you like: the compressed data is still digital data, and is still not affected by analogue noise. As mentioned, excessive analogue noise does alter digital data, but the amount of noise required is greater than with analogue signals. Both analogue and digital signals can be corrupted by intermittent breaks in the transmission link; however, there are techniques for correcting or concealing errors in digital data.

Entropy and redundancy

Any signal, data, or multimedia material of any sort may be divided into two basic parts, entropy and redundancy.

Entropy

Entropy is another word for chaos; it is essentially unpredictable. In many systems entropy is something bad, something to be eliminated: it destroys order and introduces uncertainty. However, in multimedia signals entropy is the information we want to keep. It represents the interesting parts of the data or signal.

Redundancy

Redundancy is something that can be dropped or eliminated.
In multimedia signals redundancy represents the parts of a signal that are entirely predictable and repetitious. If any part of a signal or data can be
predicted, then it is unnecessary to include it in the signal or data at all. It is, by definition, redundant.

Entropy and redundancy in video signals

If we assume a full quality digital video signal is the kind of signal specified by CCIR 601 and used by SDI links, the total data rate is 270 Mbps. Thus digital video comprises a proportion of entropy and a proportion of redundancy totalling 270 Mbps. For the simplest possible video signal all of this 270 Mbps is redundant data; for the most complex, all of it is entropy. In practice video signals are never that simple or complex, but it is possible to draw a very simple graph showing the proportion of entropy and redundancy for video signals of all types, from the simplest to the most complex. In reality video signals never become so complex that they consist entirely of entropy. There is always some redundancy somewhere in the signal. It may be spatial redundancy, because each frame contains very little detail, or it may be temporal redundancy, where each frame is similar to the last.

Considering a 2:1 compression ratio

Imagine you wanted to compress the video signal to half its original size, i.e. a 2:1 compression ratio. If it was reasonably simple it would be easy to reduce the signal by the required amount by removing a proportion of the redundancy. In many cases you may not even need to remove all the redundancy.
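The link between redundancy and achievable compression can be demonstrated with any lossless compressor, for example Python’s zlib. The two byte strings here are illustrative stand-ins for a very simple and a very complex signal:

```python
import random
import zlib

# A 'simple' signal: 1 kB of a single repeated value (almost all redundancy).
simple = bytes(1024)

# A 'complex' signal: 1 kB of pseudo-random bytes (almost all entropy).
random.seed(0)
complex_ = bytes(random.randrange(256) for _ in range(1024))

print(len(zlib.compress(simple)))    # a few dozen bytes at most
print(len(zlib.compress(complex_)))  # close to, or even above, 1024 bytes
```

The redundant signal shrinks dramatically; the entropy-dominated one barely shrinks at all, which is why a fixed compression ratio can only be guaranteed by throwing entropy away.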
    Part 21 –Video compression However it the video signals became particularly complex, the amount of entropy may exceed half of the original signal. In this case you would have to throw away some of the entropy to reduce the signal by the amount required. The key point is that there is no loss except for the most complex video signal. In many cases the video signal can be compressed and decompressed with no loss of details at all. Considering a 5:1 compression ratio Now imaging you want to compress the same video signals by a 5:1 compression ratio, i.e. to a fifth of its originals capacity. In this case the video signal needs to be that much simpler for there to be no loss in compression. For anything but the most simple video signals you would need to throw away some entropy in order to achieve the required compression ratio. The purpose of any compression scheme The sole purpose of any compression scheme is to separate entropy from redundancy, keep the entropy part and throw away the redundancy part. If the compression scheme is poor it will not be able to find all the redundancy in the video signal and will not be able to compress the signal enough without needlessly throwing away entropy. A good compression scheme will be able to use a selection of tools and techniques to find enough redundancy so that it is able to achieve the required compression ratio without having to throw away any entropy. Lossless and lossy compression There are two basic types of compression systems used, lossless compression and lossy compression. Both have advantages and disadvantages, and are appropriate to the kind of data or signals they are compressing. Lossless compression As the name suggests, lossless compression is a scheme where there is no loss in the compression / decompression path. After decompression you end up with exactly the same data or information as you started with before you compressed. Lossless compression is vital for computer data compression. 
If you are compressing an executable or a peripheral driver file, you cannot accept any loss during compression at all. If just one bit is wrong after decompression, the compression system has failed. Lossless compression will remove as much redundancy as possible, but will not remove any of the entropy. The disadvantage of lossless compression is that you cannot specify a compression ratio. It will depend on the kind of information that needs to
be compressed. A simple 1 Mb file, like a bitmap image of a snowy scene where most of the image is white, will compress much more than a complex 1 Mb file, like an executable.

Examples of lossless compression schemes

A perfect example of a lossless compression scheme is PKZip or WinZip. These programs are designed to compress computer files by as much as possible without loss.

Lossy compression

The advantage of lossy compression is that you may specify the compression ratio. This is vital in multimedia transmission systems, satellite and cable television links, digital video and audio tape, and disk storage systems.

Examples of lossy compression schemes

The most popular lossy compression schemes for video signals are MPEG and DV. For audio they are MP-3 and ATRAC. There are others. For video tape there are Digital Betacam, Betacam SX, IMX, and derivatives of DV like DVCAM and DVC Pro. For computers there are higher compression ratio schemes such as those carried in .AVI files and Real Motion. There are other compression schemes intended for still images: JPEG is lossy, while formats such as GIF and TIFF are normally lossless.

Inter-frame and intra-frame

For video there are two basic methods of compression, inter-frame and intra-frame compression.

Inter-frame compression

Inter-frame compression looks at the difference between frames. It is not actually a compression scheme at all, but a way of processing the video signal before compression takes place in order to achieve more efficient compression. There are three types of inter-frame picture: the P (predicted) frame, the B (bi-directional) frame and the R (reverse) frame.

The P frame

The P frame is a frame derived from a comparison between the frame in question and the previous frame. In most cases the difference between a frame and the previous frame is small.

The B frame

The B frame is a frame derived from a comparison between the frame in question and the average of the previous frame and the following frame.
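The behaviour of a lossless compressor is easy to demonstrate with a few lines of Python, using the standard zlib library (the same DEFLATE family of algorithm that PKZip and WinZip use). This is only an illustrative sketch: a megabyte of identical bytes stands in for the "snowy scene", and random bytes stand in for the "executable".

```python
import os
import zlib

# A "simple" file: highly redundant, like a bitmap of a snowy scene.
simple = b"\xff" * 1_000_000

# A "complex" file: almost no redundancy, like an executable or noise.
complex_data = os.urandom(1_000_000)

simple_packed = zlib.compress(simple, level=9)
complex_packed = zlib.compress(complex_data, level=9)

# Lossless: decompression restores every bit exactly, for both files.
assert zlib.decompress(simple_packed) == simple
assert zlib.decompress(complex_packed) == complex_data

# The ratio cannot be chosen in advance - it depends on the redundancy.
print(len(simple) / len(simple_packed))         # very large for redundant data
print(len(complex_data) / len(complex_packed))  # slightly below 1 - no gain
```

Note that the redundant file shrinks dramatically while the random file actually grows slightly, which is exactly why a lossless scheme cannot promise a fixed compression ratio.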
The R frame

The R frame is a frame derived from a comparison between the frame in question and the next frame.

Intra-frame compression

Intra-frame compression is a scheme for reducing the amount of data in one video frame. The data is reorganised in order to separate redundancy from entropy, and the redundancy is discarded.

The toolbox

Intra-frame compression uses a series of tools. This set of tools is sometimes referred to as a toolbox. The tools that are generally used are DCT (discrete cosine transform), quantisation, zig-zag scanning, run length coding, variable length coding and data buffering.

What is DCT?

DCT (discrete cosine transform) is a method of describing data as a discrete weighted set of cosines. Put simply, the original data is described not as its original samples or data words, but as how these samples change.

The church organ

In understanding DCT a good place to start is to look at the church organ. Organ pipes produce a nice clean tone. The note they produce is pretty close to a sine wave. Organists add the sound from other pipes to the note they are actually playing to produce different tones. Organists call these stops.

Turning the church organ upside down

It should be possible to analyse any of the many different sounds and tones coming from a church organ and work out exactly which stops the organist had pulled out.

The scientific equivalent

In fact it is possible to do this with any tone. You can break down any note from any musical instrument into a series of sine waves. The lowest frequency sine is the note itself, often called the fundamental. The rest are all higher frequencies, normally multiples of the fundamental, generally called the harmonics. Taking things one stage further, it is actually possible to break any repetitive or periodic signal into a series of sine waves or cosine waves, or, to be more exact, a series of sine and cosine waves with associated phases.
This process is commonly referred to as a Fourier analysis or a Fourier transform.
The Fourier transform

A Fourier transform is a method of describing any periodic signal as a series of sines and cosines. When you do a Fourier transform of a pure sine wave you get a single spike at the frequency of the wave. A violin creates something approaching a sawtooth waveform. A sawtooth is made up from the fundamental and all the harmonics. The amplitudes of the harmonics fall as the frequency increases, in inverse proportion to the harmonic number. A classic rich organ sound approaches a square wave. This wave is made up from only the odd harmonics of the fundamental. The amplitude of these harmonics falls with increased frequency in the same way as it does for the sawtooth wave.
In both the pure sawtooth and square waveforms there are an infinite number of harmonics. In practice this is impossible: the high frequency harmonics will always be lost, no matter how small the loss is. Thus it is also impossible to create an absolutely perfect sawtooth or square wave.

The mathematics of Fourier transforms

The mathematical formula used for Fourier transforms looks hideous but is actually quite simple. The basic Fourier transform expression is:

F(s) = \int_{-\infty}^{\infty} f(t) \, e^{-j 2\pi s t} \, dt

where F(s) is the Fourier transform and f(t) is the original signal. The "exp" is a neat mathematical shorthand for a particular combination of a sine and a cosine, which goes like this:

e^{-j 2\pi s t} = \cos(2\pi s t) - j \sin(2\pi s t)

The reason for this is that although a signal is made up from a number of pure sine waves, the amplitude and phase of each one may be different. The expression above is a way of describing a sine wave at any amplitude and phase by describing it in terms of a sine and a cosine in the real and imaginary planes.
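Both of these statements are easy to check numerically. The short sketch below (NumPy is assumed to be available) confirms the exp identity and shows that the transform of a pure sine wave really is a single spike at the frequency of the wave:

```python
import numpy as np

# Check the identity exp(-j*theta) = cos(theta) - j*sin(theta).
theta = 0.7
assert np.isclose(np.exp(-1j * theta), np.cos(theta) - 1j * np.sin(theta))

# The (discrete) Fourier transform of a pure sine wave: all the energy
# lands in one frequency bin (plus its mirror-image negative frequency).
N = 256
t = np.arange(N)
signal = np.sin(2 * np.pi * 8 * t / N)     # exactly 8 cycles in the window
spectrum = np.abs(np.fft.fft(signal))
assert spectrum.argmax() in (8, N - 8)     # the single spike
assert spectrum[3] < 1e-6                  # essentially nothing elsewhere
```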
The integral is simply an infinite sum of all the "f(t) exp" terms for every point in time (t) from the beginning of time (-∞) to the end of time (∞).

The pros and cons of Fourier transforms

Fourier transforms work well for continuous periodic signals, and are therefore very useful for things like analysing music, signals from deep space, or vibration analysis on racing cars or aircraft. However they do not work well for digital signals, where there is a series of discrete samples in time. For this we need a discrete version of the Fourier transform.

The Discrete Fourier Transform (DFT)

The DFT is a special form of Fourier transform for periodic signals made up from discrete values, i.e. digital signals. With analogue signals we use normal Fourier transforms, which use integrals. This is because analogue signals are continuous, and any summing analysis like the Fourier transform has to imagine the sum being made up from an infinite number of infinitely small time periods all along the analogue signal we are studying. We are no longer interested in taking infinitely small points in time. The samples now occur at specific points in time related to the sample rate, or period. Therefore the integral in a normal Fourier transform can be replaced by a simpler summing function, '∑'. A few other changes also take place. Rather than talking about a continuous signal we are now looking at discrete samples, so let us replace f(t) with f(k). Likewise the continuous Fourier function, F(s), will be replaced by a discrete one, F(r). The normal integral based Fourier transform changes in the case of the DFT to:

F(r) = \sum_{k=-\infty}^{\infty} f(k) \, e^{-j 2\pi r k}

Taking a set number of samples

We have managed to get rid of the integral and replace it with something more applicable to samples. However we are still stuck with the concept of the signal stretching from the beginning of time to the end of time, i.e. -∞ to ∞.
We can adjust the expression so that we can find the DFT of a set number of samples. To do this we imagine that the DFT repeats the samples we are interested in again and again, and take the sum across just the samples we want. We do not want to decide how many at the moment, so we will give the quantity of samples a letter. N would be good. This means that the DFT changes to:
F(r) = \frac{1}{N} \sum_{k=0}^{N-1} f(k) \, e^{-j 2\pi r k / N}

It is conventional that the first sample is "0" and the last is (N-1). There is no particular reason for this other than it makes the equations a little simpler, but it means that if, for instance, you have 8 samples in your selection, "k" will go from "0" to "7".

The judder problem of DFT

There is a problem with the DFT over a set number of samples. We are imagining the samples repeating again and again. However there is no guarantee that the last sample will be anywhere near the same value as the first sample. This will produce a judder in the signal as it repeats. This judder is energy, and thus has its own harmonics that are nothing to do with the original samples.
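The finite-sum DFT above can be written out directly and checked against a library implementation. This sketch assumes NumPy; note the 1/N scaling in front of the sum, which follows the expression above (library FFTs usually leave it out):

```python
import numpy as np

def dft(samples):
    """F(r) = (1/N) * sum over k of f(k) * exp(-j*2*pi*r*k/N)."""
    N = len(samples)
    return np.array([
        sum(f * np.exp(-2j * np.pi * r * k / N)
            for k, f in enumerate(samples)) / N
        for r in range(N)
    ])

samples = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # N = 8, k runs 0 to 7
F = dft(samples)

# F(0) is the average level of the samples...
assert np.isclose(F[0], sum(samples) / len(samples))
# ...and the result matches NumPy's FFT, up to the 1/N scaling used here.
assert np.allclose(F, np.fft.fft(samples) / len(samples))
```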
The Discrete Cosine Transform (DCT) solution to judder

One neat way of removing the sudden jump between the last sample and the first sample is to take twice as many samples, with half of them being a mirror image.

Mirrored sines and cosines

An interesting thing happens to sines and cosines when they are reflected about zero. The sines cancel out! Put mathematically:

\sin(-x) = -\sin(x) \quad \text{and} \quad \cos(-x) = \cos(x)
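This cancellation can be seen numerically. In the sketch below the samples are extended with a mirror image so that the sequence is even, and the DFT of the result comes out purely real, i.e. built from cosines alone. (For simplicity the mirror here is about a whole sample; the DCT proper mirrors about a half-sample position, but the principle is the same.)

```python
import numpy as np

samples = np.array([5.0, 2.0, 7.0, 1.0, 8.0, 3.0])

# Extend the samples with a mirror image so the sequence is even:
# [s0, s1, ..., s5, s4, ..., s1]
mirrored = np.concatenate([samples, samples[-2:0:-1]])

spectrum = np.fft.fft(mirrored)

# For an even sequence all the sine (imaginary) parts cancel, leaving a
# purely real - i.e. cosine-only - spectrum.
assert np.allclose(spectrum.imag, 0.0)
```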
This can help to reduce the complexity of the DFT. We need only consider the cosine parts of the transform. Thus by taking twice the number of samples and considering only the cosines, the original expression drops to:

F(r) = \sqrt{\frac{2}{N}} \sum_{k=0}^{N-1} f(k) \cos\!\left(\frac{(2k+1) r \pi}{2N}\right)

What does the result of DCT look like?

The original Fourier transform shows the frequencies that make up the original signal. DCT does the same thing. The samples are replaced by numbers that describe the frequencies that make up the original sample data. These numbers are called coefficients. The samples are discrete, so the number of frequencies is also discrete. In fact there are as many DCT coefficients as there are samples. If you take more samples in your analysis, you get more coefficients. The formula also places the lowest frequency coefficient first, in place of the first sample. Remember that we are imagining that the set of samples we are doing a DCT of is repeated again and again, because DCT is based on Fourier transforms, which only work on periodic signals that go on for ever. The odd conclusion from this is that the lowest frequency coefficient is actually the average level of the samples, sometimes called the DC coefficient. It is the second coefficient that is the same as the fundamental frequency we looked at with church organs right at the beginning.

DCT in video

Imagine a video picture made up of individual pixels. The pixels are laid out in rows and columns. As we have seen, DCT operates on a group of samples. It cannot operate on just one sample, because it is analysing how the samples are changing, and what frequency coefficients the samples are made from.
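Before moving to two dimensions, the one dimensional DCT expression above can be coded directly. A minimal sketch: for a completely flat set of samples every AC coefficient comes out as zero and only the DC coefficient survives, exactly as described.

```python
import math

def dct_1d(samples):
    """F(r) = sqrt(2/N) * sum over k of f(k) * cos((2k+1)*r*pi / (2N))"""
    N = len(samples)
    scale = math.sqrt(2.0 / N)
    return [scale * sum(f * math.cos((2 * k + 1) * r * math.pi / (2 * N))
                        for k, f in enumerate(samples))
            for r in range(N)]

samples = [10.0] * 8                 # a completely flat signal
coeffs = dct_1d(samples)

# Only the DC coefficient survives; every AC coefficient is zero.
assert abs(coeffs[0] - 40.0) < 1e-9  # sqrt(2/8) * (8 * 10) = 40
assert all(abs(c) < 1e-9 for c in coeffs[1:])
```

With this normalisation the DC coefficient is proportional to, rather than equal to, the average level of the samples; the 1/√2 'fiddle factor' introduced later tidies this up.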
The same thing applies to DCT as it is used in video. We need to analyse a group of pixels in both the horizontal (x) and vertical (y) directions.

Simple 2 by 2 DCT block

The smallest group of pixels DCT can work with is a 2 by 2 block, so let us look at how this might work. The whole picture is split into 2 by 2 blocks. DCT then removes each block and replaces it with 4 coefficients that describe how the original pixels vary across the block. The top left corner of the block now holds the DC coefficient. The top right pixel is replaced by a number describing how all 4 pixels are changing in the horizontal direction. This number is called the horizontal AC coefficient. The bottom left pixel is replaced by a number describing how all 4 pixels are changing in the vertical direction. The principle is the same as for the top right pixel, but for the vertical direction. This number is called the vertical AC coefficient. The bottom right pixel is replaced by a number describing how the 4 pixels are changing at a 45 degree angle from top left to bottom right. This is called the diagonal AC coefficient. Thus the 4 DCT coefficients describe the original pixels in 2 dimensions, in much the same way as a simple DCT operates on simple 1 dimensional samples. Nothing is lost in making this transform. Describing the pixels in terms of their frequency coefficients, i.e. how they vary, is just as accurate a method as describing them as individual pixels.
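Written out by hand, a 2 by 2 DCT is nothing more than sums and differences of the four pixels (the scale factor shown is one common choice; exact normalisation varies between definitions). The sketch also shows that the original pixels can be recovered exactly, so nothing is lost:

```python
# A minimal 2 by 2 DCT written out by hand.  Block layout:  a b
#                                                           c d
a, b, c, d = 80.0, 40.0, 60.0, 20.0

dc       = (a + b + c + d) / 2   # overall level of the block
horiz_ac = (a - b + c - d) / 2   # left to right variation
vert_ac  = (a + b - c - d) / 2   # top to bottom variation
diag_ac  = (a - b - c + d) / 2   # corner to corner variation

# The transform is exactly reversible - nothing is lost:
a2 = (dc + horiz_ac + vert_ac + diag_ac) / 2
b2 = (dc - horiz_ac + vert_ac - diag_ac) / 2
c2 = (dc + horiz_ac - vert_ac - diag_ac) / 2
d2 = (dc - horiz_ac - vert_ac + diag_ac) / 2
assert (a2, b2, c2, d2) == (a, b, c, d)
```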
A practical 8 by 8 DCT block

In virtually all practical compression schemes the simple 2 by 2 DCT block is too small. There are benefits to selecting a larger block size. The size most commonly used in practice is an 8 by 8 DCT block, transformed from 64 original pixels. The basic principles are the same. The top left corner of the DCT block is the DC coefficient, just as it was in the simple 2 by 2 block. The 8 coefficients along the top describe how the original 64 pixels vary in the horizontal direction, in the same way as the top right coefficient did in the simple 2 by 2 block. However there are now 8 possible coefficients. Thus the left-most of these coefficients describes the overall low frequency horizontal variation of the 64 pixels. This is the fundamental in the 1 dimensional DCT we looked at before. The next coefficient to the right corresponds to the next highest frequency change, and so on until the top right-most coefficient describes the highest frequency horizontal change in the original 64 pixels. These are the same as the harmonics in the 1 dimensional DCT. The same principle applies to the coefficients down the left side for the vertical direction, and likewise for the coefficients down the diagonal of the block from top left to bottom right for the 45 degree diagonal direction. However, with 64 original pixels there is now the opportunity to provide coefficients that describe how the original 64 pixels are varying at other angles between vertical, 45 degrees and horizontal.
Thus the 8 by 8 group of DCT coefficients now describes a wide range of frequency changes and angles. In fact the DCT block exactly describes the original pixels, but in a different way.

The mathematics of DCT as used for video

The mathematical description of DCT so far has been 1 dimensional. The expression we eventually concluded is:

F(r) = \sqrt{\frac{2}{N}} \sum_{k=0}^{N-1} f(k) \cos\!\left(\frac{(2k+1) r \pi}{2N}\right)
Now we need to replace the one dimensional F(r) part with a two dimensional part, F(u,v). The f(k) will be replaced by a two dimensional f(x,y). The cosine part of the expression now needs to be applied in two dimensions as well, once for the x direction and once for the y direction, to create the u direction and v direction respectively in the final DCT. We therefore have:

F(u,v) = \sqrt{\frac{2}{N}} \sqrt{\frac{2}{M}} \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} f(x,y) \cos\!\left(\frac{(2x+1) u \pi}{2N}\right) \cos\!\left(\frac{(2y+1) v \pi}{2M}\right)

where the original pixel block is N pixels wide by M pixels high. This is not exactly correct. In practice we need a 'fiddle factor' for all the coefficients across the top row or down the left column, i.e. when either u=0 or v=0. This factor is 1/√2. Thus the two dimensional DCT as used in video is:

F(u,v) = C_u C_v \sqrt{\frac{2}{N}} \sqrt{\frac{2}{M}} \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} f(x,y) \cos\!\left(\frac{(2x+1) u \pi}{2N}\right) \cos\!\left(\frac{(2y+1) v \pi}{2M}\right)

where:

C_u = \frac{1}{\sqrt{2}} \text{ for } u = 0, \quad C_u = 1 \text{ for } u = 1 \text{ to } N-1

C_v = \frac{1}{\sqrt{2}} \text{ for } v = 0, \quad C_v = 1 \text{ for } v = 1 \text{ to } M-1

This expression looks pretty hideous, but you can see the various elements in it, and where they come from. It is also worth bearing in mind that it may look a lot simpler if you know more about the size of the group of pixels you are looking at. For instance, if the group is always square, we could say that the group is N pixels by N pixels and eliminate the M and the annoying square roots at the beginning. Thus:

F(u,v) = C_u C_v \frac{2}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \cos\!\left(\frac{(2x+1) u \pi}{2N}\right) \cos\!\left(\frac{(2y+1) v \pi}{2N}\right)

If we also say that the pixel block is going to be 8 pixels by 8 pixels, which it is for JPEG and all MPEG compression schemes, the expression becomes even simpler:

F(u,v) = \frac{1}{4} C_u C_v \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y) \cos\!\left(\frac{(2x+1) u \pi}{16}\right) \cos\!\left(\frac{(2y+1) v \pi}{16}\right)
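The final 8 by 8 expression above translates line for line into code. As a check, a flat mid-grey block produces a single DC coefficient (8 times the pixel value, with this scaling) and 63 zero AC coefficients:

```python
import math

def dct_8x8(block):
    """F(u,v) = (1/4) * Cu * Cv * sum over x,y of f(x,y)
       * cos((2x+1)*u*pi/16) * cos((2y+1)*v*pi/16)"""
    def c(i):
        return 1 / math.sqrt(2) if i == 0 else 1.0
    coeffs = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            coeffs[v][u] = 0.25 * c(u) * c(v) * s
    return coeffs

# A flat mid-grey block: all the energy lands in the DC coefficient.
flat = [[128.0] * 8 for _ in range(8)]
coeffs = dct_8x8(flat)
assert abs(coeffs[0][0] - 1024.0) < 1e-6       # DC = 8 * 128
assert all(abs(coeffs[v][u]) < 1e-6
           for u in range(8) for v in range(8) if (u, v) != (0, 0))
```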
DCT in audio

Right at the beginning we looked at how audio signals can be broken down into a fundamental and a group of harmonics. DCT is very appropriate for breaking down digital audio signals. Audio is a one dimensional stream of data, and there is no need to use the kind of two dimensional DCT we use for video signals. MPEG uses DCT in compressing audio signals. The modern MP-3 players are based on MPEG audio, which in turn uses DCT.

Basis pictures

Machines often do not go through the tedium of calculating DCT values from scratch. They use pre-calculated values in a kind of look-up table. These look-up tables are called basis pictures. There are as many basis pictures as there are samples, and each basis picture contains as many numbers as there are samples. Thus for an 8 by 8 group of video pixels there are 4096 basis picture numbers. Machines use simple matrix multiplication to perform DCT using basis pictures.

Why bother?

DCT is used to rearrange the pixels in a video picture into frequency coefficients because most video pictures have their energy in the low frequency and DC coefficients.
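The matrix multiplication approach can be sketched as follows, assuming NumPy. The rows of the pre-computed matrix C hold the cosine values; the 64 outer products of these rows are the basis pictures, 64 pictures of 64 numbers each, giving the 4096 values mentioned above. Because C is orthogonal, the inverse transform is simply the transpose, confirming that DCT itself loses nothing:

```python
import numpy as np

# Pre-computed cosine matrix for an 8 point DCT.
N = 8
C = np.array([[(np.sqrt(1 / N) if u == 0 else np.sqrt(2 / N))
               * np.cos((2 * x + 1) * u * np.pi / (2 * N))
               for x in range(N)]
              for u in range(N)])

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(N, N)).astype(float)

coeffs = C @ block @ C.T        # forward DCT by matrix multiplication
restored = C.T @ coeffs @ C     # inverse DCT

assert np.allclose(C @ C.T, np.eye(N))   # C is orthogonal...
assert np.allclose(restored, block)      # ...so nothing is lost
```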
Therefore if you do a DCT of a complete video frame you will invariably find all the big numbers in the top left corner of each block. The bottom right corner tends to be full of zeros. Looking at it another way, DCT allows us to separate the video frame's entropy and redundancy, with the entropy gathered towards the top left corner of each DCT block and the redundancy towards the bottom right corner. This helps the variable length coding part of a compression system to reduce the number of bits required.

Huffman codes are a particular type of variable length code, and are particularly efficient at reducing the number of bits required to describe data. While the Huffman coding principle is almost certainly not the best variable length coding system for every eventuality, and there may be new coding systems in the future that are more efficient, it remains the most common variable length coding system for data compression.

Huffman's three step process

Huffman coding is performed in three steps. The first is to analyse the probability of the original data. The second is to use this analysis to generate a series of Huffman codes. The third is to use these codes to reduce the original data.

Step 1 - Analysis

In the analysis step, a probability is assigned to each possible number in the original data, depending on the chance of it occurring. Let us assume we want to reduce the text on the following page. There are 7462 characters including spaces, and the quantity of letters, numbers and symbols is as follows:

Letter  Quantity    Letter  Quantity    Letter  Quantity
A       557         Q       5           7       1
B       114         R       377         8       0
C       167         S       367         9       1
D       266         T       520         0       4
E       653         U       144         (       11
F       150         V       54          )       11
G       142         W       125         .       43
H       287         X       3           ,       69
I       382         Y       124         '       44
J       3           Z       3           !       1
    Broadcast Fundamentals K 67 1 0 ? 2 L 275 2 2 @ 1 M 133 3 0 - 29 N 422 4 0 Space 1336 O 473 5 1 & 0 P 94 6 0 % 0 245 Sony Broadcast & Professional Europe
For many who live in Argyll, occasional visits to Glasgow are a part of life, especially if you live in the nearer mainland parts of Argyll as I do. Unless you've had to head there in its rush hour, the city is less than an hour and a half from Strachur by the 'Rest and be Thankful' pass and Loch Lomond-side. (In passing, why Rest and be Thankful? The original military road is steepest of all near its very top and my guess is that your horse did the resting and you did the being thankful - that the poor animal hadn't collapsed with the effort and let your cart, carriage or whatever run back over the edge - but I'd be interested in other versions, especially the correct one if different from this. Doubly welcome if with sources). But back to Glasgow. Why do we go? is a question I've often asked myself when traipsing around wet streets and sticky shops. I suppose there are almost as many answers as visits, but apart, obviously, from the much greater number and variety of shops than Argyll (population 90 000 or so including all its towns) can offer, there are also all sorts of entertainments and cultural and commercial reasons for making the effort. Despite too much continuing poverty and some areas of awful housing, it's also a very attractive place that's had a long, hard time getting past the 'no mean city' image in the minds of outsiders, amongst whom I have to count myself. To-day we were mainly going to take Ewan out for a meal in the evening, but shops and a film were to be added in. At least as good as the shops and films, as far as I was concerned, would be the chance to explore another corner of the city on foot and, since we had Tess the dog with us, I had a good excuse. Despite forecast warnings of blizzards on northern hills, they certainly didn't apply here and now.
I've sometimes heard Glasgow referred to, tongue-in-cheek, as the 'dear green place' and I'd be at least as keen to know the derivation of this one as of the 'Rest and be Thankful'. I do know that you can find yourself in several places within the city boundaries where that title just doesn't sound ridiculous at all and I was trying to find myself a new one of these to-day. I was heading for the north-west of the city, a little beyond Anniesland and between the road to Bearsden and Maryhill Road, home of Partick Thistle FC. In this little corner, with the aid of a partially-remembered scrutiny of the A to Z, I reckoned to put together a fairly peaceful 2 mile triangle from the end of 'Switchback Road': one side being the towpath of the Forth and Clyde Canal, a second the banks of the River Kelvin (a tiny part of the 'Kelvin Way') and the third a crossing of Dawsholm Park and so it turned out (the fact that the canal had been recently drained on this stretch was a bit of a surprise - but it's too early in the year for the mud to be smelly and, as it's being restored (hooray!), it'll be full again in time). I started near Lock 27, where the new-looking canalside pub of the same name had towpath tables and outdoor drinkers to go with them - not bad, pre-Easter. An AA roadsign nearby indicated a microbrewery, which might repay following up (memo to non- Brits, AA can be 'Automobile Association', as well as Alcoholics Anonymous). Having hardly started, I put aside thoughts of a pint of real ale and headed east for a couple of enormous gasometers that weren't yet industrial archaeology but, like the mud, were smell free. Just as well. A little further on, where a road crossed the canal, I came across the first sign of canal restoration work in the form of a brand new bridge bearing the carved legend 'Forth and Clyde Canal' and a relief of what I think must be the giant new wheel arrangement near Falkirk. 
When completed, this will lift boats from the Forth and Clyde to the Union Canal so that they can sail on to Edinburgh for the first time in decades. A man in the contractor's hut nearby reckoned that the work in this drained Glasgow section would be ready in a handful of months, but that the restoration of the whole canal would take a couple more years. Time enough yet to book your canalboat holiday across central Scotland. Continuing on, with only a few people and a pair of mute swans for company (having passed the temporary earth dam holding back the water from the canal's dry section) and a mile, now, from my starting point, I came all of a sudden on one of the canal's biggest engineering achievements - the aqueduct spanning the steep-sided den of the River Kelvin Beyond the aqueduct, a flight of locks rose to cross Maryhill Road. Between the locks seven pairs of brand-new, heavy wooden lock gates lay stacked around a small basin waiting to be installed. The flight was further than I wanted to go, but I paused briefly on the aqueduct to admire a pair of cormorants sitting somewhat grandly on top of an abandoned sandstone pier that once carried a railway across the Kelvin. Bereft of its railway track and all connection with either bank, the pier looked for all the world like a sea-stack lost in the middle of this most urban of rivers and the cormorants, therefore, seemed oddly 'at home'. 'Most urban' isn't very fair. Descending to the far bank of the Kelvin took me down through some fine broadleaved woods to a quiet riverside and turning to go upstream soon revealed as fine a patch of primroses as I've seen this spring. I passed no-one at all by the river (5 ish on a Saturday afternoon), though beyond a road bridge there was soon more nearby housing. 
In fact it was all green and pleasant to the next railway bridge (West Highland line) and green beyond, past low sandstone cliffs, to a low-level road bridge where it's possible to cross back to the right bank again and search west for a way into Dawsholm Park. Crossing this bridge brought great news in the form of a man fishing. Added to the cormorants' presumed need for food it was becoming clear that the Kelvin may be urban, but certainly isn't dead. The fisherman said it was greatly improved, claiming trout and sea-trout for it, and even suggested Partick Mill (downstream) as a place to go between September and November to watch the salmon passing over the weir. I think I shall. I heard recently that, down in England, the River Tame, which rises in the ominously-named 'Black Country' before flowing through Birmingham and which I remember, from more than thirty years ago, as possibly the dirtiest and deadest in the whole of industrial Britain, now also sees anglers on its banks. Without wanting to be Pollyanna-ish, not all environmental news is black. Leaving the angler to his rod, and wondering idly about the less-contemplative youth suggested by the scars on each of his cheeks, I headed through some scrub by guesswork, crossed a group of all-weather football pitches and found the gates of Dawsholm Park, again by guesswork. By more guesswork I climbed a pine-wooded ridge to be all of a sudden re-oriented by the middle-distant gasworks and then by a grand view along the length of the industrial Clyde through Glasgow and out to Clydebank with its shipyard cranes. Sure of my position again, a balcony of a path took me along a drumlin-crest above a pasture containing four very hairy Highland Cattle (Glasgow is full of surprises) to a place where I could slip down easily to my car. Too late to seek out the microbrewery, but another day. It really is a dear green place if you look. John Fisher aboutargyll@compuserve.com
Now the letters are rearranged in order of the number of times they occur in the text. The actual quantity is expressed as a probability with regard to the total number of characters, 7462:

Letter  Prob.       Letter  Prob.       Letter  Prob.
1       0           Z       0.000402    G       0.01903
3       0           0       0.000536    U       0.01930
4       0           Q       0.000670    F       0.02010
6       0           (       0.001474    C       0.02238
8       0           )       0.001474    D       0.03565
&       0           -       0.003886    L       0.03685
%       0           .       0.005762    H       0.03846
5       0.000134    '       0.005896    S       0.04918
7       0.000134    V       0.007237    R       0.05052
9       0.000134    K       0.008979    I       0.05119
!       0.000134    ,       0.009247    N       0.05655
@       0.000134    P       0.012597    O       0.06339
2       0.000268    B       0.015277    T       0.06969
?       0.000268    Y       0.016617    A       0.07464
J       0.000402    W       0.016752    E       0.08751
X       0.000402    M       0.017824    Space   0.17904

Now the two lowest probabilities are summed and the corresponding letters are lumped together. A new table is made in the same way, with the two lowest-probability entries replaced by a single combined entry carrying their summed probability. This process is repeated until there are just two entries left.
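The repeated merging of the two lowest probabilities is naturally expressed with a priority queue. The sketch below uses Python's heapq module and a handful of illustrative probabilities (not the full table above); it builds the codes by prefixing a "0" or a "1" each time two entries are lumped together:

```python
import heapq
from itertools import count

def huffman_codes(probabilities):
    """Repeatedly lump together the two lowest-probability entries."""
    tick = count()                      # tie-breaker so heap tuples compare
    heap = [(p, next(tick), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, group1 = heapq.heappop(heap)   # lowest probability
        p2, _, group2 = heapq.heappop(heap)   # second lowest
        merged = {s: "0" + c for s, c in group1.items()}
        merged.update({s: "1" + c for s, c in group2.items()})
        heapq.heappush(heap, (p1 + p2, next(tick), merged))
    return heap[0][2]

# Illustrative probabilities only - not the full table from the text.
codes = huffman_codes({"E": 0.0875, "A": 0.0746, "T": 0.0697, "Q": 0.0007})

# A more probable symbol never gets a longer code than a less probable one.
assert len(codes["E"]) <= len(codes["A"]) <= len(codes["Q"])
```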
Variable length codes are used predominantly in digital compression schemes, and are the most effective method of reducing the amount of data. Entropy codes is another term used for variable length codes. Huffman codes are a particularly efficient type of variable length, or entropy, code.

The principle behind variable length codes

The idea behind variable length codes is to know which numbers are more likely to occur than others in your digital data, and to replace these with special codes that are smaller than the original numbers. Numbers that are likely to occur a little less often are changed for slightly larger codes. Numbers that are very unlikely to occur are replaced with codes larger than the original number. The hope is that likely numbers will occur in the data far more often than unlikely numbers, so there will be a lot more small codes than big codes.

The results of discrete cosine transforms

DCT (discrete cosine transform) replaces the original video pixel data with numbers corresponding to how the data is changing. These numbers are called coefficients. DCT operates on a matrix of pixels. MPEG uses a matrix of 8 by 8 pixels. Other matrix sizes are used, but 8 by 8 pixel matrices are the most popular. Conventionally the DC coefficient is placed in the top left corner of each matrix, replacing the pixel data that originally occupied this position. This DC coefficient has a flat statistical probability curve. This means that it is just as likely to contain any number.

AC coefficient bell curves

The rest of the coefficients are AC coefficients. The AC coefficients all have bell shaped statistical probability curves. That is to say, there is a greater chance that the number is somewhere in the middle of the range. The high frequency coefficients have a sharper bell curve than the low frequency coefficients.
That is to say, there is a stronger chance that a high frequency coefficient will contain a number close to the middle than a low frequency AC coefficient.

Using bell curves for variable length coding

It is very useful that all the DCT AC coefficients have a bell curve probability. It is possible to design variable length codes that take advantage of this bell curve by having small codes for numbers at the peak of the curve and large codes for numbers at the outer edges of the curve.
Numbering systems for the bell curves

The original video samples are all 10 bit values and have a range from 0 to 1023. The DCT AC coefficients use a signed numbering system and therefore have a range from -512 to +511. The peak of the bell curve is therefore centred about zero. In terms of the original video, the DCT AC coefficients are centred around mid grey.

Using a simple variable length coding system

Imagine a simple variable length coding system based on the bell curves for the DCT AC coefficients:

DCT AC coefficient    Variable length code
-9                    11111111101
-8                    1111111101
-7                    111111101
-6                    11111101
-5                    1111101
-4                    111101
-3                    11101
-2                    1101
-1                    101
0                     0
+1                    100
+2                    1100
+3                    11100
+4                    111100
+5                    1111100
+6                    11111100

As you can see, the code for a zero is just one bit, a "0". It is also easy to see the pattern for negative and positive numbers. The number itself indicates the number of "1"s, with negative numbers ending in "01" and positive numbers ending in "00". Let us consider a stream of 14 DCT AC coefficients:

+1 -2 0 +4 -3 0 0 +6 -1 -3 +4 0 -1 +2

Converted into their simple variable length codes this gives us:

100110101111001110100111111001011110111110001011100

The original DCT AC coefficients were all 10 bit samples, so this stream represents 140 bits. The variable length codes for the same stream total just 51 bits.
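The coding scheme above can be generated by a one-line rule: |n| ones followed by "00" for a positive coefficient or "01" for a negative one, with zero coded as a single "0". A sketch that reproduces the 51 bit stream:

```python
def encode(coeff):
    """|n| ones, then "00" if positive or "01" if negative; zero is "0"."""
    if coeff == 0:
        return "0"
    return "1" * abs(coeff) + ("00" if coeff > 0 else "01")

stream = [+1, -2, 0, +4, -3, 0, 0, +6, -1, -3, +4, 0, -1, +2]
bits = "".join(encode(c) for c in stream)

assert bits == "100110101111001110100111111001011110111110001011100"
assert len(bits) == 51    # against 140 bits for fourteen 10 bit samples
```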
It is important to realise that the variable length codes carry exactly the same data as the original coefficients; they are just described in a more efficient way.

Decoding variable length codes

If it is a little difficult to believe that variable length codes can be such an efficient bit saver, decoding the original data from the apparently random stream of variable length codes will seem a miracle. However decoding variable length codes, while fiendishly clever, is actually very simple. Starting from the beginning you look to see if there is a code corresponding to the first bit. If there is not, look to see if there is a code for the first and second bits together. If not, try the first, second and third bits together. In this case the first bit is a "1", which is not a code. The first and second bits "10" are also not a code. However the first, second and third bits give a good code, "100". The original coefficient for this code is "+1". Now discard these bits and start again, looking for the first time you see a good code. The next good code is "1101". Anything shorter than this is not a good code. "1101" is the code for "–2". Cut this off and start again. The next bit is a "0". This is a good code in itself and gives a "0" as the coefficient data. This same method can be used all the way through the variable length code stream, replacing the good codes as you find them with the original coefficients.

Disadvantages of variable length codes

Variable length codes can suffer from errors in the same way as any other signal, analogue or digital. Variable length codes can also suffer when they are read from the middle, rather than from the beginning. However, where variable length codes really fail is when the original data does not follow the assumed statistical pattern.
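The decoding procedure just described can also be sketched in Python. Because no code is a prefix of another, the decoder only needs to count the run of "1"s and look at the bit after the first "0"; this is a hedged illustration of the method, not any standard decoder:

```python
def decode(stream):
    """Decode a stream of the simple variable length codes back to coefficients."""
    coeffs, i = [], 0
    while i < len(stream):
        if stream[i] == "0":          # a lone "0" is the coefficient zero
            coeffs.append(0)
            i += 1
            continue
        run = 0
        while stream[i] == "1":       # count the run of ones: the magnitude
            run += 1
            i += 1
        i += 1                        # skip the "0" that ends the run
        # the next bit is the sign marker: "00" => positive, "01" => negative
        coeffs.append(-run if stream[i] == "1" else run)
        i += 1
    return coeffs

print(decode("100110101111001110100111111001011110111110001011100"))
# [1, -2, 0, 4, -3, 0, 0, 6, -1, -3, 4, 0, -1, 2]
```

Feeding in the 51-bit stream from the earlier example recovers the original 14 coefficients.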
Errors in variable length codes

Let us imagine that there is an error in the variable length codes thus :-

100110101110001110100111111001011110111110001011100

The twelfth bit (the eighth "1") has been received as a "0". Decoding this data starts OK but goes wrong when we reach the mistake thus :-

+1 –2 0 +3 0 –3 0 0 +6 –1 –3 +4 0 –1 +2

Instead of "+4" we now have "+3" "0". A mistake in the variable length codes can ripple down the data, giving rise to a few incorrect coefficients.
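The ripple effect of this single bit error can be reproduced with the prefix-decoding procedure described above; the decoder is repeated here so the sketch is self-contained:

```python
def decode(stream):
    """Prefix decoder for the simple variable length code."""
    coeffs, i = [], 0
    while i < len(stream):
        if stream[i] == "0":
            coeffs.append(0)
            i += 1
            continue
        run = 0
        while stream[i] == "1":
            run += 1
            i += 1
        i += 1                                   # skip the "0" ending the run
        coeffs.append(-run if stream[i] == "1" else run)
        i += 1
    return coeffs

good = "100110101111001110100111111001011110111110001011100"
bad = good[:11] + "0" + good[12:]   # the twelfth bit (eighth "1") flips to "0"

print(decode(bad))
# [1, -2, 0, 3, 0, -3, 0, 0, 6, -1, -3, 4, 0, -1, 2]
```

The "+4" has become "+3" "0", but the decoder resynchronises and the rest of the stream decodes correctly.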
In practice the variable length codes are remarkably resilient; errors do not 'travel' too far down the variable length stream and only a few coefficients are affected. Error correction schemes can be used to correct these minor mistakes and restore the original coefficients.

Decoding variable length codes from the middle

The second disadvantage of variable length codes is that you will get an error if the variable length code stream is decoded from the middle. To illustrate this, assume that we start reading the original variable length stream we made starting at the tenth bit. Discard the first nine bits 100110101 and start reading here…

111001110100111111001011110111110001011100

The result of this is :-

+3 –3 0 0 +6 –1 –3 +4 0 –1 +2

So, even though the first coefficient is in error, the rest of the stream has been decoded correctly.

Bad statistical patterns

Variable length codes can stand a few numbers that do not follow the statistical assumptions. Assume that there are a few large numbers, both negative and positive, in the stream mentioned above, thus :-

+1 –2 0 –50 –3 0 0 +6 –10 –3 +4 0 –1 +2

One of the coefficients has been replaced by –50 and another by –10. The new stream is now thus :-

100110101111111111111111111111111111111111111111111111111
1011110100111111001111111111011110111110001011100

While this appears to be a massive increase in the number of bits we had before, it is still only 106 bits, and still a saving on the 140 bits we had originally. However, what if the original stream were replaced by something like this :-

+23 –12 +56 +1 +8 –3 –112 –31 +90 –121 +53 +27 +78 –94

This is still 14 coefficients. Each one is 10 bits, giving 140 bits altogether.
However the variable length stream corresponding to these coefficients is :-

111111111111111111111110011111111111101111111111111111111
111111111111111111111111111111111111110010011111111001110
111111111111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111110
111111111111111111111111111111110111111111111111111111111
111111111111111111111111111111111111111111111111111111111
111111111100111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111111
111111111111111111101111111111111111111111111111111111111
111111111111111110011111111111111111111111111100111111111
111111111111111111111111111111111111111111111111111111111
111111111111001111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111100

This is 737 bits! A huge increase on the original 140 bits. Thus it is very important that the variable length codes are matched as closely as possible to the statistical pattern of the original data. This is also the reason why variable length codes cannot be used on the DCT DC coefficient: there is no guarantee that it will be close to zero at all.
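The bit counts quoted above follow directly from the structure of the code: a zero costs 1 bit, and any other coefficient n costs |n| + 2 bits. A quick check in illustrative Python, mirroring the three worked examples:

```python
def code_length(coeff):
    """Bits used by the simple variable length code for one coefficient."""
    return 1 if coeff == 0 else abs(coeff) + 2

well_behaved = [1, -2, 0, 4, -3, 0, 0, 6, -1, -3, 4, 0, -1, 2]
with_outliers = [1, -2, 0, -50, -3, 0, 0, 6, -10, -3, 4, 0, -1, 2]
pathological = [23, -12, 56, 1, 8, -3, -112, -31, 90, -121, 53, 27, 78, -94]

for coeffs in (well_behaved, with_outliers, pathological):
    print(sum(code_length(c) for c in coeffs))
# 51, then 106, then 737 - against a fixed 140 bits for 14 ten-bit samples
```

Only the first two streams beat the 140 bits of the original fixed length samples; the pathological stream costs more than five times as much.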
Part 22 – The television station

The studio

The studio is the area where programmes are made. It may be small, not much more than a small room, or a large enclosure big enough to fit a small group of houses. Studios have sets and lighting. A set is a construction that provides a background, or foreground, to the action taking place in the studio. Lighting can be hung from the ceiling or placed on the floor. It not only provides light to the set, but also adds mood and colour. Both set design and lighting design require experience and skill to perfect.

Studios also contain video cameras. There are often fewer cameras in news studios than in those for drama, pop music shows, etc. However, cameras in news studios can be very complex computer controlled cameras, able to move between a few very specific preset positions.

Studios can be used for news programmes, entertainment programmes like pop music shows, chat shows and games shows, and for drama. Drama, which includes everything from soaps to high profile period drama productions, probably involves the most complex set and lighting design in any studio. Indeed a soap will involve continuous use of a very complex set design that operates like a well oiled machine.

The gallery

Studios often have an associated gallery. A gallery is a control room generally placed next to the studio. It is also often set at a higher level than the studio, so that gallery staff can look down on the studio. Studios with galleries are generally used for live, or near live, programme making. The gallery crew can direct the programme making process to either create a programme that goes directly out to air, or a programme recording that requires very little subsequent editing.

The post production studio

The post production studio is not really a studio at all. In fact it is more like a studio gallery. Post production studios do not have sets or lighting.
They are designed specifically to take the recorded material from a studio and edit it into a final programme (or series of programmes). Post production studios are not as hectic and busy as galleries. Post production work is not done to the clock, unlike the time critical work done in the live studio and gallery. Post production may take many days or weeks for a single programme. Although the equipment used in post production is similar to that used in the gallery, there are a few obvious differences. For instance, special effects machines in the gallery tend to be expensive pieces of hardware. They can produce a few effects at high speed. In post production the
special effects machine will produce some very complex effects, and not necessarily in real time.

The edit suite

The edit suite is really another name for the post production studio, although there is probably the expectation that the post production studio is more complex than the simple edit suite.

The linear edit suite

The linear edit suite generally uses tape. All edits involve playing back one or more tapes and recording to another tape. It is called linear because edits have to be performed in a linear fashion, as they are recorded to tape.

The non-linear edit suite

Non-linear edit suites are a recent invention compared to the linear edit suite. Non-linear edit suites used to have a reputation for lower quality results than the linear edit suite. They generally use computer technology to allow 'drag & drop' timeline style editing. Non-linear edit suites offer a very much more flexible way of working than linear edit suites. Non-linear editing has slowly increased in quality as the power of computers has increased. It is now possible to perform non-linear edits with the same high quality as conventional linear edits.

The news studio

The news studio is not really a single room, although there will be a studio in the basic sense. A news studio will contain maybe three rooms. There will be a studio proper, where the news programme will be made. There will be an associated gallery as well. The third room will be the news room itself. An integral part of the studio complex, the news room is used to bring all the news items together, with their associated scripts, video or film footage, and audio material. The news room is a little like the opposite of a post production suite: the work done in it comes before the programme. There is some editing equipment, although it is not as complex as that found in post production. News editing is always simpler, and often done to very short and rigid deadlines.
The outside broadcast vehicle

The outside broadcast vehicle, often called the outside broadcast truck, or OB truck, is a small self contained transportable production facility. It contains all the equipment to shoot, record, control and edit a programme and, when finished, transmit the result back to the television station.
OB trucks are used for recording sports events, national celebration events, important news events, etc., or any situation requiring production facilities where there are none normally. OB trucks are sometimes huge, with separate rooms for video editing and production, audio and camera control. They can just as easily be small, with one room inside for everything. OB trucks are often measured by the number of cameras they have. A 1 camera OB truck is small, an 8 camera OB truck is large.
Part 23 – CCTV, security & surveillance

What is CCTV?

CCTV stands for closed circuit television. It encompasses any television system that is not connected to any kind of transmitter, and is generally a single connection between the source and destination. There would normally be just one or two television screens at the destination. CCTV also includes the use of microwave or infra-red links, video over IP, and other such technologies. With these technologies the signal is directed to a specific destination, rather than being broadcast to anyone. As such, it is still closed and can therefore be regarded as CCTV.

CCTV does not include terrestrial broadcast television, satellite or cable network television. It does not even include subscriber television. Such services are only available to the customers who buy the subscription and who have the relevant technology to receive the signal. Although they could be regarded as 'closed', they do not fall under the description 'CCTV' because the number of receivers is relatively high.

CCTV privacy & evidence

CCTV has gained a reputation in many people's minds as the instrument of "big brother", and a method for the authorities to spy on innocent individuals. Sensible use of CCTV should never be an infringement of privacy. In many cases CCTV is used in situations where personal privacy is not an issue. Instrument monitoring, remote monitoring in hazardous conditions, search and rescue, and many other applications of CCTV do not involve personal privacy at all.

Data Protection Act 1998

Laws around the world are designed to protect people from incorrect use of CCTV technology. In Great Britain the Data Protection Act 1998 includes 62 legally enforceable points to ensure correct use of CCTV technology, and 30 suggested good practice points to improve public perception of the technology. Details of the Data Protection Act 1998 can be found at http://www.dataprotection.gov.uk .
Continuity of evidence

CCTV equipment can be invaluable for collecting evidence as part of legal proceedings. Recordings and images can all help build up a convincing case. However, evidence is useless if it has been tampered with. It is important that any images, video or sound material are not placed in a position where they can easily be altered, or deleted, between the point where they were recorded and where they are presented in court. There may also be a necessity to guarantee that material is not tampered with after it is presented in court, right up to the time it is destroyed.
The process of ensuring that CCTV material is not altered between recording, through its court appearance and its eventual destruction, is called continuity of evidence. It is impossible to absolutely guarantee continuity of evidence. All that can be done is to reduce the likelihood of tampering to such a level that it is improbable. Two methodologies can be used to ensure continuity of evidence: trusted personnel and technology. Using both methods can provide very convincing continuity of evidence.

Trusted personnel

Recorded evidence can be placed in the hands of trusted personnel. These people are trusted to prevent the material from being tampered with, either because it is their job or because their reputation depends on it. Trusted personnel include security guards, bonded store keepers etc., as well as notaries, judges, doctors, police etc. All trusted personnel may be corrupted, but it is unlikely. It is this unlikelihood that provides continuity of evidence.

Technology

Recorded CCTV material can be protected at every point from the camera, through the court room and eventually to destruction, by using technology. Simple technology includes lock & key, safes, security doors etc. These are the kind of technologies that security guards and bonded store keepers would use to back up their trusted personnel status. CCTV material can also employ signal scrambling techniques and electronic watermarking to make tampering difficult. Making multiple copies of recordings at separate remote sites can improve security, and help prove tampering.

CCTV use

CCTV's primary use is in security & surveillance, and it has gained a 'big brother' reputation, with all the attendant concerns about civil liberties and privacy. However CCTV is also important in increasing levels of safety in public areas, and offers better levels of monitoring and control for inspection, machine operation, medical applications, and work in remote or hazardous environments.
Examples of CCTV usage

Town centre surveillance

Local councils and police are using CCTV increasingly to monitor city and town centres. Cameras are mounted on buildings or posts at strategic points. Many are fitted inside motorised environmental
housings. The signals from these cameras are fed back to a central control office, where staff can monitor activity in shopping areas, public amenities, car parks, etc. Although there are concerns about public liberties, these systems have been very successful in reducing robberies, vandalism and mugging in town centres.

On-the-spot views for sports events

CCTV, and especially the use of miniature camera technology, is being used increasingly to allow people to experience the thrill of sports events by mounting cameras on racing cars, motorcycles and jockeys. Signals can be fed back to the production studio where they can be fed into the broadcast chain. As well as allowing people at home to see what the racing driver or rider is seeing while racing down the track, it also provides valuable information to pit crews and officials.

Train crew assistance systems

Train platforms can be very long and often curved. It is often difficult for drivers and guards to see the whole length of a platform as the train is pulling out. CCTV is often used as a means of checking that doors are closed and passengers are clear of the edge of the platform before starting the train. Cameras are mounted to the wall or ceiling at strategic points along the length of the platform, and their signals are fed to two or three small monitors mounted just outside the train window, so that the driver or guard can easily see them without having to turn or stretch.

Biometric identification

At the cutting edge of CCTV is personal identification using biometrics. Whilst not generally regarded as CCTV by many people, biometrics uses the same basic systems as all other CCTV systems. Entry systems to high security areas can involve specialist cameras linked to digitizers and computers. These can be used to perform facial scans, fingerprint scans or retinal scans to help identify people.
Search and rescue

CCTV is used extensively when searching for survivors in fires, collapsed buildings, caves and pot-holes. Miniature cameras can be pushed into places humans cannot get into. Small cameras can be fitted to robot crawlers and sent into environments that are too dangerous for humans. Another important area of search and rescue takes advantage of the fact that cameras can see part of the electromagnetic spectrum humans cannot see, i.e. infra-red. Helicopters can search for people or animals in open country, or at sea, from their heat signature. Special cameras that are sensitive to heat can make people or animals shine out like beacons, even at night.
Medical procedure monitoring

CCTV is now being used to monitor the progress of medical operations and procedures. From endoscopes to remote controlled microsurgical equipment, CCTV equipment is becoming increasingly important. By linking CCTV equipment to shape recognition and 3D modelling software, systems can be built to assist medical teams in diagnosis. Another important area of development is remote consultation. By using CCTV with video over IP, or streaming technology, it is possible for a top surgical consultant to assist in difficult surgical procedures anywhere in the world.

Microchip production and inspection

Many modern production processes operate at dimensions far too small for humans to see with the naked eye. As dimensions become smaller and smaller, it also becomes impossible to see things using normal light. The wavelength of light itself becomes a problem and other forms of radiation must be used. Cameras sensitive to these wavelengths are used to allow people to see these tiny dimensions. Probably the most important application is microchip production, where CCTV is used extensively by coupling it to microscope technology using ultra-violet and X-ray radiation.

CCTV terminology

CCTV technology has parallels with professional and broadcast video. However, some of the terms and jargon used in the CCTV industry are entirely different from those used in broadcast video.

Activity detection

The ability of a system to react to movement. A processing unit will compare video frames to check for differences, i.e. movement. This can be sent out as a signal to the system's controller to switch to that camera. Multiplexers can be adjusted to devote more, or all, of their time to the camera that has sensed movement. It is also possible to send zoom, pan and tilt control signals to the camera to make it zoom in closer on the detected movement.

Alphanumeric video generator (AVG)

The CCTV equivalent of a character generator.
Its quality is generally not as good as a broadcast character generator's, but the requirements for character insertion in CCTV are not as stringent, and cost saving is an issue.

CCD iris

A term used in CCTV to describe auto iris.
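The frame-comparison idea behind the activity detection entry above can be sketched very simply; the thresholds and the tiny "frames" below are purely illustrative:

```python
def motion_detected(prev_frame, curr_frame, pixel_threshold=10, pixel_count=3):
    """Flag movement when enough pixels change by more than pixel_threshold."""
    changed = sum(1 for p, c in zip(prev_frame, curr_frame)
                  if abs(p - c) > pixel_threshold)
    return changed >= pixel_count

quiet = [100] * 16               # a flattened 4x4 grey frame: nothing happening
person = [100] * 12 + [30] * 4   # a dark shape enters the bottom row

print(motion_detected(quiet, quiet))    # False - no change between frames
print(motion_detected(quiet, person))   # True - four pixels changed sharply
```

A real processing unit works on full video frames, but the principle is the same: difference the frames, threshold the result, and raise a signal when enough pixels have changed.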
C mount

Cine mount. The original lens mounting system for CCTV cameras. 1" (25.4mm) diameter with 32 threads/inch. Back flange to CCD distance standardised at 0.69" (17.526mm).

Conditional refresh

A system used to save transmission bandwidth by transmitting video frames only when a change is detected.

CS mount

Cine short mount. A more contemporary alternative to the C mount standard offering cheaper, smaller lenses. CS mount has exactly the same dimensions as C mount, but the back flange to CCD distance is reduced to 12.5mm.

Back light compensation (BLC)

A camera feature that automatically compensates for strong background lighting to improve detail in darker areas of the image that would otherwise appear as just a black shape.

Dense waveband division multiplexing (DWDM)

A technology that places a large number of video channels onto a fibre optic cable.

Dome camera

Any CCTV camera installed in a dome. Modern dome cameras are pre-built as a complete assembly, often with pan, tilt and zoom capability, and are sometimes referred to as PTZ cameras. Some dome cameras have a network output rather than a conventional video output, so that the video signal can be sent out on a computer network as a compressed data stream.

Duplex

Used in CCTV to describe a multiplexer that can perform more than one function at a time, like displaying and recording multiple images.

Dwell time

The time a multiplexer stays on one camera in its rotation.

Hi-Z

A common term in the CCTV industry to denote an unterminated analogue video cable connection. The line impedance of video cable is 75 ohms. Many coaxial analogue video inputs have a switch that can be set to either Hi-Z or Lo-Z (75 ohms). Equipment can be daisy chained on the same video connection. All equipment except for the first and last is set to Hi-Z (high impedance –
terminator switched off). The first and last pieces of equipment are set to 75 ohms impedance, either by switching the equipment's terminator switch to 75 ohms or by fitting a BNC terminator to the cable end.

Kangaroo lens

A lens with two fixed iris positions – fully open and partially closed. Designed to be used with cameras that have electronic (sensor) auto-iris capability, as a way of providing two 'gears' for the auto-iris.

Lambert radiator

A primary source of light that is designed to be imperfectly diffused.

Lambert reflector

Like a Lambert radiator, but for a secondary (reflected) light source.

Minimum object distance (MOD)

The closest distance a particular CCTV lens can focus to, measured from the front of the lens to the object.

Multiplexer

A unit that multiplexes a number of video signals. A multiplexer can be designed to show more than one video image on one screen by down converting the incoming video images and placing these smaller images into one outgoing signal. This can be referred to as spatial multiplexing. Each image has poorer resolution than the original but runs in real time. Probably the most popular of these is the quad. A time division multiplexer divides each video frame, or series of frames, between a number of inputs. The output effectively chops between the inputs on a rotational basis. Each video image keeps its full resolution but its motion is not as smooth. The output of a time division multiplexer is not easy to view because the picture flickers from one image to another. Time division multiplexers are generally used as a way of recording multiple video signals to one tape. Timecode is also recorded and is used by the multiplexer during playback to demultiplex the recorded signal.

Pan & tilt head (P/T head)

A motorised camera mount that allows the camera to be panned (moved round) and tilted (moved up and down) remotely.
Pan & tilt heads are often combined with a zoom camera to give a pan tilt zoom assembly, commonly called a PTZ camera. Many dome cameras are also PTZ cameras.
Pre-position lens

A CCTV lens which outputs signals for its zoom and focus positions, so that they can be stored in the control station and thus allow preset positions to be called up by the controller quickly.

PTZ camera

See Pan & tilt head.

Quad

A unit that spatially multiplexes four video signals into one signal, to show all four images on one monitor. The resolution of each image is a quarter of the original, but each runs in real time.

Repeater

A unit that can be placed part way along a very long transmission path to amplify the signal back to full amplitude again. Cable repeaters can be used to re-amplify video and audio signals. Microwave repeaters do the same thing for video and audio signals on microwave links. Analogue repeaters also amplify noise, so there is a limit to the number of analogue repeaters that can be used before the noise level becomes so great that it swamps the original video or audio signal.

Retained image

A term used in CCTV to describe an image that remains on the camera sensor after the object has gone. A retained image is a temporary artifact caused by a delay in the camera sensor, but the term is sometimes also used to describe the image burnt into the sensor when a CCTV camera looks at the same scene all the time.

Vari-focus lens

A manual zoom lens. All other industries regard all lenses with a variable focal length as zoom lenses. The CCTV industry needs to be able to differentiate manual zoom lenses from motorised ones, because many CCTV cameras are operated remotely. The focal length of a vari-focus lens can be altered during installation, but must then be considered fixed from the point of view of the operator in the control room. (See zoom lens.)

Zoom lens

A lens with a remotely controlled motorised variable focal length. The CCTV industry differentiates between manual and motorised zoom lenses. The manual variety are called vari-focus lenses. (See vari-focus lens.)
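The spatial multiplexing performed by a quad can be illustrated with a short sketch: each input frame is down-converted to half resolution in each direction, and the four results are tiled into one output frame. Frames here are just lists of rows of pixel values, and the function names are our own:

```python
def downconvert(frame):
    """Halve the resolution by keeping every other pixel of every other row."""
    return [row[::2] for row in frame[::2]]

def quad(tl, tr, bl, br):
    """Tile four half-resolution images into one full-size output frame."""
    small = [downconvert(f) for f in (tl, tr, bl, br)]
    top = [a + b for a, b in zip(small[0], small[1])]
    bottom = [a + b for a, b in zip(small[2], small[3])]
    return top + bottom

def camera(value):
    return [[value] * 4 for _ in range(4)]   # a flat 4x4 test frame

for row in quad(camera(1), camera(2), camera(3), camera(4)):
    print(row)
# [1, 1, 2, 2]
# [1, 1, 2, 2]
# [3, 3, 4, 4]
# [3, 3, 4, 4]
```

Each camera keeps its real-time motion but occupies only a quarter of the output picture, which is exactly the trade-off the quad entry describes.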
The typical CCTV chain

A typical CCTV chain consists of a number of devices communicating with one another. Video and audio signals go in one direction, from the camera or microphone to the monitor or speaker. Control signals go in the opposite direction, from the control point to the camera or microphone.

The camera

All CCTV systems start with the camera. This is the main input to the whole system. There may be just one camera, or many. Cameras may be monochrome or colour, and can vary in size and quality. Some CCTV cameras are fitted into environmental housings to protect them from weather or hazardous conditions. Some are hidden behind screens or domes to make them more discreet. Many cameras have controls for zoom and iris, as well as motors for pan and tilt.

The microphone

Many CCTV systems are video only systems. Some have associated audio. In many cases microphones are fitted to the cameras themselves. Increasingly, microphones are fitted separately from the camera. This allows them to be strategically sited to pick up the best sound.

The transmission chain

The transmission chain relays video and sound back to the control or processing station. In most CCTV systems the transmission system consists of simple video cables routed directly from each camera to the control station, and audio cables routed directly from the microphones to the control station. The transmission chain may also include switch gear, routers, signal compressors and decompressors, IP packetisers, and microwave or infra-red links.

The control station

The control station consists of a control panel used either to switch between incoming video and audio signals directly, or to send control signals to a remote switcher or router to perform the switching elsewhere.
Video signals are switched between the cameras and recorders and the monitors, and audio signals between the microphones and recorders and the speakers and headphones. The control station can also send signals to cameras to perform zoom, pan and tilt movements, as well as iris control and filters, depending on the camera's capability. Outside cameras may have wipers to clear water and dust from the camera housing window, and heaters inside the housing to heat the camera if the temperature falls below freezing. These can be automatic or controlled manually from the control station.
The processing station

In some CCTV systems the control station is replaced by a processing station. This is popular in automated CCTV systems for applications like automatic recognition systems. The images from the cameras are not viewed by an operator but are processed by computer and either placed in a database or compared to a database.

The speakers & headphones

Audio signals are routed from the control station's panel into either speakers or headphones, so that the operator can hear the sound. In most cases this will be a mono, rather than a stereo, feed.

The video recorder

Many CCTV systems have recording capability. This allows the controller to review past events, and acts as a good source of evidence. Video recorders can be simple VHS machines or more expensive digital machines like those based on the DV tape format. If the CCTV system has audio capability this is recorded on the same tape as the video signal. In many cases the video recorder records one camera feed. In some cases the video recorder is able to record more than one camera feed on the same tape.

The monitor

The monitor is the final destination for the video signals. It can be fed from any of the cameras or from one of the video recorders.
Figure 111: Typical CCTV system. The block diagram shows cameras & microphones (including a motorised camera in a housing and a mini camera) feeding a switcher and routing matrix by cable, by microwave link and over the Internet (via a compressor/packetiser and decompressor/depacketiser), with the control station, video recorders, and monitors & speakers at the receiving end.
CCTV cameras

CCTV cameras are essentially the same as those used for broadcast, although there tends to be less emphasis on quality.

General purpose cameras

Most CCTV cameras are designed for general indoor and outdoor use. They vary in size from about 10x3x3cm to 30x10x10cm. General purpose cameras normally sell without a lens, and one needs to be bought and fitted. Both the older C and newer CS lens mounts are popular. General purpose cameras normally have an analogue composite output. Some have an analogue component output, and a few have digital video outputs. General purpose cameras can be fitted into weather housings. The housing can be fitted with a heater to ensure the camera still operates in cold weather, and a wiper to ensure the housing window is kept clear of rain drops and dust. General purpose cameras can also be fitted to pan and tilt mechanisms. This allows them to be repositioned remotely. The control for these cameras is through separate RS-232, RS-422 or RS-485 connections with proprietary control protocols.

Net cameras and web cameras

This is a relatively new type of camera. It incorporates all the functionality of a standard camera but with a single network connection. This connection can be used for both the video signal from the camera and the control signals to the camera. Software is installed on a computer which allows the camera to be controlled. Network connected cameras are easy to install. Images can be sent to any location on
a company network, or out over the Internet. Control can, likewise, be sent from anywhere on the network, or the Internet, with the appropriate software.

Net cameras use some form of image compression to reduce the amount of data sent over the network link, while keeping image quality as high as possible. JPEG is a common compression format. Some use the more complex MPEG compression to improve the compression/quality ratio. Some net and web cameras also have pan & tilt, as well as zoom, capability built in. Small motors in the camera assembly allow this kind of control, which is achieved by sending control signals over the network link from a controlling computer.

Dome cameras

This type of CCTV camera is becoming more popular. They are not discreet, as most people recognise these domes for what they are. Older dome cameras were really housings for a number of cameras, with each one pointing in a different direction. Modern dome cameras have pan and tilt built in, and many have zoom control as well. These are commonly called PTZ cameras.

Domes are generally made from some kind of high impact plastic, to give some protection from vandalism. They are generally tinted. This hides the actual orientation of the camera inside, but reduces the sensitivity of the camera. Many dome cameras have standard analogue composite video outputs. Pan and tilt control for these cameras is through a proprietary RS-232, RS-422 or RS-485 connection, just like their general purpose camera equivalents. An increasing number of dome cameras have network connections. These cameras offer the same advantages as net and web cameras, but in a protective dome.

Night vision cameras and dual condition cameras

Night vision cameras are designed to operate after dark. Two methods are
used. The first is through intensification, and the second through the use of infra-red.

Intensification cameras cannot work in complete darkness. They use techniques to intensify the sensor signal, allowing them to pick up objects with just the slightest amount of light. Images tend to be lower quality than normal because the noise is intensified as well.

Infra-red cameras have extended sensitivity beyond the normal visible spectrum into the infra-red. They can pick up heat, and build up an image from a combination of what little light there is and the temperature of the objects in the scene. Infra-red night vision CCTV cameras sometimes have infra-red lamps mounted in the same construction as the camera itself. This provides a good picture through the camera, while remaining completely dark to the human eye.

Night vision cameras are all monochrome, because they are looking for basic form and shape with whatever light or heat is available; their colourimetry is wrong.

Dual condition cameras are able to switch between normal daylight colour operation and monochrome low light operation. This can be achieved either by switching the sensitivity to infra-red on and off, or by switching intensification on and off.

Wireless CCTV cameras

This kind of camera has no cable connection other than power. It has an in-built wireless transmitter operating in the GHz range of frequencies, and can transmit short distances to a receiving station (an aerial on the camera transmitting to a receiver with its own aerial, power and video output). Wireless CCTV cameras are easy to install and reposition, and are ideal for temporary installations and installations where cameras may need to be moved on occasion. However, wireless cameras can be easy to tap into: all you need is the same type of receiver tuned to the same frequency.
Pin-hole & bullet cameras

'Pin-hole camera' used to refer to a basic camera consisting of a box with a pin hole in the front. The name has been 'stolen', and now also refers to a class of sub-miniature camera with a very small lens at the front. These types of CCTV camera remain at the fringe of mainstream CCTV, with all the attendant concerns of privacy, spying, and discreet surveillance.

The largest of these is the bullet camera, sometimes called the lipstick camera. The processing electronics is designed into a separate unit, and the camera head is nothing more than the lens and the sensor. This makes it as small as possible, like a bullet or lipstick tube (hence the names). The separate unit is often called the camera control unit (CCU), although, in truth, it contains as much of the electronics as can be removed from the camera head itself. These cameras offer a good compromise between reasonable quality and a discreet camera.

The smallest cameras are all of the single unit, single CCD or CMOS sensor type. The lens, sensor and electronics are all integrated onto the same small circuit board. There is no electronic control, and very little iris or focus control; what there is, is always manual. This type of CCTV camera is popular for fitting into clock faces, wall mounted electrical sockets, light fittings, etc. Image quality tends to suffer because of the restricted space for electronics and the small lens.

True pin-hole CCTV cameras, sometimes called SWAT cameras, have a thin shaft or flexible tube protruding from the front of the camera head, with a very small lens mounted at the front. The shaft itself is a light guide. These cameras provide the smallest identifiable intrusion into a room space and are the most discreet of all CCTV cameras. A standard camera can be fitted with a pin-hole lens. This lens is often long and thin, and comes to a point. The whole camera can be fitted behind a wall with the only visible part being the tiny front element of the lens. Pin-hole cameras tend to be wide angle and the image is often greatly distorted.
Biometric cameras

Biometric cameras are specifically designed to scan particular human physical features, including hand geometry, face, iris and retina. Biometrics can also be used for signature recognition. Biometric cameras can have either conventional composite or component video outputs, or direct computer connections.

Video outputs need to be fed into a plug-in board, with an appropriate video input, in a computer. The board performs the image capture, and the computer then analyses the image.

Computer connections include RS-232 and RS-422, USB and 10Base-T connections. This type of connection is becoming more popular because it is easier to fit. The image capture is performed by the camera into an internal frame store, and the computer output is a digital download of the frame store. Some biometric cameras can now perform some of the image analysis internally, in dedicated hardware, searching the image for relevant information, discarding the rest, and even breaking the relevant data down into a digital code. This greatly reduces the computer's workload, and speeds up transmission, because only the relevant data is sent to the computer rather than the whole image. Once in the computer, the image, or code, is compared with a database of known patterns to recognise the person.

Instrumentation cameras

This is a loose category of CCTV camera. Instrumentation cameras are similar to other CCTV camera types in many respects, and cameras designed for other purposes can be used as instrumentation cameras. Instrumentation cameras are specifically designed to be fitted to precision machines and instruments. They are intended for monitoring inaccessible areas, or for looking at very low light level areas or special lighting conditions, as in microscopes. Many instrumentation cameras use the same design techniques as some pin-hole cameras. Some have small camera heads linked via light guides. Many have separate camera controllers, to reduce weight and size on the instrument itself.

Instrumentation cameras are often able to operate at very low light levels. However, this feature should not be confused with night vision cameras. Night vision cameras often use sensors sensitive to infra-red, and illuminate the scene with an infra-red lamp. Intensification night
vision cameras are very sensitive, but are also designed to be reasonably robust: either they only employ mild amounts of intensification, or the camera will not be damaged by exposing it to bright light. However, instrumentation cameras designed for very low light levels are designed only for low light levels, not for a different kind of light, like infra-red. They are not at all robust, and can often be damaged by exposing them to normal light levels. They require expert setting up, care and maintenance. Intensified CCD (ICCD) cameras use an intensifier before a CCD sensor. This boosts the amount of signal produced by the light before the sensor reads it.

Reading CCTV camera specifications

All CCTV camera manufacturers produce specifications, making the figures and details look as appealing as possible. Therefore, it is worth investigating exactly how some of these specifications are found, and the things one should bear in mind when reading them.

Camera format

CCTV cameras are designed in a variety of formats depending on the size of their sensor. All sensors have a 4:3 aspect ratio, in common with standard domestic television. It is a common misconception that the camera format is the same as the distance from one corner of the CCD sensor to the opposite corner, i.e. that a ½” sensor is ½” across its diagonal. This is not so. Sensor diagonals are about 0.6 times the format size. The reason for this goes back to the days of tube cameras, where the sensitive area of the old 1” tube was only about 0.6 of the overall tube diameter.
Figure 112: CCTV format sensor sizes. The sensor diagonal of the traditional 1" camera tube is approximately 60% of the tube diameter. CCD sensor dimensions (mm): 1" 12.8x9.6 (15.9 diagonal), 2/3" 8.8x6.6 (11 diagonal), 1/2" 6.4x4.8 (8 diagonal), 1/3" 4.8x3.6 (6 diagonal), 1/4" 3.2x2.4 (4 diagonal).

The table below shows sensor dimensions (in mm) for various camera formats and the ratio between the format size and the sensor diagonal.

                    1"      2/3"    1/2"    1/3"    1/4"
Sensor horizontal   12.8    8.8     6.4     4.8     3.2
Sensor vertical     9.6     6.6     4.8     3.6     2.4
Sensor diagonal     15.9    11      8       6       4
Sensor ratio        1:1.6   1:1.53  1:1.59  1:1.41  1:1.59

The camera format has an effect on the kind of lens that can be fitted, and how it will behave. This is covered in more detail in the section on CCTV lenses.

Resolution

Resolution is a measure of the resolving power of the camera. All CCTV cameras, colour or monochrome, are the single sensor type. The sensor pixels in colour CCTV cameras are divided between the three primary colours. Thus, for the same sensor density, there is a difference in resolution between monochrome and colour CCTV cameras: monochrome CCTV cameras tend to have a higher resolution than colour CCTV cameras.
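The format figures in the table above are easy to sanity-check: on a 4:3 sensor, Pythagoras gives the diagonal, and dividing the nominal format size (1" = 25.4mm) by that diagonal reproduces the quoted ratios. A minimal Python sketch, using the dimensions from the table:

```python
import math

# Sensor active-area dimensions in mm and nominal format size in mm,
# taken from the table above.
sensors = {
    '1"':   (12.8, 9.6, 25.4),
    '2/3"': (8.8, 6.6, 25.4 * 2 / 3),
    '1/2"': (6.4, 4.8, 25.4 / 2),
    '1/3"': (4.8, 3.6, 25.4 / 3),
    '1/4"': (3.2, 2.4, 25.4 / 4),
}

for fmt, (h, v, nominal) in sensors.items():
    diagonal = math.hypot(h, v)   # diagonal of the 4:3 rectangle
    ratio = nominal / diagonal    # nominal format size vs real diagonal
    print(f'{fmt}: diagonal {diagonal:.1f}mm, ratio 1:{ratio:.2f}')
```

The computed diagonals come out a shade above the table's quoted values (16.0mm rather than 15.9mm for the 1" format), which is just rounding in the published dimensions; in every case the diagonal is close to 0.6 of the nominal format size, as the text states.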
Still cameras often use the number of pixels in the sensor as a measure of resolution. However, this is not a good method of defining resolution in video cameras. Sensor resolution gives a basic figure for the sensor itself, not for the eventual output signal. In many cases only a proportion of the pixels are actually used in the picture. If specifications mention 'active pixels' or 'effective pixels' rather than simply 'pixels', this gives greater assurance that all these pixels are part of the picture.

The camera's circuitry will also affect the sensor's resolution. Badly built circuitry will have a poor bandwidth that reduces the resolution provided by the sensor by the time the signal reaches the output. Having a good sensor and bad circuitry is a waste. CCTV camera resolution figures should always relate to the final output signal.

Resolution figures are sometimes given as vertical resolution. This is the number of active lines in the picture. All PAL based CCTV cameras are built around the PAL television system, with 625 lines per frame. Of these, 575 lines are active, so all PAL based CCTV cameras should be able to achieve a vertical resolution of 575 lines.

Most resolution figures define the horizontal resolution. This is a measure of the number of individual pixels per line the camera is able to resolve, and is measured in vertical lines. Horizontal resolution can never be higher than the sensor's horizontal resolution, and is often lower, due to bandwidth limitations of the circuitry between the sensor and the output. Horizontal resolution and bandwidth are related by the equation:

    Bandwidth = 1 / Period

Each horizontal line lasts about 50µs (more exactly, the active line is 52µs). The pixels, or vertical lines, are divided up into this 50µs. The period is one clock cycle, producing two vertical lines, one black, one white. Therefore:

    Period = (50 × 10⁻⁶) / (Lines / 2) = (1 × 10⁻⁴) / Lines

Combining these two equations gives the bandwidth:

    Bandwidth = 1 / ((1 × 10⁻⁴) / Lines) = Lines × 10⁴

These equations boil down to a very simple rule: if the number of lines or pixels is measured in hundreds, and the bandwidth in MHz, the two are equal, i.e. 400 vertical lines = 4MHz bandwidth, 600 lines = 6MHz bandwidth. This is a rough approximation, as the exact PAL active line duration is 52µs, not 50µs.

Bandwidth, probably more than any other parameter, is the figure that is most difficult to achieve. Bandwidth costs money, and separates the good cameras from the bad ones. For square pixels the horizontal resolution would need to be 768 vertical lines, or pixels, which requires almost 8MHz of bandwidth! No CCTV camera can achieve this. Cameras achieving 600 vertical lines are considered good quality.

SNR

A camera's SNR is found by comparing the amount of video signal to the amount of noise, in decibels, with the equation:

    SNR = 20 log₁₀ (video / noise) dB

As a guide, an SNR of about 20dB is poor and is probably not viewable. 30dB will give a barely distinguishable image. 50dB is acceptable and 60dB good. As a ratio of video signal to noise, 20dB is 10:1, and 60dB is 1000:1.

Sensitivity

Sensitivity is a measurement of how much signal the camera produces for a certain amount of light. Sensitivity can be measured as the minimum amount of light that will give a recognisable picture, and is sometimes called 'minimum illumination'. Figures below 10 lux should be possible for standard CCTV cameras. Although this method provides an easy guide for CCTV planners and installers, it is a highly subjective measurement: what is a recognisable picture to one person may be unrecognisable to another.

Professional and broadcast cameras use a different, more quantifiable method for measuring sensitivity. The camera is pointed towards a known light source, often a 2000 lux source at 3200K colour temperature. The iris is then closed until the output is exactly 700mV.
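The two rules of thumb above — lines in hundreds equals bandwidth in MHz, and the dB scale for SNR — are easy to verify numerically. A minimal Python sketch (the function names are illustrative, not from any CCTV standard):

```python
import math

def bandwidth_hz(tv_lines, line_duration_us=50):
    """Bandwidth needed to resolve `tv_lines` alternating black/white
    vertical lines across one TV line: one clock period covers two
    vertical lines, as described in the text."""
    period_s = (line_duration_us * 1e-6) / (tv_lines / 2)
    return 1.0 / period_s

def snr_db(video, noise):
    """Video signal-to-noise ratio expressed in decibels."""
    return 20 * math.log10(video / noise)

print(bandwidth_hz(400) / 1e6)   # 4.0  -> 400 lines needs 4MHz
print(bandwidth_hz(600) / 1e6)   # 6.0  -> 600 lines needs 6MHz
print(bandwidth_hz(768) / 1e6)   # 7.68 -> 'almost 8MHz' for square pixels
print(snr_db(1000, 1))           # 60.0 -> a 1000:1 ratio is 60dB
print(snr_db(10, 1))             # 20.0 -> a 10:1 ratio is 20dB
```

Using the exact 52µs active line (`line_duration_us=52`) scales the figures down slightly; 400 lines then needs about 3.85MHz.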
Thus a reasonably sensitive camera may be rated f11 at 2000 lux, whereas a less sensitive camera may be f8 at 2000 lux (a higher f number at the same output means less light was needed). CCTV camera specifications are often not so consistent: different lux levels are specified. In the case of low light and night cameras, normal colour temperatures are meaningless, because the camera is not designed to be lit with standard 3200K light! These cameras often
specify the minimum illumination sensitivity, and should quote figures very much less than 1 lux.

Dome camera manufacturers specify sensitivity with the dome removed, because the figure is better than with it fitted. Some give figures with the dome fitted as well. The camera would normally be used with the dome fitted, and this needs to be remembered: dome cameras need to be more sensitive than other cameras if they are to overcome the losses through the dome itself.

Cameras with AGC

CCTV cameras with automatic gain control (AGC) add another complication to the specifications. Manufacturers will quote sensitivity figures with AGC switched on. However, they will generally quote SNR figures with the AGC switched off. The reason for this is obvious: it makes the figures look better!

Output formats

CCTV cameras use many different video output formats, from the simple analogue composite output fitted to most cameras, through the analogue Y-C output format and digital formats of one kind or another, to the direct computer network outputs used by some of the latest cameras. Specifications always show the SNR, sensitivity, etc. from the best output. The most common output connection people use is the analogue composite output. Many cameras have it fitted and it is a simple connection. However, it is also the worst quality output. Some cameras have a component output, the so-called Y-C output. This provides higher quality but is more difficult to connect. Some newer CCTV cameras have computer network connections. These cameras convert the video into a compressed data stream which can be sent down a network cable. Many use the JPEG format; some use the MPEG format, which can be modified and set up to give good quality at low data rates.

Camera mounting

Most general purpose cameras have a screw hole underneath them to secure them to a tripod or bracket.
This is the same as is used by many professional still cameras, and is based on the ¼” Whitworth thread, with 20 threads per inch.

Enclosure types

Most CCTV enclosures quote conformance to the National Electrical Manufacturers Association (NEMA) standards. These are American standards but are often quoted in manufacturers' specifications. IEC Publication 60529 also specifies enclosure types; these are sometimes used in specifications, referred to simply as IP numbers.

Type  Purpose                                                       Usage                                                                                                            IP No
1     Indoor. General.                                              Accidental contact prevention. Falling dust/dirt.                                                                10
2     Indoor. Light water protection.                               As Type 1 with protection from falling & light splashing non-corrosive liquid.                                   11
3     Indoor/outdoor. Light water protection.                       As Type 2 with protection from sleet & ice. Dust & rain tight.                                                   54
3R    Indoor/outdoor. Light water & ice protection.                 As Type 3 with further protection from ice build-up.                                                             14
3S    Indoor/outdoor. Light water & heavy ice protection.           As Type 3 but can still operate with heavy ice build-up.                                                         54
4     Indoor/outdoor. Heavy water protection.                       As Type 3 with protection from falling, splashing and hose fed water, and condensation.                          56
4X    Indoor/outdoor. Heavy water protection. Corrosion resistant.  As Type 4 with corrosion protection.                                                                             56
5     Indoor. Light industrial.                                     Protection from lint, dust & dirt, light splashing, dripping, seepage & condensation of non-corrosive liquids.   52
6     Indoor/outdoor. Light submersion.                             As 3R with protection from limited water submersion.                                                             67
6P    Indoor/outdoor. Prolonged submersion.                         As 3R with protection from prolonged water submersion.                                                           67
7     Indoor. Hazardous conditions.                                 Protection from light explosions, hazardous dust, pressure differentials, acetylene, hydrogen, various hydro-carbons.  -
8     Indoor/outdoor. Hazardous conditions.                         Protection from light explosions, hazardous dust, pressure differentials, acetylene, hydrogen, various hydro-carbons.  -
9     Indoor. Hazardous conditions.                                 Protection from light explosions, hazardous dust, pressure differentials, metal dust, carbon dust, grain dust, fibres.  -
10    Indoor/outdoor. Hazardous conditions.                         Mine Safety & Health Administration. Protection from methane and coal dust.                                       -
12    Indoor. Light industrial.                                     As 5 but with oil protection & no knock-outs.                                                                    52
12K   Indoor. Light industrial.                                     As 5 but with oil protection & no knock-outs.                                                                    52
13    Indoor. Heavy industrial.                                     As 12 with heavier protection.                                                                                   54

NEMA Type 4 is a popular enclosure type for many outdoor CCTV cameras. Some attain NEMA Type 6 or 6P.
CCTV lenses

CCTV lenses are highly functional, and very much built to purpose. They are simpler than the lenses of professional and broadcast video and still cameras, and are of a quality comparable with domestic and consumer cameras and camcorders.

General purpose CCTV cameras were traditionally based on the 1” video tube. Lenses were mounted to the camera with a C mount (cine mount: 1” (25.4mm) diameter, 32 threads/inch; back flange to sensor 0.69” (17.526mm)). Later general purpose CCTV cameras have the more compact CS lens mount. This mount is exactly the same as the C mount but with a lens to sensor distance reduced to 12.5mm.

Many new CCTV cameras are now supplied with fixed lenses. This reduces cost, is simpler for the system designer to work with and install, and allows the camera manufacturer to integrate the lens and camera more closely, adding features that are only possible if the lens and camera are fixed together. Some smaller CCTV cameras are simply too small to allow the lens to be removed. Pin-hole cameras and web cameras often have only a simple single convex lens.

Choosing CCTV lenses

Assume you have a general purpose CCTV camera that has been delivered without a lens. What lens do you fit to it? What criteria do you have to bear in mind when choosing one?

Focal length

CCTV lenses are available either with a fixed focal length or with a focal length that can be varied. Fixed focal length lenses, sometimes called prime lenses, are available in a number of different focal lengths, from fish eye and wide angle, through normal, to telephoto and super telephoto.

Wide angle lenses

Wide angle lenses see a wider angle of view than the human eye. Although they cover a greater area, it is difficult to make out detail. The image will also look distorted, with near objects appearing to be very near and far objects very far. Fish eye lenses are super wide angle. The angle of view may be over 180 degrees, and image distortion is so great that the image becomes circular.

Normal lenses

A normal, or standard, lens produces about the same scene on the monitor as the human eye sees. Geometry and angle of view are similar to the human eye, although the human eye actually has a very odd view of the world that the brain sorts out for us. The normal
focal length is about the same as the diagonal distance across the sensor, so it varies from one camera format to another.

Telephoto lenses

Telephoto lenses have a narrower angle of view than the human eye. They magnify the scene. They also compress depth, with near objects and far objects appearing closer to each other. Super telephoto lenses allow you to see detail from a long distance. The angle of view is very small, and they are difficult to set up and maintain in the correct position. The slightest movement can push them off target.

The table below shows the angles of view and focal lengths (in mm) for the various lens types, for all 5 common CCTV formats.

Lens type            Angle (deg)   1/4"      1/3"     1/2"      2/3"      1"
Fish eye             > 100         < 1.5     < 2      < 2.5     < 3       < 5
Wide angle           40 - 100      1.5 - 5   2 - 7    2.5 - 9   3 - 12    5 - 17
Normal or standard   ~ 40          ~ 5       ~ 7      ~ 9       ~ 12      ~ 17
Telephoto            8 - 40        5 - 24    7 - 34   9 - 45    12 - 62   17 - 90
Super telephoto      < 8           > 24      > 34     > 45      > 62      > 90

Zoom and vari-focal lenses

Variable focal length lenses fall into two groups. The motorised lenses are commonly referred to as zoom lenses, just as all variable focal length lenses are in still photography and in domestic and professional camcorders. Manual variable focal length CCTV lenses are commonly referred to as vari-focal lenses, to differentiate them from the motorised ones. The focal length can be set up during installation. Once set up, they effectively become fixed focal length lenses to the operator, because they cannot be altered remotely.

Format

The next thing to consider is the lens and camera format. Older CCTV cameras were based on the 1” tube, and all lenses were matched to these sensors. As tubes were replaced by CCD sensors, camera designs became smaller. The first sensors copied the 1” tubes, allowing the same lenses to be used. Later, more compact CCTV cameras were designed with smaller sensors, which required new lenses to match. We now have 5 different CCTV lens and camera formats. It is important to match each camera to the correct lens.
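The "normal ≈ 40 degrees" column of the table can be cross-checked with the standard angle-of-view relation from basic optics (not given in this text): angle = 2·arctan(sensor width / 2·focal length). A short sketch, using the sensor widths from the format table and the "normal" focal lengths above:

```python
import math

def angle_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view from the standard thin-lens relation."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Sensor width (mm) and the table's 'normal' focal length (mm) per format.
normals = {'1/4"': (3.2, 5), '1/3"': (4.8, 7), '1/2"': (6.4, 9),
           '2/3"': (8.8, 12), '1"': (12.8, 17)}

for fmt, (width, focal) in normals.items():
    # Each format lands within a few degrees of the ~40 quoted
    # for a 'normal' lens.
    print(fmt, round(angle_of_view_deg(width, focal)))
```

The computed values run from about 36 degrees for the 1/4" format to about 41 degrees for the 1" format, consistent with the table's rounded figures.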
Figure 113: Good camera and lens combinations. A 1" lens on a 1" camera, and a 1/3" lens on a 1/3" camera: in each case the lens focuses the image onto the whole sensor.

Each sensor should have a matched lens fitted. If a 1/3” lens is used on a 1” camera, the lens will try to focus the whole image onto a small part of the 1” sensor. The rest of the sensor will pick up nothing. Whether you see a rectangular or circular image in the centre of the picture depends on whether the lens itself has an internal rectangular mask.

Figure 114: Poor camera and lens combinations. A 1/3" lens on a 1" camera, and a 1" lens on a 1/3" camera.

However, if a 1” format lens is fitted to a 1/3” format camera it will still work, although the optics will be incorrect. The lens will be trying to
project the image onto a 1” sensor. Although the image will be in focus, the sensor will only pick up the central portion of the image. It will appear as though you had fitted a lens with a longer focal length. Put another way, it will be as if you had fitted a telephoto lens onto the camera.

Lens mounts

An important parameter to consider is the lens mount. Most CCTV cameras with removable lenses have a C or CS mount. C mount (cine mount) was the original mount for CCTV cameras. CS (cine small) is a newer mount intended for more compact designs. Both the C and CS mount are a 1” diameter, 32 threads per inch screw. The only difference is the distance from the back of the lens to the sensor: this back flange to sensor distance is 17.526mm on the C mount and 12.5mm on the CS mount, a difference of about 5mm.

Figure 115: C & CS lens mount combinations. Good (in focus): a CS lens on a CS camera (12.5mm), a C lens on a C camera (17.526mm), and a C lens with a 5mm adaptor on a CS camera. Bad (out of focus): a CS lens on a C camera, and a C lens on a CS camera without an adaptor.

If the camera is a C mount you can only fit C mount lenses. If it is a CS mount, you can fit either a C or a CS lens. However, if you fit a C lens to a CS mount camera you must remember to fit a C-CS adaptor. This is
really nothing more than a threaded ring that pushes the C lens away from the camera by 5mm, so that the back flange to sensor distance is the same as that of a C mount. It is important to remember that some lenses protrude behind the back flange. It is therefore possible to damage either the lens or the camera if you try to screw a C mount lens onto a CS mount camera without an adaptor.

Ultra miniature lens mounts

Mounts used on smaller CCTV cameras include the M10.5 x 0.5mm and M15.5 x 0.5mm threaded lens mounts. These offer a very small mount and are popular in instrumentation CCTV and “lipstick” cameras.

Iris and aperture control

What is the iris? The iris is a mechanical device that varies the size of a hole somewhere inside the lens. The hole itself is called the aperture. The iris allows you to control the amount of light through the lens. The reason for an iris is that camera sensors have a certain range of sensitivity. With too little light, detail in the shadows and gloomy areas is lost: the whole area of the image turns black. With too much light, detail in bright areas tends to be burnt out: the whole area turns white and may spread into other areas of the image. It is important to adjust the iris so that there is reasonable detail across the whole image. (Of course, if you can also control the lighting, this will help.)

Aperture and depth of field

The iris also adjusts the depth of field. This is an important and often forgotten aspect of the iris. Installers often open the iris as far as possible, to create as bright an image as possible, and wonder why it is so difficult to focus. A wide aperture gives a bright image and a narrow depth of field: focussing is more difficult. A narrow aperture gives a dark image and a wide depth of field: focussing is easier.

f stop numbers

The aperture is defined as an f number. The lower the number, the larger the hole in the lens and the more light gets through. The numbers are standardised as “stop” numbers. The standard numbers start at 1 and each stop is 1.4 times the last: 1.0, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, 45 and so on. Each f stop lets twice the amount of light through as the next f stop up the scale. The difference between f1, f1.4 and f2 is much greater than it is between high f numbers, so ½ stops are often used close to f1, like f1.2 and f1.8.
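The √2 progression, and the rule that each stop halves the light, both follow from light gathering scaling as 1/f². A minimal Python sketch (function names are illustrative); note that the conventional engraved series rounds the computed values slightly (5.66 → 5.6, 11.3 → 11):

```python
import math

def f_stop_series(stops, start=1.0):
    """Successive full stops: each f number is sqrt(2) times the last."""
    return [round(start * math.sqrt(2) ** i, 1) for i in range(stops)]

def relative_light(f_a, f_b):
    """How many times more light an aperture of f_a passes than f_b
    (light gathered scales as 1/f^2)."""
    return (f_b / f_a) ** 2

print(f_stop_series(8))          # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3]
print(relative_light(1.4, 2.0))  # ~2.04: one stop is roughly twice the light
print(relative_light(8, 11))     # ~1.89: f8 passes nearly twice the light of f11
```

This is also why the f11-at-2000-lux camera mentioned under Sensitivity is the more sensitive one: it produces full output on roughly half the light of the f8 camera, about one stop less.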
All lenses are defined by the lowest f number possible. The higher the quality, the lower this number is. Hence a 9mm f1.2 lens is a better, and probably more expensive, lens than a 9mm f1.8 lens. The f1.2 lens will have larger lens elements to allow more light through when the iris is fully opened. The f number is also mathematically a function of the focal length of the lens, so longer focal length lenses have higher f number ranges than shorter focal lengths. Put another way, it is more difficult to get a lot of light through telephoto and super telephoto lenses.

In broadcast and still cameras the iris closes to apertures of about f32. This represents a hole only about 2-3mm in diameter. CCTV cameras often specify minimum apertures far smaller than this: f360, or even higher than f1000, may be used. These are necessary for ultra sensitive night vision cameras if they are also to be used in daylight.

CCTV lens iris control

A CCTV lens iris may be manual or automatic. Manual irises are set during installation and cannot be altered by the operator afterwards. They are good for indoor use where there is little change in lighting conditions during the day (or night). Automatic irises are motorised. There are two types of automatic iris control: video servo and DC servo. With video servo iris control, the video signal is sent to the lens. Circuitry in the lens measures the video signal and adjusts the iris so that the video signal is the standard 1 volt. With DC servo iris control, the video signal is measured in the camera, and a simple DC signal is sent to the lens to control the iris. This type of control is sometimes called galvo control, and the lenses are called galvanometric lenses. Video servo makes the lens a little more expensive than DC servo, and vice versa for the camera, although, in practice, most cameras have both video and DC servo outputs.

There has been some confusion in the past about the iris control connection between the camera and lens. For a while there were many different connector types. Camera manufacturers would often provide a plug that installers could fit onto the end of the lens cable before fitting it to the camera. Some camera manufacturers opted to fit simple screw terminals, to make it as easy as possible to fit any lens. However, an increasing number of camera and lens manufacturers are opting for a standard 4 pin square plug for iris control. The connector is often called the Panasonic connector, or the Hirose connector after the Japanese Hirose Electric company. The pin assignments are:

Pin   DC servo        Video servo
1     Control -       +9v power
2     Control +       -
3     Drive +         Video
4     Drive - (Gnd)   Gnd
Camera sensor auto-iris

Some cameras have sensors that can assist the lens iris. The sensor has an electronic shutter which can be used just like an iris. This is explained in more detail on page 113.

Lens filters

CCTV lenses are sometimes fitted with a filter. These are used to correct for colour imbalance, or to protect the camera from possible damage. They include neutral density and neutral density spot filters, coloured filters and polarising filters. Special effect filters are not used in CCTV cameras; these would detract from the clarity of the image and almost certainly go against the purpose of the camera. Filters are covered on page 91.

CCTV switchers and control stations

A CCTV system may be designed with just one camera and one monitor. This is popular for instrumentation, machine control and remote monitoring in hazardous environments. However many CCTV systems are devised for security and surveillance, and in these scenarios there are likely to be many cameras involved.

The central control room could have one monitor fitted for each camera. Some systems are designed this way: it allows an operator to view every camera all the time. However if there are a lot of cameras, a single operator will find it difficult to keep an eye on all the monitors at once. If the site under surveillance has little activity, there may be little need to have a monitor on all the time for every camera. There is also the cost, which becomes a problem if many monitors need to be purchased and maintained.

The solution is to view many cameras on just one monitor. This can be done in two ways. Either you can switch the monitor to show a different camera, or you can squash the output from many cameras and fit them all onto one monitor screen as a mosaic.

Camera switching systems

By far the most common way of sharing one monitor amongst many cameras is by using a series of switches to select the camera you want to look at.
Switching monitor

The simplest way of doing this is to have a simple switch in the monitor itself. The monitor has a row of push button switches on its front panel. Pressing one of these buttons connects that camera to the monitor screen. Switching monitors can be used for up to about 8 cameras.

283 Sony Broadcast & Professional Europe
Simple video switch

This system can be improved by putting the switch in a separate box. The switch box has many inputs, one for each camera, and a single output for the monitor. Operation is similar to the switching monitor, but this modular arrangement allows a system to be built with more cameras, and lends itself better to improvement and expansion later on.

Remote video control

Increasing the complexity a little more, systems are available that separate the video signals from the control box. Video signals do not go through the controller's switches, or indeed through the controller's push button panel at all. The push button panel simply sends a control signal to a separate box that has all the video connections and the video switchgear. This method removes the video from the controller's panel, making it easier to route cables and making the controller's area much tidier. Quality tends to be better because the video signals are better screened.

Computer controlled switching

Extending this idea to its logical conclusion, the control link between the controller's panel and the video switching box becomes a computer network link. Both the panel and the box have network addresses. The control panel itself is replaced by a computer, and the push buttons become virtual buttons on the computer monitor.

This approach provides the ultimate flexibility. Now that the control is over a standard computer network, the control computer can be a very long distance from the video switching box. The system can also be designed with many control computers, and each computer may be given different levels of access to the system.

Camera control systems

CCTV systems often have cameras with remote zoom, pan and tilt capability. Control signals must be sent from the control station to all those cameras that need them. Zoom, pan and tilt are normally controlled by a joystick on the control panel.
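As an illustration of computer controlled switching, the sketch below models a network video switching box as a simple routing table. The class and method names here are invented for this example; real systems use manufacturers' own control protocols over the network link.

```python
class VideoSwitch:
    """Minimal model of a video switching box: routes one of many
    camera inputs to each monitor output."""

    def __init__(self, cameras, monitors):
        self.cameras = set(cameras)
        self.monitors = set(monitors)
        self.routes = {}  # monitor -> currently selected camera

    def select(self, camera, monitor):
        """Route a camera to a monitor, as a control panel button would."""
        if camera not in self.cameras:
            raise ValueError(f"unknown camera: {camera}")
        if monitor not in self.monitors:
            raise ValueError(f"unknown monitor: {monitor}")
        self.routes[monitor] = camera

    def source_of(self, monitor):
        """Which camera is currently shown on this monitor (or None)."""
        return self.routes.get(monitor)

# One control computer sharing 8 cameras across 2 monitors:
switch = VideoSwitch(cameras=range(1, 9), monitors=["A", "B"])
switch.select(camera=3, monitor="A")
switch.select(camera=7, monitor="B")
```

Because the control is just data, it is easy to see how several control computers, each with different access rights, could operate the same switching box over the network.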
The controller could have one joystick for every camera. However, if there are many cameras with zoom, pan and tilt capability, this could result in a control panel covered in joysticks. So it is logical to provide one joystick that controls all the cameras.

Linking camera motion control selection to camera viewing selection

Control signals are not sent to all the cameras at the same time. It is logical to only control the camera that the operator has selected to look through.

Switcher characteristics

CCTV switchers have a number of important characteristics that define their quality and suitability.
Bandwidth

The quality of CCTV equipment is often defined by its bandwidth. 'Width' suggests a difference between a lower frequency and an upper frequency. However video equipment is easily able to operate down to very low frequencies, so we assume that the lower frequency is in fact zero. The only interesting frequency is then the upper limiting frequency, and the upper frequency limit is the same as the bandwidth.

All pieces of video equipment are able to transmit a certain range of frequencies. Relatively low frequencies, up to about 1MHz, are easy to process and transmit, and video equipment should comfortably handle frequencies of several MHz. However, above about 5MHz, the higher the frequency the greater the losses. There is no sudden loss with increasing frequency: the signal power gradually drops as the frequency increases. So at what point do you decide enough is enough?

A CCTV switcher's bandwidth is defined as the frequency at which the signal power has dropped to half its level. This is the same as a 3dB drop (or a -3dB gain) in power. Some specifications quote the bandwidth or frequency response as the 3dB point.

Figure 116 Bandwidth (power against frequency, showing the -3dB point)

Some specifications quote specific drops in power at specific frequencies. This is not an ideal method as it makes it more difficult to compare with other specifications. Indeed, this may be done specifically to hide a poor bandwidth.

Signal to noise ratio (SNR)

This is another important characteristic of any CCTV processing equipment. There is always a certain amount of noise contained within video signals.
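The claim that half power corresponds to a 3dB drop can be checked directly. This short Python sketch is added for illustration and is not part of the original text:

```python
import math

def power_gain_db(p_out, p_in):
    """Gain in decibels between two power levels: 10 * log10(Pout / Pin)."""
    return 10 * math.log10(p_out / p_in)

# At the bandwidth limit the output power has fallen to half the input,
# which works out at about -3.01 dB:
half_power_gain = power_gain_db(0.5, 1.0)
```

In practice the "-3dB point" is simply the frequency at which a swept test signal measures this gain.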
The SNR is defined as the ratio of the video signal power to the noise power. If the video signal is properly set up at 0.7V, it is easy to find out what the actual noise level is.

CCTV over IP

Character and shape recognition
Part 21 Numbers & equations

Decibels

A measure of relative power. First used to measure audio power as Bels. The Bel is now seldom used; the decibel is far more common. 1B = 10dB. Decibels are now used to measure signal power in many application areas.

The Bel is a logarithmic ratio defined as:

    B = log10(P1 / P2)

where P1 and P2 are the two power levels being compared. Therefore decibels can be found by the equation:

    dB = 10 log10(P1 / P2)

Decibels & absolute sound levels

Decibels are used as a measure of absolute sound levels. However the decibel is a ratio, not an absolute quantity, so it needs a reference level. The threshold of hearing, the lowest level of sound that the human ear can hear, is used as P2 in the equation above. The table below shows audio levels in dB.

dB        Sound level
150-160   Eardrum perforation. Space shuttle taking off.
140-150   Jet fighter taking off.
130-140   Threshold of pain.
120-130   Sheet metal rivet gun.
110-120   Rock concert, on stage. Close thunder clap.
100-110   Busy motorway underpass.
90-100    Middle of orchestra playing 1812 overture.
80-90     Busy street traffic or motorway hard shoulder. Vacuum cleaner.
70-80     Average factory.
60-70     Department store. Normal close conversation.
50-60     Average office.
40-50     Quiet street. Average household. Mosquito.
30-40     Soft music. Average fridge.
20-30     Country garden. Babbling brook.
10-20     Rustling leaves. Quiet whisper.
0-10      Rustling leaves.
0         Threshold of hearing. Perceived silence.
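The two equations above can be written directly in code. This Python sketch is added for illustration:

```python
import math

def bels(p1, p2):
    """Bels: log10 of the ratio of two power levels."""
    return math.log10(p1 / p2)

def decibels(p1, p2):
    """Decibels: ten times the Bel value, i.e. 10 * log10(P1 / P2)."""
    return 10 * bels(p1, p2)

# A sound with 100 times the power of the threshold of hearing
# (P2 = threshold) measures 2 Bels, or 20 dB:
level_db = decibels(100, 1)
```

Note that doubling the power adds only about 3dB, which is why the sound-level table above spans such an enormous range of actual powers.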
Decibel as a direct ratio

It is sometimes easy to forget exactly what signal ratio gives a specific decibel level. Figure 117 shows direct power ratios compared to decibel quantities:

    Ratio 10000 = 40dB
    Ratio 1000  = 30dB
    Ratio 100   = 20dB
    Ratio 10    = 10dB
    Ratio 1     = 0dB
    Ratio 0.5   = -3.01dB

Figure 117 Decibel to direct ratio relationship

You can see that a ratio of 1 is exactly 0dB, as one would expect. A ratio of 10 is 10dB, and a ratio of ½ is about -3dB. Above 10dB the ratio to dB relationship climbs at a dramatic rate: at 20dB there is a direct ratio of 100, and at 40dB a direct ratio of 10,000. CD has a 96dB signal to noise ratio; as a direct ratio this is about 4,000,000,000!

Decibels are used because the human ear's response is logarithmic, i.e. we are very sensitive to the slightest sound, but can still handle relatively loud sounds without damaging our ears.

Signal to noise ratio

The decibel can be used as a measure of signal to noise ratio. Using the equation above, P1 is the signal power and P2 is the noise power, thus:

    dB = 10 log10(Signal / Noise)
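Inverting the decibel equation gives the direct power ratio as 10 raised to the power dB/10. The Python sketch below (added for illustration) reproduces the figures quoted above:

```python
def db_to_power_ratio(db):
    """Direct power ratio for a given decibel value: 10 ** (dB / 10)."""
    return 10 ** (db / 10)

ratio_20db = db_to_power_ratio(20)   # 100
ratio_40db = db_to_power_ratio(40)   # 10,000

# CD's 96dB signal to noise ratio as a direct ratio: roughly 4 billion.
cd_snr_ratio = db_to_power_ratio(96)
```

This is a useful sanity check when reading specifications: every extra 10dB means ten times the power ratio, so apparently small dB differences between products hide large real differences.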
Part 22 Things to do

- Find depth of field picture
- Finish the filter table
- Rewrite the Dichroic block chapter
- Sort out the CCD sensor chapter
- Sort out the VTR chapter
- Sort out timecode chapter