INDUSTRIAL TRAINING REPORT
DEPARTMENT OF ECE, AJCE
CHAPTER 1
JAIHIND TV PROFILE
Launched on 17 August 2007 by Congress President and UPA Chairperson Mrs. Sonia Gandhi
with the mission to meet the aspirations of the family and the nation, JAIHIND has
consistently remained at the forefront with innovative programming, setting new industry
benchmarks in news coverage and entertainment packaging. JAIHIND TV is a channel with a
multi-genre infotainment package for the whole family.
JAIHIND TV is a Malayalam channel owned and operated by M/s Bharat Broadcasting
Network Ltd., a subsidiary of Jaihind Communications Pvt. Ltd., a company registered
under the Companies Act 1956 with an initial investment of Rs. 33 crores. Jaihind TV's
registered office is located at Karimpanal Arcade, East Fort, Thiruvananthapuram.
Sri. Ramesh Chennithala, President of the Kerala Pradesh Congress Committee, is the
President of JAIHIND TV. Sri. M.M. Hassan, former Minister and senior leader of the
Indian National Congress Party, is the Managing Director; he is a popular figure
championing various public interest causes. Bharat Broadcasting Network Ltd is chaired by
Sri. Kunjukutty Aniyankunju, a prominent NRI businessman based in the UAE. Sri. Vijayan
Thomas, a renowned NRI business personality, is the chairman of Jaihind Communications Pvt
Ltd. The Board of Directors and investors are eminent NRI and resident business personalities.
JAIHIND TV is headed by Sri. K.P. Mohanan, a veteran journalist well versed in both
print and electronic media with professional experience spanning four decades. A Permanent
Fellow of the USA-based World Press Institute, Mr. Mohanan is the winner of the Rajiv
Gandhi Award for excellence in journalism.
With the declared motto "For The Family, For The Nation", JAIHIND TV is
committed to quality entertainment packages with a social and ethical binding, and to News
and Current Affairs programmes with development objectives. The programmes of the Channel
focus on upholding Democracy, Secularism and Nationalism. The Channel brings together
some of the finest talents in the industry and transcends stereotypical television.
Jaihind TV aims to be the voice of the global Malayalees and to address issues of
global relevance. A well-experienced and highly qualified professional team, a state-of-the-
art studio with multi-dimensional shoot facilities at the Kinfra Film and Video Park that has
fully digital production and post-production facilities, and a live news room in Trivandrum
with static and dynamic connectivity to all the districts in Kerala as well as to Delhi,
Mumbai, Chennai and Dubai, make Jaihind TV a media organization with a competitive edge.
CHAPTER 2
INTRODUCTION
2.1 WHAT'S IN A TELEVISION STATION?
"Photography is going to marry Miss Wireless, and heaven help everybody when they get
married. Life will be very complicated."
-- Marcus Adams, Society photographer, in the Observer, 1925.
2.2 THE TV STATION AS A WHOLE
The basic television system consists of equipment and people to operate this gear so
that we can produce TV programs. The stuff you'll find in a television station consists of (and
this list is not exhaustive!):
 one or more television cameras
 lighting, to see what we're shooting
 one or more audio consoles, along with sound control equipment, to manipulate the
sounds we generate with microphones, audio recorders and players, and other devices
 one or more videotape recorders, or other video recording technologies, in any of a
number of formats
 one or more video switchers, to select video sources, perform basic transitions
between those sources, and to create special effects
 EFP (electronic field production) shooting and production equipment, and storage
facilities
 perhaps a post-production editing facility, to assemble videotaped segments
together
 some special effects: either visual or aural; electronic, optical or mechanical
2.3 THE STUDIO INFRASTRUCTURE
Whether you work in a traditional studio or a new "studioless" environment,
the same principles apply. You'll still need:
 an intercom system (with headset stations for floor directors)
 floor monitors (video and audio)
 electrical outlets for both regular AC and lighting equipment
In addition, your control room or control centre will have:
 various program and preview monitors
 program audio speakers
 time of day clock
 video switcher
 an audio control room, with audio console, cart machines, turntable and/or CD
player, reel to reel and/or cassette and/or DAT recorder and/or other digital recording
and playback technology, and auxiliary audio enhancement equipment
CHAPTER 3
WORK FLOW

Fig.3.1: WORKING OF CHANNEL (block diagram: program studio and news studio feeding the PCR)

3.1 STUDIO
A television studio is an installation in which video productions take place, either
for the recording of live television to video tape, or for the acquisition of raw footage for
post-production. The design of a studio is similar to, and derived from, movie studios, with a
few amendments for the special requirements of television production. A professional
television studio generally has several rooms, which are kept separate for noise and
practicality reasons. These rooms are connected via intercom, and personnel will be divided
among these workplaces.
The studio floor is the actual stage on which the actions that will be recorded take place.
A studio floor has the following characteristics and installations:
 decoration and/or sets
 professional video cameras (sometimes one, usually several) on pedestals
 microphones
 stage lighting rigs and the associated controlling equipment.
 several video monitors for visual feedback from the production control room (PCR)
 a small public address system for communication
 a glass window between PCR and studio floor for direct visual contact is usually
desired, but not always possible
Various types of cameras used:
 SONY PD 170
 SONY DSR 400
 SONY D 50
 SONY D 55
3.2 PRODUCTION-CONTROL ROOM
The production control room (PCR) is the place in a television studio in which the
composition of the outgoing program takes place.
Facilities in a PCR include:
 A video monitor wall, with monitors for program, preview, VTRs, cameras, graphics and
other video sources.
 A vision mixer, a large control panel used to select the multiple-camera setup and
other various sources to be recorded or seen on air and, in many cases, in any video
monitors on the set. The term 'vision mixer' is primarily used in Europe, while the
term 'video switcher' is usually used in North America.
 A professional audio mixing console and other audio equipment such as effects
devices.
 A character generator (CG), which creates the majority of the names and full digital
on-screen graphics that are inserted into the program, in the lower third portion of the
television screen.
 Digital video effects, or DVE, for manipulation of video sources. In newer vision
mixers, the DVE is integrated into the vision mixer.
 A still store, or still frame, device for storage of graphics or other images (despite the
name, such devices are not necessarily limited to still images).
 The technical director's station, with waveform monitors, vectorscopes and the camera
control units (CCU) or remote control panels for the CCUs.
 VTRs for recording.
 Intercom and IFB equipment for communication with talent and television crew.
 A signal generator to lock all of the video equipment to a common reference that
requires color burst.

Fig.3.2: BLOCK DIAGRAM OF PCR (vision mixer / video switcher: FOR-A 700)
3.2.1 Recording format
 Jaihind TV uses the latest Sony MPEG IMX recording format for programme
production, to deliver excellent picture quality to viewers.
DV CAM Specifications
Tape: ME (Metal Evaporated)
Track Pitch: 15 micrometers
Track Width: 15 micrometers (10 micrometers on some early gear)
Tape Speed: 28.215 mm/sec
Record Time: Standard: 184 mins, MiniDV: 40 mins.
Compression: Intra-frame, 5:1 DVC-format DCT, 25 Mbps video data rate
Resolution & Sampling: 720x576, 4:2:0 (PAL), 720x480, 4:1:1 (NTSC)
Audio:
2 ch @ 48 kHz, 16 bits; 4 ch @ 32 kHz, 12 bits
Will accept 2 ch @ 44.1 kHz, 16 bits via 1394 I/O; locked.
3.2.2 IMX Format Specifications
IMX, also known as Betacam IMX or MPEG IMX, records SD NTSC and PAL video
using high-quality MPEG-2 compression.
a) Storage Medium
One of the features of the IMX format is that it is not restricted to a single media type.
IMX can be recorded on XDCAM, a Sony optical disc format, as well as the IMX tape
format.
IMX VTRs bridge the gap between conventional tape decks and modern computer editing
systems with the following features:
 Playback of older video formats such as Betacam SP, Betacam SX, and Digital Betacam.
These formats can be converted and output to MPEG IMX in real time.
Note: Not all IMX VTRs support playback and recording of all Betacam formats.
 IMX digital video file transfer via networking interfaces such as Ethernet and TCP/IP
protocols
b) Video Standard
IMX supports both SD NTSC and SD PAL.
c) Aspect Ratio
NTSC and PAL IMX both have an aspect ratio of 4:3.
d) Frame Dimensions, Number of Lines, and Resolution
IMX can store video at two possible resolutions: NTSC (525) and PAL (625). The
numbers refer to the number of analog lines of the corresponding video formats. However,
many of these analog lines are not used to store picture information. In Final Cut Pro, the
following frame dimensions are used:
 NTSC IMX: 720 pixels per line, 486 lines
 PAL IMX: 720 pixels per line, 576 lines
In both formats, standard definition rectangular pixels are used, just as with DV, DVD,
Digital Betacam, and other SD digital video formats.
e) Frame Rate
IMX supports NTSC and PAL frame rates of 29.97 fps and 25 fps, respectively.
f) Scanning Method
IMX supports interlaced recording.
g) Color Recording Method
IMX records a 4:2:2 Y′CbCr (component) digital video signal. Each sample (pixel)
has a resolution of 8 bits.
h) Data Rate and Video Compression
IMX uses I-frame-only MPEG-2 compression. IMX is a restricted version of MPEG-2
4:2:2 Profile @ ML. The official SMPTE designation is D10, as specified in SMPTE
standard 356M.
Three compression ratios are supported (a rough cross-check follows this list):
 30 Mbps: 5.6:1 compression
 40 Mbps: 4.2:1 compression
 50 Mbps: 3.3:1 compression
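As a rough cross-check of these figures (an illustrative sketch added here, not part of the original report), the ratios follow from dividing the uncompressed SD 4:2:2 studio data rate by the IMX bit rates. The sketch assumes 8-bit PAL 720x576 at 25 fps; the small differences from the quoted 5.6:1 and 4.2:1 values are likely due to rounding or overheads in the official figures.

    # Hypothetical sanity check of the IMX compression ratios listed above.
    # Assumes 8-bit 4:2:2 PAL SD video (720x576 active pixels, 25 fps).
    width, height, fps, bits = 720, 576, 25, 8

    # 4:2:2 sampling: one luma sample per pixel, plus two chroma samples
    # for every two pixels, i.e. 2 samples per pixel on average.
    samples_per_pixel = 2
    uncompressed_bps = width * height * samples_per_pixel * bits * fps  # ~166 Mbit/s

    for target_mbps in (30, 40, 50):
        ratio = uncompressed_bps / (target_mbps * 1e6)
        print(f"{target_mbps} Mbit/s -> about {ratio:.1f}:1")
    # Prints roughly 5.5:1, 4.1:1 and 3.3:1, close to the figures quoted above.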
i) Audio
IMX supports two audio channel configurations:
 Four audio channels, sampled at 48 kHz with 24 bits per sample
 Eight audio channels, sampled at 48 kHz with 16 bits per sample
j) Time code
IMX supports 30 and 25 fps time code.
3.3 EDIT SUITE

Fig.3.3: SCHEMATIC DIAGRAM OF EDIT SUITE

Non-linear editing is a video editing method which enables direct access to any video
frame in a digital video clip, without needing to play or scrub/shuttle through adjacent
footage to reach it, as was necessary with historical video tape linear editing systems. It is the
most natural approach when all assets are available as files on hard disks rather than
recordings on reels or tapes, while linear editing is related to the need to sequentially view a
film or read a tape to edit it. On the other hand, the NLE method is similar in concept to the
"cut and paste" technique used in film editing. However, with the adoption of non-linear
editing systems, the destructive act of cutting of film negatives is eliminated. Non-linear,
non-destructive editing methods began to appear with the introduction of digital video
technology. It can also be viewed as the audio/video equivalent of word processing, which is
why it is called desktop video editing in the consumer space.

Video and audio data are first captured to hard disks, video servers, or other digital
storage devices. The data are either recorded direct to disk or are imported from another
source. Once imported, the source material can be edited on a computer using any of a wide
range of video editing software.
In non-linear editing, the original source files are not lost or modified during editing.
Professional editing software records the decisions of the editor in an edit decision list (EDL)
which can be interchanged with other editing tools. Many generations and variations of the
original source files can exist without needing to store many different copies, allowing for
very flexible editing. It also makes it easy to change cuts and undo previous decisions simply
by editing the edit decision list (without having to have the actual film data duplicated).
Generation loss is also controlled, due to not having to repeatedly re-encode the data when
different effects are applied.
Compared to the linear method of tape-to-tape editing, non-linear editing offers the
flexibility of film editing, with random access and easy project organization. With the edit
decision lists, the editor can work on low-resolution copies of the video. This makes it
possible to edit both standard-definition broadcast quality and high definition broadcast
quality very quickly on normal PCs which do not have the power to do the full processing of
the huge full-quality high-resolution data in real-time.
A multimedia computer for non-linear editing of video will usually have a video
capture card to capture analog video and/or a FireWire connection to capture digital video
from a DV camera, with its video editing software. Various editing tasks can then be
performed on the imported video before it is exported to another medium, or MPEG encoded
for transfer to a DVD.
One of the best things about nonlinear editing is that the edit is instant - no splicing
and waiting for the glue to set as in film, and no having to play back the entire shot for its
full duration, as in videotape. One mouse click and it's done - on to the next edit. And the
shot can, of course, be placed absolutely anywhere, even in between two frames of a
previously laid-down shot. Another advantage of nonlinear editing can be its reduced VTR
cost. Obviously, to have instant access to all of your shots, you have to dump the raw footage
from the camera tapes into the nonlinear editing system, but this requires only one VTR (you
can use the same videotape machine to record the finished product afterwards, too). The
downside of this, however, is the cost of hard drive storage - many editors only dub over the
final takes of their shoot, not all of their raw footage.
3.4 COMPILER
Fig.3.4: LAYOUT OF COMPILER
3.4.1 Production
This part discusses the production of the video - not its creation in the movie maker,
but the stage at which the video is already finished. One of the most important decisions is
the choice of the medium on which it will be watched: whether the video is designed and
transmitted for viewing over the internet, or for television. Depending on one or the other,
the requirements will be different. First of all, regarding file size, the quality of the image on
a DVD or in a TV programme is better than the quality of an online transmission on a PC;
higher quality implies a larger size and vice versa. As a result, for online viewing a smaller
file is preferable for a smoother transmission.
A DVD movie can be around 4 gigabytes and hence would not be worth viewing over an
internet streaming connection. The term streaming means that the data arrives as a continuous
flow (without interruption): the user can listen or watch whenever they want. This technology
allows the data to be stored in a buffer from which it is listened to or viewed, making it
possible to play music or watch videos without having to download them first. There are now
software applications for listening to music via streaming; Spotify, for example, uses this
technology and needs only an internet connection and the program. Besides, a bigger file
needs more bandwidth to be transferred over the network, and the file must also be smaller
than the original, so a more powerful compression is needed to reduce the size of the movie
file. More compression means more processing and therefore more power consumption.
Furthermore, there are new algorithms that aim to reduce the size significantly while keeping
the quality loss small. Consequently, it is important to select the correct devices for the type
of transmission and the compression required. Before making the choice it is useful to
know which compression tools are needed. Here is a list of the different video coding
standards:
 MPEG-1: The original standard for audio and video compression. It provides video at a
resolution of 352x240 at 30 frames per second, which produces video quality slightly
below that of conventional VCR videos. It includes the Layer 3 (MP3) audio
compression format.
 MPEG-2: An audio and video standard for broadcast-quality television. It offers
resolutions of 720x480 and 1280x720 at 60 fps with CD-quality audio and matches most
TV standards, even HDTV. Its principal uses are DVDs, satellite TV services and digital
cable TV signals. MPEG-2 compression is able to reduce a 2-hour video to a few
gigabytes. While decompressing an MPEG-2 data stream does not need many computer
resources, encoding to MPEG-2 requires considerably more processing.
 MPEG-3: Designed for HDTV but superseded by MPEG-2.
 MPEG-4: A standard algorithm for graphics and video compression based on MPEG-1,
MPEG-2 and Apple QuickTime technology. MPEG-4 files are smaller than JPEG or
QuickTime files; they are therefore designed to transfer video and images through a
narrow bandwidth and to sustain different mixtures of video with text, graphics and 2D
or 3D animation layers.
 MPEG-7: Formally called the Multimedia Content Description Interface, it supplies a set
of tools for describing multimedia content. It is designed to be generic and not aimed at a
specific use.
 MPEG-21: Provides a Rights Expression Language (REL) and a Rights Data
Dictionary. It describes a standard that defines the description of content and the
processes for accessing, searching, storing and protecting its copyright, in contrast with
the other MPEG standards, which define compression coding methods. The above are
the base standards, but each one has specific parts depending on the use. Among these
types, the most important today are:
 MPEG-2
 MPEG-4 → particularly part number 10, which implements good video quality at
lower bit rates than the previous standards without increasing the complexity of the
design or the file size. Technically it is called MPEG-4 H.264 / AVC.
3.4.2 MPEG-2 (H.262)
MPEG-2 is a standard for "the generic coding of moving pictures and associated
audio information". It is an extension of the MPEG-1 international standard for the digital
compression of audio and video signals, created to serve broadcast formats at higher bit rates
than MPEG-1. Initially developed for the transmission of compressed television programs
via broadcast, cablecast and satellite, and subsequently adopted for DVD production and for
some online delivery systems, it defines a combination of lossy video compression and lossy
audio data compression suited to current storage media, like DVDs or Blu-ray, without a
bandwidth restriction.
The main characteristics are:
 New prediction modes of fields and frames for interlaced scanning.
 Improved quantification.
 The MPEG-2 transport stream permits the multiplexing of multiple programs.
 New intra-frame variable-length coding (VLC), in which the number of bits used for a
symbol depends on its probability: more probable symbols are assigned shorter codes.
Improved resilience to transmission errors.
 Uses the discrete cosine transform algorithm and motion compensation techniques for
compression.
 Provides for multichannel surround sound coding. MPEG-2 contains different
standard parts to suit different needs, as well as various levels and profiles.
3.4.3 MPEG-2 FUNDAMENTALS
Nowadays, a TV camera typically generates 25 pictures per second, i.e., a frame rate
of 25 Hz. For digital television the pictures must be digitized so that they can be processed
by a computer. An image is divided into two different signals: luminance (Y) and
chrominance (UV). Each image has one luma component and two chrominance components.
The television colour signal Red-Green-Blue (RGB) can be represented with these luma and
chrominance values. The chrominance bandwidth can be reduced relative to the luminance
signal without a noticeable influence on picture quality.
An image can also be described with a sampling notation (4:2:2, 4:2:0). These are types of
chroma sub-sampling relevant to the compression of an image, storing more luminance
detail than colour detail. The first number refers to the luminance part of the signal, the
following numbers refer to the chroma. In 4:2:2, luminance is sampled 4 times while each
chroma component is sampled twice at the same rate. Because the human eye is more
sensitive to brightness than to colour, chroma can be sampled less than luminance without
any perceptible difference (a small sub-sampling sketch follows the frame-type list below).
These signals are also partitioned into macroblocks, which are the basic unit within an
image. A macroblock is formed by several blocks of pixels; depending on the codec, the
block will be bigger or smaller, normally a multiple of 4. MPEG-2 coding creates the data
flow from three different frame types: intra-coded frames (I-frames), predictive-coded
frames (P-frames), and bidirectional-predictive-coded frames (B-frames), arranged in a
"GOP structure" (Group of Pictures structure).
 I-frame: Coded pictures without reference to others. It is compressed directly from an
original frame.
 P-frame: Uses the previous I-frame or P-frame for motion compensation. Each block
can be predicted or intra-coded.
 B-frame: Uses the previous and the following I- or P-pictures and offers the highest
compression. One block in a B-picture can be predicted or intra-coded in a forward,
backward or bidirectional way. A typical GOP structure could be: B1 B2 I3 B4 B5 P6 B7
B8 P9 B10 B11 P12. I-frames code spatial redundancy while B-frames and P-frames code
temporal redundancy.
MPEG-2 also provides interlaced scanning, which is a method of scanning an image. The
aim is to remove flicker without increasing the bandwidth, by showing double the number
of pictures per second, each containing half of the lines - for example, producing 50 fields
per second from a frame rate of 25 Hz. The scan divides a video frame into two fields,
separating the horizontal lines into odd lines and even lines. It enhances motion perception
for the viewer. Depending on the number of lines and the frame rate, systems are divided
into:
 PAL / SECAM: 25 frames per second, 625 lines per frame. Used in Europe.
 NTSC: 30 frames per second, 525 lines per frame. Used in North America.
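As a small illustration of the 4:2:2 and 4:2:0 notation described above (a sketch on synthetic data, added here and not taken from the report), the chroma planes are simply stored at a lower sample rate than the luma plane:

    import numpy as np

    # Synthetic 8-bit Y'CbCr frame (PAL SD dimensions), just random data.
    h, w = 576, 720
    rng = np.random.default_rng(0)
    Y  = rng.integers(16, 236, size=(h, w), dtype=np.uint8)
    Cb = rng.integers(16, 241, size=(h, w), dtype=np.uint8)
    Cr = rng.integers(16, 241, size=(h, w), dtype=np.uint8)

    # 4:2:2 - keep every other chroma sample horizontally (half the chroma columns).
    Cb_422, Cr_422 = Cb[:, ::2], Cr[:, ::2]

    # 4:2:0 - keep every other chroma sample both horizontally and vertically.
    Cb_420, Cr_420 = Cb[::2, ::2], Cr[::2, ::2]

    def samples(y, cb, cr):
        return y.size + cb.size + cr.size

    full = samples(Y, Cb, Cr)
    print("4:4:4 samples:", full)
    print("4:2:2 samples:", samples(Y, Cb_422, Cr_422), "(2/3 of full)")
    print("4:2:0 samples:", samples(Y, Cb_420, Cr_420), "(1/2 of full)")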
MPEG-2 encoding is organized into profiles. A profile is a "defined subset of the syntax
of the specification". Each profile defines a range of settings for different encoder options. As
most settings are not available or useful in all profiles, profiles are designed to suit particular
consumer requirements: a computer, a television or a mobile phone each need hardware
specific to their use, but each can be rated against a particular profile. An encoder is then
needed to carry out the compression.
3.4.4 MPEG-2 COMPRESSION BASICS
Spatial Redundancy:
A compression technique which consists of grouping pixels with
similar properties to minimize the duplication of data within each frame.
It involves analysing a picture to select and suppress redundant information, for
instance removing the frequencies that the human eye cannot perceive. To achieve this, a
mathematical tool is employed: the Discrete Cosine Transform (DCT).
Intra Frame DCT Coding:
The Discrete Cosine Transform (DCT) is a transform based on the Discrete Fourier
Transform with many applications in science and engineering, but it is mainly applied in
image compression algorithms. The DCT is employed to decrease the spatial redundancy of
the signals. The function has a good energy compaction property and so concentrates most of
the information in a few transformed coefficients. The signal is converted to a new domain in
which only a small number of coefficients contain most of the information while the rest
have negligible values; in the new domain the signal has a much more compact representation
and may be represented mainly by a few transform coefficients. The transform is independent
of the data: the algorithm is the same regardless of the data it is applied to, and the loss it
introduces is negligible. Because the DCT interprets the coefficients in terms of frequency,
it can exploit the maximum compression capacity. The result of applying the DCT to an 8x8
block is an 8x8 array of values ordered by frequency:
• Low frequencies correspond to the elements to which the human eye is most sensitive.
• High frequencies correspond to less perceptible components.
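A quick way to see the energy-compaction property described above is to apply an 8x8 DCT to a smooth block and count how many coefficients carry almost all of the energy. This is only an illustrative sketch using SciPy, not the transform implementation of any particular encoder:

    import numpy as np
    from scipy.fft import dctn

    # A smooth 8x8 block (a gentle horizontal gradient), typical of natural images.
    block = np.tile(np.linspace(100, 140, 8), (8, 1))

    # 2-D type-II DCT with orthonormal scaling, as used in image codecs.
    coeffs = dctn(block, norm="ortho")

    energy = coeffs ** 2
    total = energy.sum()
    # Sort coefficient energies in descending order and see how few are needed
    # to reach 99.9% of the total energy.
    sorted_energy = np.sort(energy.ravel())[::-1]
    needed = np.searchsorted(np.cumsum(sorted_energy), 0.999 * total) + 1
    print(f"{needed} of 64 coefficients hold 99.9% of the energy")
    # For a smooth block this prints a very small number (the DC term plus a
    # few low-frequency AC terms), which is exactly what the DCT exploits.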
Temporal Redundancy:
Temporal compression is achieved by looking at a succession of pictures.
Consider an object in an otherwise static scene: the picture contains all the information
required until there is movement, so it is not necessary to encode the picture again until
something changes. After that, it is not necessary to re-encode the entire picture but only the
part that contains the movement, because the rest of the scene is unchanged from the initial
picture. The technique that determines how much movement there is between two successive
pictures is motion compensated prediction.
A picture is therefore not treated in isolation, because it will probably be constructed by
prediction from a previous picture, or it may itself be used to create the next picture.
Motion Compensated Prediction:
Identifies the displacement of a given macroblock in the current frame with respect
to the position it had in the reference frame.
The steps are (a block-matching sketch follows this list):
 Search the reference frame for the macroblock that best matches the macroblock to be
encoded.
 If an identical macroblock is found, only the corresponding motion vector is encoded.
 Otherwise, the most similar macroblock (INTER) is chosen, and the motion vector is
encoded together with the prediction error.
 If there is no sufficiently similar block (INTRA), the block is encoded using only
spatial redundancy.
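The block-matching search in the steps above can be sketched as an exhaustive search minimising the sum of absolute differences (SAD) within a small window. This is a simplified illustration, not the actual search used by any particular MPEG-2 encoder:

    import numpy as np

    def best_motion_vector(ref, cur, top, left, block=16, search=7):
        """Find the (dy, dx) shift of the reference block that best matches the
        current block at (top, left), using the sum of absolute differences."""
        target = cur[top:top + block, left:left + block].astype(np.int32)
        best = (0, 0)
        best_sad = np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                    continue
                candidate = ref[y:y + block, x:x + block].astype(np.int32)
                sad = np.abs(target - candidate).sum()
                if sad < best_sad:
                    best_sad, best = sad, (dy, dx)
        return best, best_sad

    # Toy example: the "current" frame is the reference shifted right by 3 pixels.
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=3, axis=1)
    mv, sad = best_motion_vector(ref, cur, top=16, left=16)
    print("motion vector (dy, dx):", mv, "SAD:", sad)  # expect (0, -3) with SAD 0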
3.4.5 H.264 / MPEG-4 AVC
H.264 or MPEG-4 Part 10 defines a high-quality video compression codec developed
by the Video Coding Experts Group (VCEG) and the Moving Picture Experts Group (MPEG)
in order to create a standard capable of providing good image quality at rates substantially
lower than previous video standards such as MPEG-2, without increasing the complexity of
the design, since otherwise it would be impractical and expensive to implement. Another goal
proposed by its creators was to increase its scope, i.e., to allow the standard to be used in a
wide variety of networks and video applications, both high and low resolution, DVD storage,
etc.
In December 2001 the Joint Video Team (JVT) was formed, consisting of experts from
VCEG and MPEG, and it developed this standard, which was finalized in 2003. The ISO/IEC
(International Organization for Standardization / International Electrotechnical Commission)
and the ITU-T (International Telecommunication Union - Telecommunication Standardization
Sector) joined this project; the first is responsible for rules and standards focused on
manufacturing and the second focuses mainly on tariff issues. The latter planned to adopt the
standard under the name ITU-T H.264, while ISO/IEC wanted to name it MPEG-4 Part 10
Advanced Video Coding (AVC), hence the name of the standard. To define the first codec
they started by looking at the previous standards' algorithms and techniques, modifying them
or, where necessary, creating new ones:
 DCT structure in conjunction with the motion compensation of previous versions was
efficient enough so there was no need to make fundamental changes in its structure.
 Scalable Video Coding: An important advance because it allows each user, regardless
of the limitations of the device, to receive the best possible quality from a single
transmitted signal. This is possible because it provides a compressed stream of video
from which users take only what they need to get the best video quality allowed by the
technical limitations of their receiving equipment.
MPEG-4 has more complex algorithms and better features, giving a particular quality
improvement: it provides a higher compression rate than MPEG-2 for an equivalent
quality.
3.4.6 MAIN ADVANTAGES
For the MPEG-4 AVC the main important features are:
1. Provides almost DVD quality video, but uses lower bit rate so that it's feasible to transmit
digitized video streams in LAN, and also in WAN, where bandwidth is more critical, and
hard to guarantee.
2. Dramatically advances audio and video compression, enabling the distribution of content
and services from low bandwidths to high-definition quality across broadcast, broadband,
wireless and packaged media.
3. Provides a standardized framework for many other forms of media - including text,
pictures, animation, 2D and 3D objects - which can be presented in interactive and
personalized media experiences.
4. Supports the diversity of the future content market.
5. Offers a variety of so-called “profiles,” tool sets from the toolbox, useful for specific
applications, like in audio-video coding, simple visual or advanced simple visual profile, so
users need only implement the profiles that support the functionality required.
6. Uses DCT algorithm mixed with motion compensation. It clearly shows that MPEG4
wants to be a content-based representation standard independent of any specific coding
technology, bit rate, scene type of content, etc. This means it shows at the same time why and
how MPEG4 is different from previous moving pictures coding standards.
7. Low latency
The most important and relevant are:
1. Reduces the amount of storage needed
2. Increases the amount of time video can be stored
3. Reduces the network bandwidth used by the surveillance system
3.4.7 MAIN NEW FEATURES
 Type of image: Two extra frame types are added, apart from I, P and B: the SP
(Switching P) and SI (Switching I) frames, which allow switching from one video
stream to another by applying temporal or spatial prediction, with the possibility of
reconstructing exact sample values even when reference images different from those
used in the prediction process are employed.
 Motion compensation: With variable sizes of sub-blocks (16x8, 8x16, 8x8 pixels); the
8x8 size can be further partitioned into 8x4, 4x8 or 4x4 groups of pixels, providing
greater accuracy in the estimation.
 Transform: A DCT modification with a 4x4 pixel size, using integer coefficients to
avoid approximation errors and obtain better precision when calculating the coefficients.
 Entropy coding: A lossless coding method that consists of traversing the whole
transform array in a zigzag order, bringing together groups with similar frequencies,
coding the runs of zeros and applying VLC coding to the rest. This technique reduces
the file size by about 5% at the cost of a longer coding and decoding time (a zigzag-scan
sketch follows this list).
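The zigzag traversal mentioned in the entropy-coding item can be illustrated as follows (an example only, shown for a 4x4 block as used by the integer transform just described): ordering coefficients from low to high frequency groups the trailing zeros together, which is what run-length and VLC coding exploit.

    import numpy as np

    def zigzag_order(n):
        """Return the (row, col) visiting order of an n x n block, low to high frequency."""
        return sorted(((r, c) for r in range(n) for c in range(n)),
                      key=lambda rc: (rc[0] + rc[1],                       # anti-diagonal index
                                      rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

    # Quantised 4x4 transform block: energy packed into the top-left corner.
    block = np.array([[18, 5, 0, 0],
                      [ 4, 1, 0, 0],
                      [ 0, 0, 0, 0],
                      [ 0, 0, 0, 0]])

    scan = [int(block[r, c]) for r, c in zigzag_order(4)]
    print(scan)  # [18, 5, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    # After the zigzag the non-zero values sit at the front and the zeros form one
    # long run, which is what run-length and VLC coding exploit.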
3.5. MASTER-CONTROL ROOM
The Master control room is the place where the on-air signal is controlled. It may
include controls to playout television programs and television commercials, switch local or
television network feeds, record satellite feeds and monitor the transmitter, or these items
may be in an adjacent equipment rack room. The term "studio" usually refers to a place where
a particular local program is originated. If the program is broadcast live, the signal goes from
the PCR to MCR and then out to the transmitter.

Fig 3.5: SCHEMATIC LAYOUT OF MCR / PLAYOUT

When all of the diverse elements of the daily program schedule come together, there
has to be some way of integrating them in a connected way. That is where master control
comes in. Master control operators (and their equipment) are responsible for the final output
and look of the station.

3.5.1 ROUTER / SWITCHER
The heart of the master control operator's equipment is the switcher, which can
perform cuts, dissolves, and keys like a production switcher. There is one significant
difference, however - the MCR switcher also has the ability to take audio from any of the
sources selected, and is therefore called an "audio follow" switcher.
In addition to the regular program audio, the MCR operator (or "MCO") has the
capability of sending out additional tape or digital cartridge material either to replace the
existing audio completely, or mixing it over the continuing source. This is done by lowering
the level of the program material and is called "voice over" mode.
The keyer on the switcher is generally used to superimpose station identification
information over program material, or other statistics such as the time of day. With all of this
information and technical detail to watch over, master control operations are beginning to
computerize. The computer remembers and activates transition sequences. It also cues, rolls,
and stops film projectors and VTRs, and calls up any number of slides from the still store
system. The development of MCR switchers is going in two directions. One can have the
switcher perform more and more visual tricks. Or, it can be kept simple enough so that the
operator does not have to climb all over it to reach everything, or take a computer course to use
it.
3.5.2 CHARACTER GENERATOR
The character generator looks and works like a typewriter, except that it writes letters
and numbers on the video screen instead of paper. This titling device has all but replaced the
conventional studio card titles, and has become an important production tool. The more
sophisticated character generators can produce letters of various sizes and fonts, and simple
graphic displays such as curves and rectangles (blocks). In addition, backgrounds can be
colorized.
To prepare the titles or graphs, you enter the needed information on a computer-like
keyboard. You can then either integrate the information directly into the program in
progress, or store it for later use. Whether it is stored on computer disk or in the generator's
RAM (random access memory), the screen full of information is keyed to a specific address
(electronic page number) for easy and fast recall.
Most character generators have two output channels - a preview and a program. The
preview channel is for composing titles. The program channel is for actually integrating the
titles with the major program material. The preview channel has a cursor that shows you where
on the screen the word or sentence will appear. By moving the cursor into various positions,
you can centre the information, or move it anywhere on the screen. Various controls allow you
to make certain words flash, to roll the whole copy up and down the screen, or to make it crawl
sideways. More sophisticated generators can move words and graphics on and off the screen at
any angle, and at any of several speeds.
3.5.3 LOGO / GRAPHICS GENERATOR
With graphics generators, you can draw images on the screen. These units are also
called video art, or paint box, systems. You draw onto an electronic tablet with a stylus. An
output monitor immediately reflects your artistic efforts, and another one offers a series of
design options (such as colour, thickness of line, styles of letters, or special effects) to improve
your art work.
Newer systems will allow you to change the size and position of a graphic so that it
can be used full-frame, in an over the shoulder position (for news), or in lower third for
information display. More than one original image can be integrated into a new full-frame
composite (a montage). Character generators are now often part of the software package.
Retouching (softening edges, adding or removing highlights, matching colors, blending objects
into their backgrounds) is commonly available.
Most graphics generators work in real time; your drawing is immediately
displayed on the screen. Less sophisticated paint boxes take a moment before they display the
effects. Also, better quality units have built-in circuits and software to prevent aliasing (jagged
details on diagonal lines). These are hidden by slightly blurring the edges of the lines.
Personal computers are now often used to generate television graphics, both as
character generators and paint boxes. Some amazing effects can be produced with these easy to
use graphics machines. These systems have an internal board to convert the regular PC output
to standard NTSC scanning.
3.5.4 ENCODER
An encoder is a device used to convert analog video signals into digital video
signals. Most encoders compress the information so that it can be stored or transmitted in the
minimum space possible. To achieve this they take advantage of the spatial or temporal
redundancy in video sequences: by eliminating redundant information, the encoded
information becomes more compact. The spatial redundancy is removed with DCT
coefficient coding; temporal redundancy is removed using motion compensated prediction,
with motion estimation between successive blocks.
The operation method is (a quantization sketch follows this list):
 The signals are separated into luma (Y) and chroma (C).
 The estimation error is found and the DCT is applied to it.
 The coefficients are quantized and entropy coded (VLC).
 The coefficients are multiplexed and passed to the buffer. The buffer controls the quality
of the signal.
 The output bit stream from the buffer is kept at a constant rate, because the signal is
intended to be transmitted on a channel with a steady speed.
 The quantized image is reconstructed for future reference for prediction and motion
estimation.
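The quantization step in the list above is where most of the compression, and the loss, happens. The sketch below is a simplified, hypothetical illustration of uniform quantization of DCT coefficients, not the encoder used at the channel:

    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(2)
    # A smooth-ish 8x8 block: a gradient plus a little noise.
    block = np.tile(np.linspace(80, 120, 8), (8, 1)) + rng.normal(0, 2, (8, 8))

    q_step = 16                            # coarser step = more compression, more loss
    coeffs = dctn(block, norm="ortho")
    quantised = np.round(coeffs / q_step)  # the lossy step: many coefficients become 0
    print("non-zero coefficients:", np.count_nonzero(quantised), "of 64")

    # Decoder side: dequantise and invert the transform.
    reconstructed = idctn(quantised * q_step, norm="ortho")
    print("max reconstruction error:", float(np.abs(block - reconstructed).max()))
    # Most coefficients quantise to zero; the reconstruction error is small but not
    # zero, and at block boundaries such errors appear as the "blocking effect"
    # discussed below.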
The DCT algorithm and the block quantization can cause visible discontinuities at
the edges of the blocks, leading to the well-known "blocking effect": because the quantization
forces many coefficients in the matrix to zero, it may produce imperfections. As a result,
newer video coding standards such as H.264/MPEG-4 AVC include filter algorithms able to
decrease that effect.
3.5.5 MODULATOR
In television systems, signals can only be carried within a limited frequency
spectrum, with specific lower and upper frequencies. A modulator is a device in charge of
carrying one signal inside another signal so that it can be transmitted: it is able to shift a
low-frequency signal to another frequency. As a result, the frequency can be controlled and
the problem above can be solved. Essentially, modulating a signal consists in changing a
parameter of the carrier wave according to the variations of the modulating signal (the
information to be transmitted).
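As a minimal illustration of the idea (using QPSK, the scheme that appears later in the DVB and DSNG sections), pairs of bits are mapped onto four carrier phases. This sketch shows only the baseband symbol mapping under one common Gray-coding convention, not a full modulator:

    import numpy as np

    # Gray-coded QPSK: each pair of bits selects one of four carrier phases.
    # (One common convention; real systems fix the exact mapping in their spec.)
    MAPPING = {
        (0, 0): np.exp(1j * np.pi / 4),      #  45 degrees
        (0, 1): np.exp(1j * 3 * np.pi / 4),  # 135 degrees
        (1, 1): np.exp(1j * 5 * np.pi / 4),  # 225 degrees
        (1, 0): np.exp(1j * 7 * np.pi / 4),  # 315 degrees
    }

    def qpsk_modulate(bits):
        """Map a bit sequence (even length) to complex baseband symbols."""
        pairs = zip(bits[0::2], bits[1::2])
        return np.array([MAPPING[p] for p in pairs])

    bits = [0, 0, 0, 1, 1, 1, 1, 0]
    symbols = qpsk_modulate(bits)
    print(np.angle(symbols, deg=True).round())  # [ 45. 135. -135. -45.]
    # Two bits per symbol means QPSK carries twice the bit rate of BPSK in the
    # same bandwidth, which is one reason it suits satellite broadcast channels.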
3.5.6 UPCONVERTER
An upconverter is used to convert television signals to VHF or UHF signals,
regardless of whether the input is a digital or an analog signal. The device detects the kind of
incoming signal and, based on whether it is digital or analog, creates the suitable reference
signal.
3.6 DIGITAL VIDEO BROADCASTING (DVB)
Once the video is encoded in the desired format (MPEG-2 or MPEG-4/H.264
AVC), it has to be put on the network to be distributed and transported to the end user. In
this field there are different connections: satellite, terrestrial, cable, etc. Depending on the
type of transport used, the formats differ, but terrestrial digital television mostly uses a
terrestrial connection (DVB-T) combined with a connection via satellite (DVB-S). These
connections are in charge of transmitting the digital signal to the end user, but first this
signal must be created, since at this point there are only video, audio and data signals.
Digital Video Broadcasting (DVB) has become the synonym for digital television and for
data broadcasting world-wide. DVB services have been introduced in Europe, in North and
South America, in Asia, Africa and Australia. DVB is the technology that makes possible
the broadcasting of "data containers" in which all kinds of digital data, up to a data rate of
38 Mbit/s, can be transmitted at bit-error rates in the order of 10⁻¹¹.
3.6.1 BASEBAND PROCESSING
The transmission techniques developed by DVB are transparent with respect
to the kind of data to be delivered to the customer. They are capable of making available bit
streams at (typically) 38 Mbit/s within one satellite or cable channel, or at 24 Mbit/s within
one terrestrial channel. On the other hand, a digital video signal created in today's TV studios
comprises 166 Mbit/s and thus cannot possibly be carried via the existing media. Data rate
reduction or "source coding" is therefore a must for digital television.
One of the fundamental decisions taken during the early days of DVB
was the selection of MPEG-2 for the source coding of audio and video and for the creation of
programme elementary streams, transport streams etc. - the so-called systems level. Three
international standards describe MPEG-2 systems, video and audio. Using MPEG-2, a video
signal can be compressed to a data rate of, for example, 5 Mbit/s and can still be decompressed
in the receiver to deliver a picture quality close to what analogue television offers today.
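A short back-of-the-envelope check of what this enables (an illustration added here, with an assumed 1 Mbit/s allowance for multiplexing overheads): if a DVB data container offers about 38 Mbit/s and each MPEG-2 programme is compressed to about 5 Mbit/s, several programmes fit in one channel.

    # Hypothetical capacity estimate for one DVB satellite/cable channel.
    container_mbps = 38        # typical DVB transport stream payload (see text)
    programme_mbps = 5         # one MPEG-2 encoded TV programme (see text)
    overhead_mbps = 1          # rough allowance for tables, extra audio, padding

    usable = container_mbps - overhead_mbps
    print(f"about {usable // programme_mbps} programmes per transponder")  # about 7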
The term “channel coding” is used to describe the algorithms used for adaptation of
the source signal to the transmission medium. In the world of DVB it includes the FEC
(Forward Error Correction) and the modulation as well as all kinds of format conversion and
filtering.
3.7 DVB-S2: THE SECOND GENERATION STANDARD FOR
SATELLITE BROADBAND SERVICES
DVB-S2 is a digital satellite transmission system developed by the DVB
Project. It makes use of the latest modulation and coding techniques to deliver performance
that approaches the theoretical limit for such systems. Satellite transmission was the first area
addressed by the DVB Project in 1993 and DVB standards form the basis of most satellite
DTV services around the world today, and therefore of most digital TV in general. DVB-S2
will not replace DVB-S in the short or even the medium term, but makes possible the
delivery of services that could never have been delivered using DVB-S. The original DVB-S
system, on which DVB-S2 is based, specifies the use of QPSK modulation along with various
tools for channel coding and error correction. Further additions were made with the
emergence of DVB-DSNG (Digital Satellite News Gathering), for example allowing the use
of 8PSK and 16QAM modulation. DVB-S2 benefits from more recent developments and has
the following key technical characteristics:
There are four modulation modes available, with QPSK and 8PSK intended
for broadcast applications in non-linear satellite transponders driven close to saturation.
16APSK and 32APSK, requiring a higher level of C/N, are mainly targeted at professional
applications such as news gathering and interactive services.
DVB-S2 uses a very powerful Forward Error Correction scheme (FEC), a key factor
in allowing the achievement of excellent performance in the presence of high levels of noise
and interference. The FEC system is based on concatenation of BCH (Bose-Chaudhuri-
Hcquengham) with LDPC (Low Density Parity Check) inner coding.
Adaptive Coding and Modulation (ACM) allows the transmission parameters to be
changed on a frame by frame basis depending on the particular conditions of the delivery
path for each individual user. It is mainly targeted to unicasting interactive services and to
point-to-point professional applications.
DVB-S2 offers optional backwards compatible modes that use hierarchical
modulation to allow legacy DVB-S receivers to continue to operate, whilst providing
additional capacity and services to newer receivers.
3.8 TELEPORT
A telecommunications port—or, more commonly, teleport—is a satellite
ground station with multiple parabolic antennas (i.e., an antenna farm) that functions as a hub
connecting a satellite or geocentric orbital network with a terrestrial telecommunications
network. Teleports may provide various broadcasting services among other
telecommunications functions, such as uploading computer programs or issuing commands
over an uplink to a satellite.
3.9 ENPS
 ENPS (Electronic News Production System) is a software application developed by
the Associated Press's Broadcast Technology division for producing, editing, timing,
organizing and running news broadcasts. The system is scalable and flexible enough
to handle anything from the local news at a small-market station to large
organizations spanning remote bureaus in multiple countries.
 The basic organization of each news broadcast is called a "rundown" (US) or "running
order" (UK). The run-down is a grid listing scripts, video, audio, character generator
data, teleprompter control, director notations, camera operator cues, and timing
estimates for each section of the broadcast.
 ENPS integrates scripts, wire feeds, device control, and production information in a
server/client environment. On the server side, ENPS runs an identical backup server
(called a "buddy") at all times as a fail-safe. If the primary server fails, all users are
redirected to the buddy server until such time as the primary comes back on-line. All
document changes are queued on the buddy and copied back to the primary
automatically when it returns to production. Note that this is not a mirror server as the
changed data is copied to the buddy, but there is no direct replication inherent within
the intercommunications between the servers, so if the data is corrupted due to
hardware failure on one server, this corruption will not be replicated to the "buddy".
 Device control can be managed either through a serial interface, or the MOS (Media
Object Server) protocol. MOS functionality is included in the base ENPS license, but
may be an extra add-on for the device that needs to interface with ENPS. MOS items
such as video or audio clips can be added directly to scripts, and then used by third
party software and devices during the broadcast.
 ENPS was originally developed by the Associated Press for use at the BBC in the
United Kingdom as a replacement for the text mode system BASYS (which
developed into Avid iNEWS), and the Corporation has the largest installation of the
system with over 12,000 users in 300 different locations.
CHAPTER 4
DIGITAL SATELLITE NEWS GATHERING (DSNG)
4.1 DSNG
Fig.4.1 BLOCK DIAGRAM OF DSNG
Satellite offers the unique possibility to transmit images and sound from almost
anywhere on the planet. From a fully equipped uplink truck or from a light flyaway travel
case, DSNG makes it possible to bring live news and sport to millions of viewers.
(Fig.4.1 blocks include a MIC input and an antenna tracking system using GPS.)
DSNG is all about being at the right place at the right time. This often means to travel
light in order to get there fast. But you don't need to compromise on picture or signal quality
to be there just where it is happening. You also don't need to be cut-off from your news
production facilities.
Digital satellite news gathering (DSNG) is a system that combines electronic news
gathering (ENG) with satellite news gathering (SNG). As time passed and electronic devices
became smaller, a whole DSNG system was fitted into a van. DSNG vans are now common;
they're extensively used in covering news events.
4.2 EQUIPMENT
Fig.4.2 DIGITAL SATELLITE NEWS GATHERING VAN
The DSNG van, also known as an "outside broadcast" (OB) van, is a mobile
communications system using state-of-the-art equipment to produce and transmit news as it
happens, where it happens. A typical DSNG van is outfitted with a two-way high-power
amplifier satellite system, a production component and an energy framework. The DSNG van
also comes with a custom and powerful electrical system, as it needs to power all the
equipment it carries without the need for any external source.
There are also several additional pieces of equipment inside the van: a traveling wave tube
amplifier (TWTA) administrator system, an encoder/modulator, primary and secondary
monitors, video synthesizer/mixer and an audio mixer. External equipment includes high
definition cameras, a solid state power amplifier and a low noise block down converter.
Most DSNG manufacturers can outfit vans with the necessary equipment with ease.
Some manufacturers even offer stand-alone modular DSNG equipment systems, whereby
qualified operators can move and install equipment from one vehicle to another easily. DSNG
vans have five main working sections: the monitoring section, the audio engineering section,
the data and power storage area, the video control area and the transmission area.
4.3 TRANSMISSION MECHANICS OF OLDER SYSTEMS
With older DSNG setups, as soon as a camera captures news images, the satellite
terminal in the OB van transmits the real-time images over an uplink to a satellite, which in
turn sends the raw footage to a geostationary network. The network produces a local copy of
the received images for
editing. During this editing process, archive images from the network’s libraries are
sometimes integrated into the edited video as the network sees fit. The edited video is then
ready for play-out.
4.4 TRANSMISSION MECHANICS OF NEWER SYSTEMS
With the advent of interactive tapeless methods of DSNG, editing is done
simultaneously through a laptop-based rewriting/proofreading terminal. The van is equipped
with transmission and reception facilities, which allow rough and condensed files to be
transmitted to and received from a remote geostationary network. A prime video server
processes the files for storage and eventual broadcast. The DSNG system maximizes
bandwidth to allow faster turnover of news in real time.
A modern DSNG van is a sophisticated affair, capable of deployment practically
anywhere in the civilized world. Signals are beamed between a geostationary satellite and the
van, and between the satellite and a control room run by a broadcast station or network. In the
most advanced systems, Internet Protocol (IP) is used. Broadcast engineers are currently
working on designs for remotely controlled, robotic DSNG vehicles that can be teleoperated
in hostile environments such as battle zones, deep space missions, and undersea explorations
without endangering the lives of human operators.
4.5 WORKING OF DSNG
DSNG of Jaihind TV is communicating using NSS 12 570
E. Before the
communication starts we need to synchronize the dish antenna mounted on the DSNG with
our satellite. The position of the satellite and the antenna must be in Line Of Sight (LOS).
The position of the antenna can be determined in such a way that the transmitted beacon
signal is received with its maximum power. After tracking the location of the satellite, we can
start the communication. The camera out will be taken and then it will be modulated using
The camera output is taken, encoded (compressed) and then modulated using QPSK, which gives an L-band signal. This signal is upconverted and then amplified using a TWTA. The amplified signal is sent to the feed arm of the antenna through a rectangular waveguide, and the feed arm illuminates the dish in such a way that maximum power is radiated in the LOS direction. This is the basic working of a typical DSNG. One of the most difficult and important operations in these steps is tracking the satellite from the DSNG van. It should also be noted that the vehicle must always be kept level; special levelling equipment is used for this purpose. The DSNG vehicles are designed and provided by VSSC.
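To give a feel for the capacity of such a QPSK uplink, the short sketch below works out the useful bit rate of a DVB-S-style QPSK carrier from its symbol rate and coding overheads. The symbol rates and code rates used are common example values and are assumptions, not the actual parameters of the Jaihind TV uplink.

```python
# Useful bit rate of a QPSK carrier with DVB-S-style forward error correction.
# QPSK carries 2 bits per symbol; the inner (convolutional) and outer
# (Reed-Solomon 188/204) code rates below are typical example values.

def useful_bit_rate_mbps(symbol_rate_msps: float,
                         fec_rate: float = 3 / 4,
                         rs_rate: float = 188 / 204) -> float:
    """Net (useful) bit rate in Mbit/s for a QPSK carrier after FEC."""
    bits_per_symbol = 2.0                        # QPSK
    gross = symbol_rate_msps * bits_per_symbol   # Mbit/s before coding
    return gross * fec_rate * rs_rate            # after inner and outer coding

if __name__ == "__main__":
    for sr in (5.0, 7.2):  # example symbol rates in Msym/s
        print(f"{sr} Msym/s -> {useful_bit_rate_mbps(sr):.2f} Mbit/s useful")
```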
CHAPTER 5
CONCLUSION
We underwent our industrial training at Jaihind TV, Thiruvananthapuram, and gained valuable experience and exposure to the world of broadcasting. We learned the telecasting methods for live programs, news and recorded programs, and also got an idea of the operation of the Digital Satellite News Gathering vehicle of Jaihind TV. The programs are carried over optical fiber cable to the teleport at Noida, Delhi (owned by EsselSyam Satcom), from where they are uplinked to the satellite. The signals from the satellite are then received at remote stations using commercial satellite receivers. We have also got an idea of the Production Control Room, Master Control Room, Live Room, etc.
More Related Content

What's hot

What's hot (20)

Basics of tv production
Basics of tv productionBasics of tv production
Basics of tv production
 
Direct satellite broadcast receiver using mpeg 2
Direct satellite broadcast receiver using mpeg 2Direct satellite broadcast receiver using mpeg 2
Direct satellite broadcast receiver using mpeg 2
 
Over view of radio broadcasting: New trends
Over view of radio broadcasting: New trendsOver view of radio broadcasting: New trends
Over view of radio broadcasting: New trends
 
Audio consoles
Audio consolesAudio consoles
Audio consoles
 
Radio Communication
Radio CommunicationRadio Communication
Radio Communication
 
DTH Power Point Presentation
DTH Power Point PresentationDTH Power Point Presentation
DTH Power Point Presentation
 
Diegetic and non diegetic sound
Diegetic and non diegetic soundDiegetic and non diegetic sound
Diegetic and non diegetic sound
 
DTH
DTHDTH
DTH
 
DTH System
DTH SystemDTH System
DTH System
 
RADIO PPT
RADIO PPTRADIO PPT
RADIO PPT
 
ITFT-MEDIA Radio production
ITFT-MEDIA Radio productionITFT-MEDIA Radio production
ITFT-MEDIA Radio production
 
Direct To Home (DTH) Technical seminar
Direct To Home (DTH) Technical seminarDirect To Home (DTH) Technical seminar
Direct To Home (DTH) Technical seminar
 
Dsng system
Dsng systemDsng system
Dsng system
 
Dth Technology
Dth TechnologyDth Technology
Dth Technology
 
Public service radio in india
Public service radio in indiaPublic service radio in india
Public service radio in india
 
Radio
RadioRadio
Radio
 
Radio Production
Radio ProductionRadio Production
Radio Production
 
Direct to Home(DTH)
Direct to Home(DTH)Direct to Home(DTH)
Direct to Home(DTH)
 
RADIO JOURNALISM AND PRODUCTION
RADIO JOURNALISM AND PRODUCTIONRADIO JOURNALISM AND PRODUCTION
RADIO JOURNALISM AND PRODUCTION
 
News and news channel management
News and news channel managementNews and news channel management
News and news channel management
 

Viewers also liked

Vocational training at DDK Delhi by SAKET RAI
Vocational training at DDK Delhi by SAKET RAIVocational training at DDK Delhi by SAKET RAI
Vocational training at DDK Delhi by SAKET RAISAKET RAI
 
DDK Delhi -Vocational training by Raisaket
DDK Delhi -Vocational training by RaisaketDDK Delhi -Vocational training by Raisaket
DDK Delhi -Vocational training by RaisaketSAKET RAI
 
Microprocessor-Compatible Quadrature Decoder/Counter Design
Microprocessor-Compatible Quadrature Decoder/Counter DesignMicroprocessor-Compatible Quadrature Decoder/Counter Design
Microprocessor-Compatible Quadrature Decoder/Counter DesignRohit Singh
 
Final Report of Project A Low
Final Report of Project A LowFinal Report of Project A Low
Final Report of Project A LowJan Salomon
 
automation of street light using 8085 microprocessor
automation of street light using 8085 microprocessorautomation of street light using 8085 microprocessor
automation of street light using 8085 microprocessorshubham9929
 
Mini project-report
Mini project-reportMini project-report
Mini project-reportAshu0711
 
Laser ignition system (3) (1)
Laser ignition system (3) (1)Laser ignition system (3) (1)
Laser ignition system (3) (1)gajellitejas
 
Project report on gsm based borewell water level monitor
Project report on gsm based borewell water level monitorProject report on gsm based borewell water level monitor
Project report on gsm based borewell water level monitorTarun Arora
 
B.Tech Project Report
B.Tech Project ReportB.Tech Project Report
B.Tech Project ReportRohit Singh
 
seminar report on smart glasses
seminar report on smart glasses seminar report on smart glasses
seminar report on smart glasses Nipun Agrawal
 
LED & LASER sources of light
LED & LASER sources of lightLED & LASER sources of light
LED & LASER sources of lightRohit Singh
 
Holographic data storage presentation
Holographic data storage presentationHolographic data storage presentation
Holographic data storage presentationPrashant Kumar
 
Automatic railway gate control system
Automatic railway gate control systemAutomatic railway gate control system
Automatic railway gate control systemdeepraj2085
 
Holographic Data Storage
Holographic Data StorageHolographic Data Storage
Holographic Data StorageLikan Patra
 
wireless electricity report word docs
wireless electricity report word docswireless electricity report word docs
wireless electricity report word docsASHISH RAJ
 
Project report on gsm based digital notice board
Project report on gsm based digital notice boardProject report on gsm based digital notice board
Project report on gsm based digital notice boardmanish katara
 
Operating system.ppt (1)
Operating system.ppt (1)Operating system.ppt (1)
Operating system.ppt (1)Vaibhav Bajaj
 
television broadcasting history
television broadcasting historytelevision broadcasting history
television broadcasting historymike05
 

Viewers also liked (20)

Vocational training at DDK Delhi by SAKET RAI
Vocational training at DDK Delhi by SAKET RAIVocational training at DDK Delhi by SAKET RAI
Vocational training at DDK Delhi by SAKET RAI
 
DDK Delhi -Vocational training by Raisaket
DDK Delhi -Vocational training by RaisaketDDK Delhi -Vocational training by Raisaket
DDK Delhi -Vocational training by Raisaket
 
Microprocessor-Compatible Quadrature Decoder/Counter Design
Microprocessor-Compatible Quadrature Decoder/Counter DesignMicroprocessor-Compatible Quadrature Decoder/Counter Design
Microprocessor-Compatible Quadrature Decoder/Counter Design
 
Final Report of Project A Low
Final Report of Project A LowFinal Report of Project A Low
Final Report of Project A Low
 
automation of street light using 8085 microprocessor
automation of street light using 8085 microprocessorautomation of street light using 8085 microprocessor
automation of street light using 8085 microprocessor
 
Mini project-report
Mini project-reportMini project-report
Mini project-report
 
Laser ignition system (3) (1)
Laser ignition system (3) (1)Laser ignition system (3) (1)
Laser ignition system (3) (1)
 
Project report on gsm based borewell water level monitor
Project report on gsm based borewell water level monitorProject report on gsm based borewell water level monitor
Project report on gsm based borewell water level monitor
 
B.Tech Project Report
B.Tech Project ReportB.Tech Project Report
B.Tech Project Report
 
seminar report on smart glasses
seminar report on smart glasses seminar report on smart glasses
seminar report on smart glasses
 
REPORT
REPORTREPORT
REPORT
 
LED & LASER sources of light
LED & LASER sources of lightLED & LASER sources of light
LED & LASER sources of light
 
Holographic data storage presentation
Holographic data storage presentationHolographic data storage presentation
Holographic data storage presentation
 
Automatic railway gate control system
Automatic railway gate control systemAutomatic railway gate control system
Automatic railway gate control system
 
Holographic Data Storage
Holographic Data StorageHolographic Data Storage
Holographic Data Storage
 
wireless electricity report word docs
wireless electricity report word docswireless electricity report word docs
wireless electricity report word docs
 
Project report on gsm based digital notice board
Project report on gsm based digital notice boardProject report on gsm based digital notice board
Project report on gsm based digital notice board
 
108600389 dd-report
108600389 dd-report108600389 dd-report
108600389 dd-report
 
Operating system.ppt (1)
Operating system.ppt (1)Operating system.ppt (1)
Operating system.ppt (1)
 
television broadcasting history
television broadcasting historytelevision broadcasting history
television broadcasting history
 

Similar to What is In side a Television Broadcasting Station

Basics of videography
Basics of videographyBasics of videography
Basics of videographysrinsha k
 
What is Audio Visual Technology.pdf
What is Audio Visual Technology.pdfWhat is Audio Visual Technology.pdf
What is Audio Visual Technology.pdfchrishemsworth32
 
Live Streaming On A Smartphone – Best Practice Advice From Inner Ear
Live Streaming On A Smartphone – Best Practice Advice From Inner EarLive Streaming On A Smartphone – Best Practice Advice From Inner Ear
Live Streaming On A Smartphone – Best Practice Advice From Inner EarInner Ear
 
Performance Analysis of Audio and Video Synchronization using Spreaded Code D...
Performance Analysis of Audio and Video Synchronization using Spreaded Code D...Performance Analysis of Audio and Video Synchronization using Spreaded Code D...
Performance Analysis of Audio and Video Synchronization using Spreaded Code D...Eswar Publications
 
Ddk(niraj) ppt on summer training from ddk patna
Ddk(niraj) ppt on summer training from ddk patnaDdk(niraj) ppt on summer training from ddk patna
Ddk(niraj) ppt on summer training from ddk patnaNIRAJ KUMAR
 
Enensys -Content Repurposing for Mobile TV Networks
Enensys -Content Repurposing for Mobile TV NetworksEnensys -Content Repurposing for Mobile TV Networks
Enensys -Content Repurposing for Mobile TV NetworksSematron UK Ltd
 
ECLB Company Profile
ECLB Company ProfileECLB Company Profile
ECLB Company ProfileAndré Botes
 
Summer Training At Doordarshan
Summer Training At Doordarshan Summer Training At Doordarshan
Summer Training At Doordarshan Himanshu Gupta
 
SABIC EXTRON AV & TPL SYSTEM
SABIC EXTRON AV & TPL SYSTEMSABIC EXTRON AV & TPL SYSTEM
SABIC EXTRON AV & TPL SYSTEMDoctorpc1
 
43pp8545 69 dfu_eng
43pp8545 69 dfu_eng43pp8545 69 dfu_eng
43pp8545 69 dfu_engMalik Arif
 
Av Cpresentation­ 011512
Av Cpresentation­ 011512Av Cpresentation­ 011512
Av Cpresentation­ 011512Paul Mendoza
 

Similar to What is In side a Television Broadcasting Station (20)

Basics of videography
Basics of videographyBasics of videography
Basics of videography
 
What is Audio Visual Technology.pdf
What is Audio Visual Technology.pdfWhat is Audio Visual Technology.pdf
What is Audio Visual Technology.pdf
 
Dynya news report
Dynya news reportDynya news report
Dynya news report
 
Adobe Profile lo res
Adobe Profile lo resAdobe Profile lo res
Adobe Profile lo res
 
Live Streaming On A Smartphone – Best Practice Advice From Inner Ear
Live Streaming On A Smartphone – Best Practice Advice From Inner EarLive Streaming On A Smartphone – Best Practice Advice From Inner Ear
Live Streaming On A Smartphone – Best Practice Advice From Inner Ear
 
Production skills
Production skillsProduction skills
Production skills
 
84 inch Large Format Professional Monitor
84 inch Large Format Professional Monitor84 inch Large Format Professional Monitor
84 inch Large Format Professional Monitor
 
Zeeshan Rahman
Zeeshan RahmanZeeshan Rahman
Zeeshan Rahman
 
Performance Analysis of Audio and Video Synchronization using Spreaded Code D...
Performance Analysis of Audio and Video Synchronization using Spreaded Code D...Performance Analysis of Audio and Video Synchronization using Spreaded Code D...
Performance Analysis of Audio and Video Synchronization using Spreaded Code D...
 
Ddk(niraj) ppt on summer training from ddk patna
Ddk(niraj) ppt on summer training from ddk patnaDdk(niraj) ppt on summer training from ddk patna
Ddk(niraj) ppt on summer training from ddk patna
 
TV PRODUCTION
TV PRODUCTION TV PRODUCTION
TV PRODUCTION
 
Enensys -Content Repurposing for Mobile TV Networks
Enensys -Content Repurposing for Mobile TV NetworksEnensys -Content Repurposing for Mobile TV Networks
Enensys -Content Repurposing for Mobile TV Networks
 
starserev video
starserev videostarserev video
starserev video
 
starserev video
starserev videostarserev video
starserev video
 
ECLB Company Profile
ECLB Company ProfileECLB Company Profile
ECLB Company Profile
 
Summer Training At Doordarshan
Summer Training At Doordarshan Summer Training At Doordarshan
Summer Training At Doordarshan
 
SABIC EXTRON AV & TPL SYSTEM
SABIC EXTRON AV & TPL SYSTEMSABIC EXTRON AV & TPL SYSTEM
SABIC EXTRON AV & TPL SYSTEM
 
43pp8545 69 dfu_eng
43pp8545 69 dfu_eng43pp8545 69 dfu_eng
43pp8545 69 dfu_eng
 
Video production
Video productionVideo production
Video production
 
Av Cpresentation­ 011512
Av Cpresentation­ 011512Av Cpresentation­ 011512
Av Cpresentation­ 011512
 

Recently uploaded

Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Paola De la Torre
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...gurkirankumar98700
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 

Recently uploaded (20)

Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 

What is In side a Television Broadcasting Station

  • 1. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 1 CHAPTER 1 JAIHIND TV PROFILE Launched on 17 August 2007, by Congress President and UPA Chairperson, Mrs. Sonia Gandhi with the mission to meet the aspirations of the family and the nation, JAIHIND has consistently remained in the forefront with innovative programming, setting new industry benchmarks in news coverage and entertainment packaging. JAIHIND TV is a channel with a multi-genre infotainment package for the whole family. JAIHIND TV, Malayalam Channel own and operated by M/s Bharat Broadcasting Network Ltd. is a subsidiary of Jaihind Communications Pvt. Ltd, a company registered under the Companies Act 1956 with an initial investment of Rs. 33 crores. Jaihind TV’s registered office is located at Karimpanal Arcade, East Fort, Thiruvananthapuram. Sri. Ramesh Chennithala, President of Kerala Pradesh Congress Committee is the President of JAIHIND TV. Sri. M.M.Hassan, former Minister and Senior Leader of the Indian National Congress Party, is the Managing Director. He is a Popular Figure championing various public interest causes. Bharat Broadcasting Network Ltd is chaired by Sri. Kunjukutty Aniyankunju, a prominent NRI businessman based at UAE. Sri. Vijayan Thomas, a renowned NRI business personal, is the chairman of Jaihind Communications Pvt Ltd. Board of Directors and investors are eminent NRI and Resident business personalities. JAIHIND TV is headed by Sri.K.P.MOHANAN, a veteran journalist well-versed both in Print and Electronic Media with professional experience spanning four decades. A Permanent Fellow of the USA based World Press Institute, Mr. Mohanan is the winner of the Rajiv Gandhi Award for excellence in Journalism. With the declared motto “For The Family, For The Nation” JAIHIND TV is committed to quality entertainment packages with Social and Ethical Binding and News and Current Affairs programmes with development objectives. The programmes of the Channel
  • 2. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 2 focus on upholding Democracy, Secularism and Nationalism. The Channel brings together some of the finest talents in the industry and transcends stereo typical television. Jaihind TV aims to be the voice of the global Malayalees and to address the issues of global relevance. A well-experienced and highly qualified professional team and state-of-the- art studio with multi-dimensional shoot facilities at the Kinfra Film and Video Park that has fully-digital production and post-production facilities and a Live News room in Trivandrum with static and dynamic connectivity to all the Districts in Kerala as well as to Delhi, Mumbai, Chennai and Dubai, make Jaihind TV a media organization with competitive edge.
  • 3. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 3 CHAPTER 2 INTRODUCTION 2.1 WHAT'S IN A TELEVISION STATION? "Photography is going to marry Miss Wireless, and heaven help everybody when they get married. Life will be very complicated." -- Marcus Adams, Society photographer, in the Observer, 1925. 2.2 THE TV STATION AS A WHOLE The basic television system consists of equipment and people to operate this gear so that we can produce TV programs. The stuff you'll find in a television station consists of (and this list is not exhaustive!):  one or more television cameras  lighting, to see what we're shooting  one or more audio consoles, along with sound control equipment, to manipulate the sounds we generate with microphones, audio recorders and players, and other devices  one or more videotape recorders, or other video recording technologies, in any of a number of formats  one or more video switchers, to select video sources, perform basic transitions between those sources, and to create special effects  EFP (electronic field production) shooting and production equipment, and storage facilities  perhaps a post-production editing facility, to assemble videotaped segments together  some special effects: either visual or aural; electronic, optical or mechanical
  • 4. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 4 2.3 THE STUDIO INFRASTRUCTURE Whether or not you work in a traditional studio or a new "studioless" environment, the same principles apply. You'll still need:  an intercom system (with headset stations for floor directors)  floor monitors (video and audio)  electrical outlets for both regular AC and lighting equipment In addition, your control room or control centre will have:  various program and preview monitors  program audio speakers  time of day clock  video switcher  an audio control room, with audio console, cart machines, turntable and/or CD player, reel to reel and/or cassette and/or DAT recorder and/or other digital recording and playback technology, and auxiliary audio enhancement equipment
  • 5. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Fig.3 3.1 STUDIO A television studio is an installation in which a for the recording of live television post-production. The design of a studio is similar to, and derived from, few amendments for the special requirements of television production. A professional television studio generally has several rooms, which are kept separate for noise and practicality reasons. These rooms are connected via among these workplaces. PROGRAM STUDIO NEWS STUDIO PROGRAM INDUSTRIAL TRAINING REPORT CHAPTER 3 WORK FLOW Fig.3 1: WORKING OF CHANNEL is an installation in which a video productions take place, either live television to video tape, or for the acquisition of raw . The design of a studio is similar to, and derived from, movie studios few amendments for the special requirements of television production. A professional n studio generally has several rooms, which are kept separate for noise and practicality reasons. These rooms are connected via intercom, and personnel will be divided PROGRAM PCR Page 5 take place, either , or for the acquisition of raw footage for movie studios, with a few amendments for the special requirements of television production. A professional n studio generally has several rooms, which are kept separate for noise and , and personnel will be divided
  • 6. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 6 The studio floor is the actual stage on which the actions that will be recorded take place. A studio floor has the following characteristics and installations:  decoration and/or sets  professional video camera (sometimes one, usually several) on pedestals  microphones  stage lighting rigs and the associated controlling equipment.  several video monitors for visual feedback from the production control room (PCR)  a small public address system for communication  a glass window between PCR and studio floor for direct visual contact is usually desired, but not always possible Various types of cameras Used  SONY PD 170  SONY DSR 400  SONY D 50  SONY D 55 3.2 PRODUCTION-CONTROL ROOM The production control room (PCR) is the place in a television studio in which the composition of the outgoing program takes place. Facilities in a PCR include: A video monitor wall, with monitors for program, preview, VTRs, cameras, graphics and other video sources.  A vision mixer, a large control panel used to select the multiple-camera setup and other various sources to be recorded or seen on air and, in many cases, in any video monitors on the set. The term 'vision mixer' is primarily used in Europe, while the term 'video switcher' is usually used in North America.  A professional audio mixing console and other audio equipment such as effects devices.
  • 7. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE  A character generator (CG), which creates the majority of the names and full on-screen graphics that are insert television screen  Digital video effects, or DVE, for manipulation of video sources. In newer vision mixers, the DVE is integrated into the  A still store, or still frame, device name suggests that the device is only capable of storing still images  The technical director's station, with control units (CCU) or remote control panels for the CCUs.  VTRs for recording.  Intercom and IFB equipment for communication with talent and  A signal generator to lock all of the video equipment to a common reference that requires color burst. VISION MIXER (VIDEO SWICHER) FOR-A 700 INDUSTRIAL TRAINING REPORT (CG), which creates the majority of the names and full that are inserted into the program lower third portion of the , or DVE, for manipulation of video sources. In newer vision mixers, the DVE is integrated into the vision. A still store, or still frame, device for storage of graphics or other images. While the name suggests that the device is only capable of storing still images 's station, with waveform monitors, vectorscopes and the (CCU) or remote control panels for the CCUs. equipment for communication with talent and television crew to lock all of the video equipment to a common reference that Fig.3.2 BLOCK DIAGRAM OF PCR Page 7 (CG), which creates the majority of the names and full digital portion of the , or DVE, for manipulation of video sources. In newer vision for storage of graphics or other images. While the and the camera television crew to lock all of the video equipment to a common reference that
  • 8. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 8 3.2.1 Recording format  Jaihind TV is using the latest Sony MPEG IMX recording format for programme production to deliver the excellent picture quality to the viewers. DV CAM Specifications Tape: ME (Metal Evaporate) Track Pitch: 15 micrometers Track Width: 15 micrometers (10 micrometers on some early gear) Tape Speed: 28.215 mm/sec Record Time: Standard: 184 mins, MiniDV: 40 mins. Compression: Intra-frame, 5:1 DVC-format DCT, 25 Mbps video data rate Resolution & Sampling: 720x576, 4:2:0 (PAL), 720x480, 4:1:1 (NTSC) Audio: 2 ch @ 48 kHz, 16 bits; 4 ch @ 32 kHz, 12 bits Will accept 2 ch @ 44.1 kHz, 16 bits via 1394 I/O; locked. 3.2.2 IMX Format Specifications IMX, also known as Beta cam IMX or MPEG IMX, records SD NTSC and PAL video using high-quality MPEG-2 compression. a) Storage Medium One of the features of the IMX format is that it is not restricted to a single media type. IMX can be recorded on XDCAM, a Sony optical disc format, as well as the IMX tape format. IMX VTRs bridge the gap between conventional tape decks and modern computer editing systems with the following features:  Playback of older video formats such as Beta cam SP, Beta SX, and Digital Beta cam. These formats can be converted and output to MPEG IMX in real time.
  • 9. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 9 Note: Not all IMX VTRs support playback and recording of all Beta cam formats.  IMX digital video file transfer via networking interfaces such as Ethernet and TCP/IP protocols b) Video Standard IMX supports both SD NTSC and SD PAL. c) Aspect Ratio NTSC and PAL IMX both have an aspect ratio of 4:3. d) Frame Dimensions, Number of Lines, and Resolution IMX can store video at two possible resolutions: NTSC (525) and PAL (625). The numbers refer to the number of analog lines of the corresponding video formats. However, many of these analog lines are not used to store picture information. In Final Cut Pro, the following frame dimensions are used:  NTSC IMX: 720 pixels per line, 486 lines  PAL IMX: 720 pixels per line, 576 lines In both formats, standard definition rectangular pixels are used, just as with DV, DVD, Digital Beta cam, and other SD digital video formats. e) Frame Rate IMX supports NTSC and PAL frame rates of 29.97 fps and 25 fps, respectively. f) Scanning Method IMX supports interlaced recording. g) Color Recording Method IMX records a 4:2:2 Y′CBCR (component) digital video signal. Each sample (pixel) has a resolution of 8 bits.
  • 10. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 10 h) Data Rate and Video Compression IMX uses I-frame-only MPEG-2 compression. IMX is a restricted version of MPEG-2 4:2:2 Profile @ ML. The official SMPTE designation is D10, as specified in SMPTE standard 356M. Three compression ratios are supported:  30 Mbps: 5.6:1 compression  40 Mbps: 4.2:1 compression  50 Mbps: 3.3:1 compression i) Audio IMX supports two audio channel configurations:  Four audio channels, sampled at 48 kHz with 24 bits per sample  Eight audio channels, sampled at 48 kHz with 16 bits per sample j) Time code IMX supports 30 and 25 fps time code.
  • 11. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE 3.3 EDIT SUITE Fig.3.3: SCHEMATIC DIAGRAM OF EDIT SUITE Non-linear editing is a video editing method which enables direct access to any frame in a digital video clip, without needing to play or scrub/shuttle through adjacent footage to reach it, as was necessary with historical most natural approach when all assets are available as files on recordings on reels or tapes, while linear editing is related to the need to sequentially view a film or read a tape to edit it. On the other hand, the NLE method is similar in concept to the "cut and paste" technique used in film editing. However, with the appropriation of non editing systems, the destructive act of cutting of film negatives is eliminated. Non non-destructive editing methods began to appear with the introduction of technology. It can also be viewed as the why it is called desktop video editing in the consumer space. Video and audio data are first captured to storage devices. The data are either INDUSTRIAL TRAINING REPORT Fig.3.3: SCHEMATIC DIAGRAM OF EDIT SUITE linear editing is a video editing method which enables direct access to any , without needing to play or scrub/shuttle through adjacent to reach it, as was necessary with historical video tape linear editing systems. It is the most natural approach when all assets are available as files on hard disks recordings on reels or tapes, while linear editing is related to the need to sequentially view a film or read a tape to edit it. On the other hand, the NLE method is similar in concept to the " technique used in film editing. However, with the appropriation of non editing systems, the destructive act of cutting of film negatives is eliminated. Non methods began to appear with the introduction of technology. It can also be viewed as the audio/video equivalent of word processing editing in the consumer space. Video and audio data are first captured to hard disks, video server, or other d storage devices. The data are either direct to disk recording or are imported from another Page 11 linear editing is a video editing method which enables direct access to any video , without needing to play or scrub/shuttle through adjacent linear editing systems. It is the s rather than recordings on reels or tapes, while linear editing is related to the need to sequentially view a film or read a tape to edit it. On the other hand, the NLE method is similar in concept to the " technique used in film editing. However, with the appropriation of non-linear editing systems, the destructive act of cutting of film negatives is eliminated. Non-linear, methods began to appear with the introduction of digital video word processing, which is , or other digital or are imported from another
  • 12. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 12 source. Once imported, the source material can be edited on a computer using application software, any of a wide range of video editing software. In non-linear editing, the original source files are not lost or modified during editing. Professional editing software records the decisions of the editor in an edit decision list (EDL) which can be interchanged with other editing tools. Many generations and variations of the original source files can exist without needing to store many different copies, allowing for very flexible editing. It also makes it easy to change cuts and undo previous decisions simply by editing the edit decision list (without having to have the actual film data duplicated). Generation loss is also controlled, due to not having to repeatedly re-encode the data when different effects are applied. Compared to the linear method of tape-to-tape editing, non-linear editing offers the flexibility of film editing, with random access and easy project organization. With the edit decision lists, the editor can work on low-resolution copies of the video. This makes it possible to edit both standard-definition broadcast quality and high definition broadcast quality very quickly on normal PCs which do not have the power to do the full processing of the huge full-quality high-resolution data in real-time. A multimedia computer for non-linear editing of video will usually have a video capture card to capture analog video and/or a FireWire connection to capture digital video from a DV camera, with its video editing software. Various editing tasks can then be performed on the imported video before it is exported to another medium, or MPEG encoded for transfer to a DVD. One of the best things about nonlinear editing, is that the edit is instant - no splicing and waiting for the glue to set as in film, and no having to actually play back the entire shot for its full duration, as in videotape. One mouse click and it's done - on to the next edit. And the shot can, of course, be placed absolutely anywhere, even in between two frames of a previously laid down shot. Another advantage of nonlinear editing can be its reduced VTR cost. Obviously, to have instant access to all of your shots, you have to shoot on a non-digital camera and VTR, and dump the raw footage into the nonlinear editing system, but this
  • 13. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 13 requires only one VTR (you can use the same videotape machine to record the finished product afterwards, too.) The down side of this, however, is the cost of hard drive storage - many editors only dub over the final takes of their shoot, not all of their raw footage. 3.4 COMPILER Fig.3.4: LAYOUT OF COMPILER 3.4.1 Production This part will discuss the production of the video. But not the creation of the video from the movie maker, this part is when the video is already done. One of the most important parts will be the election of the medium to watch. Depending if the video is designed and transmitted for watching through internet, or if is for television. Leaning on one or the other the requirements will be different. First of all the size of the file is noticed that the quality of the image in a DVD or in a TV program is better than the quality in an online transmission on a PC; higher quality implies higher size and vice versa. As a result of, is interesting for a online view smaller file extension for a clearer transmission.
  • 14. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 14 A DVD movie can be like 4 Gigabytes hence would not be worth to view by internet streaming connection. The term streaming means that it is a direct current (without interruption). The user can listen or watch when they want. This technology allows to be stored in a buffer that will be listened or viewed. Streaming makes possible to play music or watch videos without having to be downloaded first. Up-to-date there are new software applications to listen music via streaming. Spotify uses those technology to listen music only needing an internet connection and the program. Besides, bigger size needs more bandwidth to be transferred by the network. Additionally, the file must be smaller than the original. Thenceforth a more powerful compression is needed for reduce the size of the movie file. For more compression, more energy and so on more power consumption. Furthermore there are new algorithms with the aim of reduce the size relevantly but also with a minor quality decrease. Consequently is important to select the correct devices related with the type of transmission and which compression is required. Before making the choice is useful to know which compression tools are craved. Here is a list of the different video coding standards:  MPEG-1: Is the standard of audio and video compression. Provides video at a resolution of 350x240 at 30 frames per second. This produces video quality slightly below the quality of conventional VCR videos. Includes audio compression format of Layer 3 (MP3).  MPEG-2: audio and video standard for broadcast of television quality. Offers resolutions of 720x480 and 1280x720 at 60fps with audio CD quality. Matches most of TV standards even HDTV. The principal use is for DVDs, satellite TV services and digital TV signals by cable. An MPEG-2 compression is able to reduce a 2 hour video to few gigabytes. While decompressing a MPEG-2 data stream no needs much computer resources, the encoding to MPEG-2 requires more energy to the process.  MPEG-3: Designed for HDTV but was replaced for MPEG-2  MPEG-4: Standard algorithm for graphics and video compression based on MPEG-1, MPEG-2 and Apple QuickTime technology. The MPEG-4 files are smaller than JPEG or QuickTime, therefore are designed to transfer video and images through a narrow
  • 15. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 15 bandwidth and sustain different mixtures of video with text, graphics and 2D or 3D animation layers.  MPEG – 7: Formally called Multimedia Content Description Interface, supplies a set of tools for multimedia content. Performed to be generic and not aimed at a specific use.  MPEG – 21: Allow a Rights Expression Language (REL) and Rights Data Dictionary. Describes a standard that defines the description of the content and the processes to access, search, store and protect the copyright of the content discordant with other MPEG standards that define compression coding methods. The above- mentioned are the standard but each one has specific parts depending on the use. Among these types the most important contemporaneously are:  MPEG-2  MPEG-4 → particularly the part number 10 which without increasing the complexity of design and the file size, implements good video quality at lower bit rates than the previous. Technologically called MPEG-4 H.264 / AVC. 3.4.2 MPEG-2 (H.262) MPEG-2 is a standard for “the generic coding of moving pictures and associated audio information”. Is an extension of the MPEG-1 international standard for digital compression of audio and video signals created to broadcast formats at higher bit rates than MPEG-1. Initially developed to serve the transmission of compressed television programs via broadcast, cablecast, and satellite, and subsequently adopted for DVD production and for some online delivery systems, defines a combination of lossy video compression and lossy audio data compression using the actual methods of storage, like DVDs or Blu-Ray, without a bandwidth restriction. The main characteristics are:  New prediction modes of fields and frames for interlaced scanning.  Improved quantification.  The MPEG-2 transport stream permits the multiplexing of multiple programs.
  • 16. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 16  New intra-code variable length frame (VLC). Is a code in which the number of bits used in a frame depends on the probability of it. More frame probability implies more bits intended by frame. Strong support for increased errors.  Uses the discrete cosine transform algorithm and motion compensation techniques to compression.  Provides for multichannel surround sound coding. MPEG-2 contains different standard parts to suit to the different needs. Also annexes various levels and profiles. 3.4.3 MPEG-2 FUNDAMENTALS Nowadays, a TV camera can generate 25 pictures per second, i.e., a frame rate of 25Hz. But in order to convert it to a digital television is necessary to digitalize the pictures in order to be processed with a computer. An image is divided in two different signals: luminance (Y) and chrominance (UV). Each image has one luma number and two chrominance components. The television colour signal Red-Green-Blue (RGB) can be represented with luma and chrominance numbers. Chrominance bandwidth can be reduced relative to the luminance signal without an influence on the picture quality. An image can also be defined with a special notation (4:2:2, 4:2:0). These are types of chroma sub-sampling relevant to the compression of an image, storing more luminance details than colour details. The first number refers to the luminance part of the signal, the second refers to the chroma. In 4:2:2 luminance is sampled 4 times while the chroma values are sampled twice at the same rate. Being a fact that the human eye is more sensitive to brightness than colour, chroma is sampled less than luminance without any variation for the human perception. Those signals are also partitioned in Macro blocks which are the basic unit within an image. A macro block is formed by more blocks of pixels. Depending on the codec, the block will be bigger or smaller. Normally the size is a multiple of 4. MPEG-2 coding creates data flow by three different frames: intra-coded frames (I frames), predictive-coded frames (P-frames), and bidirectional-predictive-coded frames (B-frames) called “GOP structure” (Group of Pictures structure).  I-frame: Coded pictures without reference to others. Is compressed directly from a original frame.
  • 17. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 17  P-frame: Uses the previous I-frame or P-frame for motion compensation. Each block can be predicted or intra-coded.  B-frame: Uses the previous I or P picture and offers the highest compression. One block in a B-picture can be predicted or intra-coded in a forward, backward or bidirectional way. A typical GOP structure could be: B1 B2 I3 B4 B5 P6 B7 B8 P9 B10 B11 P12. I-frames codes spatial redundancy while B-frames and P-frames code temporal redundancy. MPEG-2 also provides interlaced scanning which is a method of checking an image. The aim is to increase the bandwidth and to erase the flickering showing the double quantity of images per second with a half frame rate. For example, produce 50 images per second with a frame rate of 25Hz. The scan divides a video frame in two fields, separating the horizontal lines in odd lines and even lines. It enhances motion perception to the viewer. Depending on the number of lines and the frame rate, are divided in:  PAL / SECAM: 25 frames per second, 625 lines per frame. Used in Europe.  NTSC: 30 frames per second, 525 lines per frame. Used in North America. MPEG-2 encoding is organized into profiles. A profile is a "defined subset of the syntax of the specification". Each profile defines a range of settings for different encoder options. As most of settings are not available and useful in all profiles, these are designated to suit with the consumer requirements. A computer will need a hardware specific for the use, the same with a television or a mobile phone, but it would be capable to rate it in a particular profile. Then an encoder is needed to finish the compression. 3.4.4 MPEG-2 COMPRESSION BASICS Spatial Redundancy: A technical compression type which consists of grouping the pixels with similar properties to minimize the duplication of data in each frame. Involves an analysis of a picture to select and suppress the redundant information, for instance, removing the frequencies that the human cannot percept. To achieve this is employed a mathematical tool: Discrete Cosine Transform (DCT).
  • 18. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 18 Intra Frame DCT Coding: The Discrete cosine Transform (DCT) is a based transform with Fourier Discrete Transform with many applications to the science and the engineering but basically is applied on image compression algorithms. DCT is employed to decrease the special redundancy of the signals. This function has a good energy compaction property and so on accumulates most of the information in few transformed coefficients. In consideration of this the signal is converted to an new domain, in which only a little number of coefficients contain most of the information meanwhile the rest has got unappreciated values. In the new domain, the signal will have a much more compact representation, and may be represented mainly by a few transform coefficients. It is independent of the data. The algorithm is the same, regardless of the data applied in the algorithm. It is a lossless compression technique (negligible loss).The DCT is capable to interpret the coefficients in a frequency point. As a result of that, it can take a maximum of compression capacity profit. The result of applying DCT is an 8x8 array composed of distinct values divided in frequencies: • Low frequency implies more sensitive elements for the human eye. • High frequency means less cognizant components. Temporal Redundancy: Temporal compression is achieved having a view in a succession of pictures. Situation: An object moves across a picture without movement. The picture has all the information required until the movement and is not necessary to encode again the picture until the alteration. Thereafter, is not necessary to encode again all the picture but only the part that contains the movement owing that the rest of the scene is not affected by the moving object because is the same scene as the initial picture. The notation with is determined how much movement is contained between two successive pictures is motion compensated prediction. As a result of isolating a picture is not a good fact because probably an image is going to be constructed from the prediction from a previous picture or maybe the picture may be useful to create the next picture.
  • 19. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 19 Motion Compensated Prediction: Identify the displacement of a given macro block in the current frame respect from the position it had in the frame of reference. The steps are:  Search for the same macro blocks of the frame to be encoded in the frame of reference.  If there is not the same macro block then the corresponding motion vector is encoded.  The more similar macro block (INTER) is chosen and later on is necessary to encode the motion vector.  If there is no similar block (INTRA) these block is encoded using only the spatial redundancy. 3.4.5 H.264 / MPEG-4 AVC H.264 or MPEG-4 part 10 defines a high-quality video codec compression developed by the Video Coding Expert Group (VCEG) and the Motion Picture Experts Group (MPEG) in order to create a standard capable of providing good quality image, but using rates actually lower than in previous video standards such as MPEG-2 and without increasing the complexity of its design, since otherwise it would be impractical and expensive to implement. A goal that is proposed by its creators was to increase its scope, i.e., allow the standard to be used in a wide variety of networks and video, both high and low resolution, DVD storage, etc. In December 2001 came the Joint Video Team (JVT) consisting of experts from VCEG and MPEG, and developed this standard to be finalized in 2003. The ISO / IEC (International Organization for Standardization / International Electro technical Commission) and ITU-T (International Telecommunication Union-Telecommunication Standardization Sector) joined this project. The first is responsible of the rules for standards by focusing on manufacturing and the second focuses mainly on tariff issues. The latter planned to adopt the
  • 20. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 20 standard under the name of ITU-T H.264 and ISO / IEC wanted to name him MPEG-4 Part 10 Advanced Video Codec (AVC), hence the name of the standard. To set the first code they firstly based on looking at the previous standard algorithms and techniques to modify or if not create new ones:  DCT structure in conjunction with the motion compensation of previous versions was efficient enough so there was no need to make fundamental changes in its structure.  Scalable Video Coding: An important advance because it allows each user, regardless of the limitations of the device, receives the best possible quality, issuing only a single signal. This is possible because it provides a compressed stream of video and users can take only what you need to get a better video quality according to their technical limitations of receipt. The MPEG-4 has more complex algorithms and better benefits giving a special quality improvement, which provides a higher compression rate than MPEG-2 for an equivalent quality. 3.4.6 MAIN ADVANTAGES For the MPEG-4 AVC the main important features are: 1. Provides almost DVD quality video, but uses lower bit rate so that it's feasible to transmit digitized video streams in LAN, and also in WAN, where bandwidth is more critical, and hard to guarantee. 2. Dramatically advances audio and video compression, enabling the distribution of content and services from low bandwidths to high-definition quality across broadcast, broadband, wireless and packaged media. 3. Provides a standardized framework for many other forms of media — including text, pictures, animation, 2D and 3D objects – which can be presented in interactive and personalized media experiences. 4. Supports the diversity of the future content market. 5. Offers a variety of so-called “profiles,” tool sets from the toolbox, useful for specific applications, like in audio-video coding, simple visual or advanced simple visual profile, so users need only implement the profiles that support the functionality required.
  • 21. INDUSTRIAL TRAINING REPORT DEPARTMENT OF ECE, AJCE Page 21 6. Uses DCT algorithm mixed with motion compensation. It clearly shows that MPEG4 wants to be a content-based representation standard independent of any specific coding technology, bit rate, scene type of content, etc. This means it shows at the same time why and how MPEG4 is different from previous moving pictures coding standards. 8. Low latency The most important and relevant are: 1. Reduces the amount of storage needed 2. Increases the amount of time video can be stored 3. Reduces the network bandwidth used by the surveillance system 3.4.7 MAIN NEW FEATURES  Type of image : Adding two extra frames, apart from I, P, B, the SP (Switching P) and SI (Switching I) which allows passing from one video to another applying the temporal or spatial prediction with the possibility to reconstruct accurate values of the sample even when using different reference images than the images used in the prediction process.  Motion compensation: With variant sizes of the sub-blocks; 16x8, 8x16, 8x8 pixels, 8x8 size can be partitioned into 8x4, 4x8 or 4x4 groups of pixels to providing a greater accuracy into the estimation.  Transform: a DCT modification with a 4x4 pixel size, using Integer coefficients to avoid the approximation errors, getting a better precision to calculate the coefficients.  Entropy coding: Coding method without errors that consist in treating all the transform array in a zigzag way, bringing together groups with similar frequencies, insert coded zeros and applying VLC coding for the rest. This technique reduces in a 5% the file size but with a larger coding and decoding time. 3.5. MASTER-CONTROL ROOM The Master control room is the place where the on-air signal is controlled. It may include controls to playout television programs and television commercials, switch local or
3.5 MASTER-CONTROL ROOM

The master control room (MCR) is the place where the on-air signal is controlled. It may include controls to play out television programs and commercials, switch local or television network feeds, record satellite feeds and monitor the transmitter, or these items may be housed in an adjacent equipment rack room. The term "studio" usually refers to a place where a particular local program is originated. If the program is broadcast live, the signal goes from the PCR to the MCR and then out to the transmitter.

Fig 3.5: SCHEMATIC LAYOUT OF MCR / PLAYOUT

When all of the diverse elements of the daily program schedule come together, there has to be some way of integrating them in a connected way. That is where master control comes in. Master control operators (and their equipment) are responsible for the final output and look of the station.

3.5.1 ROUTER / SWITCHER
The heart of the master control operator's equipment is the switcher, which can perform cuts, dissolves and keys like a production switcher. There is one significant difference, however: the MCR switcher also takes the audio from whichever source is selected, and is therefore called an "audio-follow" switcher. In addition to the regular program audio, the MCR operator (or "MCO") can send out additional material from tape or digital cartridge, either replacing the existing audio completely or mixing it over the continuing source. This mix is made by lowering the level of the program material and is called "voice-over" mode; a minimal sketch of it is given at the end of this section. The keyer on the switcher is generally used to superimpose station identification information over the program material, or other information such as the time of day.

With all of this information and technical detail to watch over, master control operations are becoming increasingly computerized. The computer remembers and activates transition sequences. It also cues, rolls and stops film projectors and VTRs, and calls up any number of slides from the still-store system.

The development of MCR switchers is going in two directions. One can have the switcher perform more and more visual tricks, or it can be kept simple enough that the operator does not have to climb all over it to reach everything, or take a computer course to use it.
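As a rough illustration of the voice-over mode described above, the sketch below attenuates ("ducks") the program audio and sums an announcer feed on top of it. It is a minimal sketch, assuming floating-point sample buffers and an arbitrary duck level of -12 dB chosen only for the example; real MCR switchers do this in hardware or in the playout automation, not in application code like this.

```python
import numpy as np

def voice_over(program, announcer, duck_db=-12.0):
    """Mix an announcer feed over program audio in 'voice-over' mode.

    program, announcer: 1-D float arrays of samples in the range [-1.0, 1.0].
    duck_db: how far the program audio is lowered while the voice-over plays
             (example value, not a broadcast norm).
    """
    gain = 10.0 ** (duck_db / 20.0)            # dB -> linear gain
    n = min(len(program), len(announcer))
    mix = program.copy()
    mix[:n] = program[:n] * gain + announcer[:n]
    return np.clip(mix, -1.0, 1.0)             # simple safety clip

# Example: one second of program tone with a half-second announcer burst over it.
fs = 48000
t = np.arange(fs) / fs
program = 0.5 * np.sin(2 * np.pi * 440 * t)
announcer = 0.4 * np.sin(2 * np.pi * 220 * t[:fs // 2])
out = voice_over(program, announcer)
```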
3.5.2 CHARACTER GENERATOR

The character generator looks and works like a typewriter, except that it writes letters and numbers on the video screen instead of on paper. This titling device has all but replaced conventional studio card titles and has become an important production tool. The more sophisticated character generators can produce letters of various sizes and fonts, and simple graphic displays such as curves and rectangles (blocks); in addition, backgrounds can be colorized. To prepare the titles or graphs, you enter the needed information on a computer-like keyboard. You can then either integrate the information directly into the program in progress, or store it for later use. Whether it is stored on computer disk or in the generator's RAM (random access memory), each screen full of information is keyed to a specific address (an electronic page number) for easy and fast recall.

Most character generators have two output channels: a preview channel and a program channel. The preview channel is for composing titles; the program channel is for actually integrating the titles with the main program material. The preview channel has a cursor that shows where on the screen the word or sentence will appear. By moving the cursor into various positions, you can centre the information or move it anywhere on the screen. Various controls allow you to make certain words flash, to roll the whole copy up and down the screen, or to make it crawl sideways. More sophisticated generators can move words and graphics on and off the screen at any angle, and at any of several speeds.

3.5.3 LOGO / GRAPHICS GENERATOR

With graphics generators you can draw images directly on the screen. These units are also called video art, or paint box, systems. You draw onto an electronic tablet with a stylus. An output monitor immediately reflects your artistic efforts, while another offers a series of design options (such as colour, thickness of line, styles of letters, or special effects) to improve the artwork. Newer systems allow you to change the size and position of a graphic so that it can be used full-frame, in an over-the-shoulder position (for news), or as a lower third for information display. More than one original image can be integrated into a new full-frame composite (a montage). Character generators are now often part of the software package. Retouching (softening edges, adding or removing highlights, matching colours, blending objects into their backgrounds) is commonly available.

Most graphics generators work in real time: your drawing is immediately displayed on the screen, while less sophisticated paint boxes take a moment before they display the effects. Better quality units also have built-in circuits and software to prevent aliasing (jagged detail on diagonal lines); the jagged edges are hidden by slightly blurring the edges of the lines.

Personal computers are now often used to generate television graphics, both as character generators and as paint boxes. Some impressive effects can be produced with these easy-to-use graphics machines; such systems have an internal board that converts the regular PC output to standard NTSC scanning. A small sketch of the page-address idea described in section 3.5.2 is given below.
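To make the page-address idea from section 3.5.2 concrete, here is a minimal sketch of a character-generator page store with separate preview and program channels. The class name, fields, page numbers and text are purely illustrative; actual CG systems expose far richer formatting, positioning and animation controls.

```python
class CharacterGenerator:
    """Toy model of a CG: pages of text keyed to an address, with a
    preview channel for composing and a program channel for air."""

    def __init__(self):
        self.pages = {}        # address (electronic page number) -> text
        self.preview = None    # address currently on the preview channel
        self.program = None    # address currently on the program channel

    def store(self, address, text):
        self.pages[address] = text

    def cue(self, address):
        """Bring a stored page up on preview for checking and composition."""
        self.preview = address
        return self.pages.get(address, "")

    def take(self):
        """Put whatever is on preview onto the program output."""
        self.program = self.preview
        return self.pages.get(self.program, "")

cg = CharacterGenerator()
cg.store(101, "JAIHIND TV - NEWS AT 9")        # hypothetical page numbers and text
cg.store(102, "Live from Thiruvananthapuram")
cg.cue(101)
print(cg.take())    # page 101 is now keyed over the program video
```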
3.5.4 ENCODER

An encoder is a device used to convert analog video signals into digital video signals. Most encoders also compress the information so that it can be stored or transmitted in the minimum possible space. To achieve this they exploit the spatial and temporal redundancy present in video sequences: by eliminating redundant information, a much more compact encoding is obtained. Spatial redundancy is removed by coding the DCT coefficients, while temporal redundancy is removed by motion-compensated prediction, with motion estimation between successive blocks. The method of operation is:
 The signal is separated into luma (Y) and chroma (C) components.
 The prediction error (the difference between a block and its motion-compensated estimate) is computed and transformed with the DCT.
 The coefficients are quantized and entropy coded (VLC).
 The coefficients are multiplexed and passed to the buffer, which controls the quality of the signal.
 The output bit stream of the buffer is kept at a constant rate, because the signal is assumed to be transmitted over a channel with a steady speed.
 The quantized image is reconstructed and kept as a future reference for prediction and motion estimation.

The DCT algorithm and the block-based quantization can cause visible discontinuities at the edges of the blocks, leading to the well-known "blocking effect": coarse quantization discards small coefficients, so neighbouring reconstructed blocks no longer match exactly at their borders. For this reason newer video coding standards, such as H.264/MPEG-4 AVC, include deblocking filter algorithms that reduce this effect; a small numerical sketch of the transform-and-quantize round trip is given below.
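The following sketch runs one 4x4 block through the steps just listed: an orthonormal DCT, coarse scalar quantization, and reconstruction. It is only a toy, and the block values and quantization step are invented for the example; H.264 actually uses an integer approximation of this transform with per-coefficient scaling, but the round-trip error shown here is the same kind of error that accumulates at block borders and produces the blocking effect.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix(4)

# A hypothetical 4x4 residual block (prediction error values).
block = np.array([[12.,  8.,  4.,  0.],
                  [ 8.,  6.,  2., -2.],
                  [ 4.,  2., -2., -4.],
                  [ 0., -2., -4., -6.]])

coeffs = C @ block @ C.T                    # forward 2-D DCT
q_step = 8.0                                # coarse quantization step (example value)
quantized = np.round(coeffs / q_step)       # quantize
rebuilt = C.T @ (quantized * q_step) @ C    # dequantize and inverse DCT

print(np.round(rebuilt - block, 1))         # reconstruction error introduced by quantization
```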
3.5.5 MODULATOR

In television systems, signals have to be carried within a limited frequency spectrum, with definite lower and upper frequency limits. A modulator is the device in charge of transporting one signal inside another signal so that it can be transmitted: it shifts a low-frequency signal onto a different (carrier) frequency. As a result, the transmission frequency can be controlled and the spectrum limitation above can be respected. In essence, modulating a signal consists of changing a parameter of a carrier wave according to the variations of the modulating signal (the information to be transmitted).

3.5.6 UPCONVERTER

An upconverter is used to convert television signals to VHF or UHF signals, regardless of whether the input is a digital or an analog signal. The device detects the kind of incoming signal and, depending on whether it is digital or analog, creates the suitable reference signal.

3.6 DIGITAL VIDEO BROADCASTING (DVB)

Once the video is encoded in the desired format (MPEG-2 or MPEG-4/H.264 AVC), it has to be put on a network to be distributed and transported to the end user. Several kinds of connection exist for this (satellite, terrestrial, cable, etc.), and the transmission format depends on the medium used; digital terrestrial television mostly uses a terrestrial connection (DVB-T) combined with a connection via satellite (DVB-S). These connections are in charge of carrying the digital signal to the end user, but first that signal must be assembled, since at this point there are only separate video, audio and data signals.

Digital Video Broadcasting (DVB) has become a synonym for digital television and for data broadcasting world-wide. DVB services have been introduced in Europe, in North and South America, in Asia, Africa and Australia. DVB is the technology that makes possible the broadcasting of "data containers" in which all kinds of digital data, up to a data rate of 38 Mbit/s, can be transmitted at bit-error rates of the order of 10⁻¹¹. A rough capacity estimate for such a container is sketched below.
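As a back-of-the-envelope check on those figures, the few lines below estimate how many compressed programmes fit into one DVB "data container". The per-programme bit rate is only the example value used in this report (about 5 Mbit/s for MPEG-2 SD), and the container rates are the typical figures quoted here, not fixed properties of the standard; transport-stream and service-information overhead is ignored.

```python
# Rough capacity of one DVB multiplex ("data container"), ignoring overhead.
container_mbps = 38.0          # typical satellite/cable channel payload
terrestrial_mbps = 24.0        # typical terrestrial channel payload (see next section)
programme_mbps = 5.0           # example MPEG-2 SD programme rate from this report

print(int(container_mbps // programme_mbps))    # about 7 programmes per satellite channel
print(int(terrestrial_mbps // programme_mbps))  # about 4 programmes per terrestrial channel
```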
3.6.1 BASEBAND PROCESSING

The transmission techniques developed by DVB are transparent with respect to the kind of data delivered to the customer. They can make bit streams of (typically) 38 Mbit/s available within one satellite or cable channel, or 24 Mbit/s within one terrestrial channel. On the other hand, a digital video signal created in today's TV studios amounts to about 166 Mbit/s and thus cannot possibly be carried over the existing media. Data rate reduction, or "source coding", is therefore a must for digital television.

One of the fundamental decisions taken during the early days of DVB was the selection of MPEG-2 for the source coding of audio and video and for the creation of programme elementary streams, transport streams, etc. (the so-called systems level). Three international standards describe MPEG-2 systems, video and audio. Using MPEG-2, a video signal can be compressed to a data rate of, for example, 5 Mbit/s and still be decompressed in the receiver to deliver a picture quality close to what analogue television offers today.

The term "channel coding" describes the algorithms used to adapt the source signal to the transmission medium. In the world of DVB it includes the FEC (Forward Error Correction) and the modulation, as well as all kinds of format conversion and filtering.

3.7 DVB-S2: THE SECOND GENERATION STANDARD FOR SATELLITE BROADBAND SERVICES

DVB-S2 is a digital satellite transmission system developed by the DVB Project. It makes use of the latest modulation and coding techniques to deliver performance that approaches the theoretical limit for such systems. Satellite transmission was the first area addressed by the DVB Project, in 1993, and DVB standards form the basis of most satellite DTV services around the world today, and therefore of most digital TV in general. DVB-S2 will not replace DVB-S in the short or even the medium term, but it makes possible the delivery of services that could never have been delivered using DVB-S.

The original DVB-S system, on which DVB-S2 is based, specifies the use of QPSK modulation along with various tools for channel coding and error correction. Further additions were made with the emergence of DVB-DSNG (Digital Satellite News Gathering), for example allowing the use of 8PSK and 16QAM modulation. DVB-S2 benefits from more recent developments and has the following key technical characteristics:
There are four modulation modes available. QPSK and 8PSK are intended for broadcast applications in non-linear satellite transponders driven close to saturation, while 16APSK and 32APSK, which require a higher C/N, are mainly targeted at professional applications such as news gathering and interactive services.

DVB-S2 uses a very powerful Forward Error Correction (FEC) scheme, a key factor in achieving excellent performance in the presence of high levels of noise and interference. The FEC system is based on the concatenation of BCH (Bose-Chaudhuri-Hocquenghem) outer coding with LDPC (Low Density Parity Check) inner coding.

Adaptive Coding and Modulation (ACM) allows the transmission parameters to be changed on a frame-by-frame basis depending on the particular conditions of the delivery path for each individual user. It is mainly targeted at unicast interactive services and at point-to-point professional applications.

DVB-S2 also offers optional backwards-compatible modes that use hierarchical modulation to allow legacy DVB-S receivers to continue to operate, whilst providing additional capacity and services to newer receivers.

3.8 TELEPORT

A telecommunications port, more commonly called a teleport, is a satellite ground station with multiple parabolic antennas (an antenna farm) that functions as a hub connecting a satellite or geocentric orbital network with a terrestrial telecommunications network. Teleports may provide various broadcasting services among other telecommunications functions, such as uploading computer programs or issuing commands over an uplink to a satellite.

3.9 ENPS

 ENPS (Electronic News Production System) is a software application developed by the Associated Press's Broadcast Technology division for producing, editing, timing, organizing and running news broadcasts. The system is scalable and flexible enough
to handle anything from the local news at a small-market station to large organizations spanning remote bureaus in multiple countries.
 The basic organization of each news broadcast is called a "rundown" (US) or "running order" (UK). The rundown is a grid listing scripts, video, audio, character generator data, teleprompter control, director notations, camera operator cues and timing estimates for each section of the broadcast (a minimal sketch of such a rundown entry follows this list).
 ENPS integrates scripts, wire feeds, device control and production information in a server/client environment. On the server side, ENPS runs an identical backup server (called a "buddy") at all times as a fail-safe. If the primary server fails, all users are redirected to the buddy server until the primary comes back online. All document changes are queued on the buddy and copied back to the primary automatically when it returns to production. Note that this is not a mirror server: changed data is copied to the buddy, but there is no direct replication between the servers, so if data is corrupted by a hardware failure on one server, the corruption is not replicated to the buddy.
 Device control can be managed either through a serial interface or through the MOS (Media Object Server) protocol. MOS functionality is included in the base ENPS license, but may be an extra add-on for the device that needs to interface with ENPS. MOS items such as video or audio clips can be added directly to scripts and then used by third-party software and devices during the broadcast.
 ENPS was originally developed by the Associated Press for use at the BBC in the United Kingdom as a replacement for the text-mode system BASYS (which later developed into Avid iNEWS), and the Corporation has the largest installation of the system, with over 12,000 users in 300 different locations.
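As a purely illustrative aid to the "rundown" concept described above, the sketch below models a rundown as a list of timed items. The field names, story slugs and timings are invented for the example and do not reflect ENPS's actual data model or the MOS protocol.

```python
from dataclasses import dataclass, field

@dataclass
class RundownItem:
    slug: str                 # short story name used by the gallery (hypothetical)
    source: str               # e.g. "LIVE", "VT", "GFX" (labels invented here)
    est_seconds: int          # producer's timing estimate for this item
    script: str = ""          # presenter script / teleprompter text
    notes: str = ""           # director or camera operator notes

@dataclass
class Rundown:
    title: str
    items: list = field(default_factory=list)

    def total_estimate(self):
        """Sum of the timing estimates for the whole broadcast."""
        return sum(item.est_seconds for item in self.items)

nine_pm = Rundown("NEWS AT 9")
nine_pm.items.append(RundownItem("HEADLINES", "VT", 45, script="Good evening..."))
nine_pm.items.append(RundownItem("WEATHER", "GFX", 60, notes="CG page 204"))
print(nine_pm.total_estimate())   # 105 seconds planned so far
```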
CHAPTER 4
DIGITAL SATELLITE NEWS GATHERING (DSNG)

4.1 DSNG

Fig.4.1 BLOCK DIAGRAM OF DSNG (the diagram blocks include the microphone input and a GPS-based antenna tracking system)

Satellite offers the unique possibility of transmitting images and sound from almost anywhere on the planet. From a fully equipped uplink truck or from a light flyaway travel case, DSNG makes it possible to bring live news and sport to millions of viewers.
DSNG is all about being at the right place at the right time. This often means travelling light in order to get there fast, but you do not need to compromise on picture or signal quality to be right where it is happening, nor do you need to be cut off from your news production facilities.

Digital satellite news gathering (DSNG) is a system that combines electronic news gathering (ENG) with satellite news gathering (SNG). As time passed and electronic devices became smaller, a whole DSNG system could be fitted into a van. DSNG vans are now common; they are extensively used in covering news events.

4.2 EQUIPMENT

Fig.4.2 DIGITAL SATELLITE NEWS GATHERING VAN

The DSNG van, also known as an "outside broadcast" (OB) van, is a mobile communications system using state-of-the-art equipment to produce and transmit news as it happens, where it happens. A typical DSNG van is outfitted with a two-way high-power-amplifier satellite system, a production component and an energy framework. The van also has a custom, powerful electrical system, as it needs to power all the equipment it carries without any external source.
There are also several additional pieces of equipment inside the van: a travelling-wave-tube amplifier (TWTA) control system, an encoder/modulator, primary and secondary monitors, a video synthesizer/mixer and an audio mixer. External equipment includes high-definition cameras, a solid-state power amplifier and a low-noise block downconverter. Most DSNG manufacturers can outfit vans with the necessary equipment with ease, and some even offer stand-alone modular DSNG equipment systems, whereby qualified operators can easily move and install equipment from one vehicle to another. DSNG vans have five main working sections: the monitoring section, the audio engineering section, the data and power storage area, the video control area and the transmission area.

4.3 TRANSMISSION MECHANICS OF OLDER SYSTEMS

With older DSNG setups, as soon as a camera captures news images, the antenna on the OB van uplinks them in real time to a geostationary satellite, which in turn relays the raw footage to the network. The network produces a local copy of the received images for editing. During this editing process, archive images from the network's libraries are sometimes integrated into the edited video as the network sees fit. The edited video is then ready for play-out.

4.4 TRANSMISSION MECHANICS OF NEWER SYSTEMS

With the advent of interactive tapeless methods of DSNG, editing is done simultaneously through a laptop-based rewriting/proofreading terminal. The van is equipped with transmission and reception facilities, which allow rough and condensed files to be transmitted to and received from the remote network over the geostationary satellite. A prime video server processes the files for storage and eventual broadcast. The DSNG system maximizes bandwidth use to allow faster turnover of news in real time.

A modern DSNG van is a sophisticated affair, capable of deployment practically
anywhere in the civilized world. Signals are beamed between a geostationary satellite and the van, and between the satellite and a control room run by a broadcast station or network. In the most advanced systems, Internet Protocol (IP) is used. Broadcast engineers are currently working on designs for remotely controlled, robotic DSNG vehicles that can be teleoperated in hostile environments such as battle zones, deep-space missions and undersea explorations without endangering the lives of human operators.

4.5 WORKING OF DSNG

The DSNG of Jaihind TV communicates through the NSS-12 satellite at 57.0° E. Before communication starts, the dish antenna mounted on the DSNG van has to be aligned with the satellite: the satellite and the antenna must be in Line Of Sight (LOS). The antenna position is adjusted so that the beacon signal transmitted by the satellite is received with maximum power. Once the satellite has been tracked, communication can begin. The camera output is first encoded and then modulated using QPSK, which produces an L-band signal. This signal is upconverted and then amplified by a TWTA. The amplified signal is sent to the feed arm of the antenna through a rectangular waveguide, and the feed radiates it onto the dish so that maximum power is radiated in the LOS direction. This is the basic working of a typical DSNG. One of the most difficult and important operations in these steps is pointing the DSNG antenna at its satellite; a worked look-angle example is given below. It should also be noted that the vehicle must always be kept level, and special equipment is used for this. DSNG vehicles are designed and provided by VSSC.
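To give an idea of what "pointing the antenna at the satellite" involves, the sketch below computes nominal azimuth and elevation look angles from Thiruvananthapuram to a geostationary satellite at 57° E using standard textbook geometry. The station coordinates and the simplifying assumptions (spherical Earth, ideal geostationary orbit radius) are ours; in practice the operator refines this coarse pointing by peaking on the satellite's beacon, as described above.

```python
import math

def look_angles(lat_deg, lon_deg, sat_lon_deg):
    """Approximate azimuth/elevation to a visible geostationary satellite.

    Textbook formulas assuming a spherical Earth (radius 6378 km) and an
    ideal geostationary orbit radius of 42164 km.
    """
    lat = math.radians(lat_deg)
    dlon = math.radians(sat_lon_deg - lon_deg)
    r_ratio = 6378.0 / 42164.0

    cos_g = math.cos(lat) * math.cos(dlon)      # cosine of the central angle
    el = math.degrees(math.atan2(cos_g - r_ratio, math.sqrt(1.0 - cos_g ** 2)))

    a = math.degrees(math.atan2(math.tan(abs(dlon)), math.sin(abs(lat))))
    if lat_deg >= 0:        # northern-hemisphere station: antenna points south-ish
        az = 180.0 - a if sat_lon_deg > lon_deg else 180.0 + a
    else:                   # southern-hemisphere station: antenna points north-ish
        az = a if sat_lon_deg > lon_deg else 360.0 - a
    return az % 360.0, el

# Thiruvananthapuram (approx. 8.5 N, 76.9 E) looking at a satellite at 57.0 E.
az, el = look_angles(8.5, 76.9, 57.0)
print(round(az, 1), round(el, 1))    # roughly 248 deg azimuth, 65 deg elevation
```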
CHAPTER 5
CONCLUSION

We underwent our industrial training at Jaihind TV, Thiruvananthapuram, and gained great experience and exposure in the world of broadcasting. We learned the telecasting methods for live programs, news and recorded programs, and also got an idea of the operation of the Digital Satellite News Gathering vehicle of Jaihind TV. The transmission of the programs to the satellite is done from Noida (a teleport owned by EsselSyam Satcom) near Delhi, up to which the programs are carried over optical fibre cable. The signals from the satellite are then received at remote stations using commercial satellite receivers. We also gained an understanding of the Production Control Room, Master Control Room, Live Room, etc.