Video Workshop by Frederick Rodrigues: Presentation Transcript
SD To HD Fundamentals Video Basics
SD PAL video signals are made of 25 frames per second. Each frame is made of 2 fields, and each field contains half of the 625 lines per frame. This picture shows the way the fields are displayed to become a frame.
Video is displayed coherently by synchronising the incoming signal with the display system's internal signal. This synchronisation is done by locking the frequency of the display system's refresh rate, and of the output system's refresh rate, to a common, known frequency. This common known frequency is taken from the modulation of the power supply.
The video systems synchronise to the peaks and troughs of the electrical modulation. In Australia our power modulates at 50 Hz, or 50 times per second. This means our video signals run at 25 fps, or 50 fields per second. In America and most of Asia the power modulates at 60 Hz, so NTSC (the video standard of the USA and much of Asia) runs at 30 fps, or 60 fields per second (strictly 29.97 fps since the introduction of colour).
PAL vs NTSC. NTSC, at 30 fps, has a higher frame rate than PAL, so is it better? Not really: the vertical resolution of NTSC is only 525 lines vs. 625 lines for PAL. NTSC has only 84% of the resolution per frame; the extra frames provide better motion capture. The NTSC colour profile is also slightly more limited than the PAL version, allowing PAL to look brighter and more vivid.
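The trade-off above is just arithmetic: both systems scan roughly the same number of lines per second, and simply split that budget differently between resolution and frame rate. A minimal sketch, using the line counts quoted above:

```python
# PAL trades frame rate for vertical resolution; NTSC does the opposite.
PAL_LINES, PAL_FPS = 625, 25
NTSC_LINES, NTSC_FPS = 525, 30

# Vertical resolution of NTSC relative to PAL, per frame.
resolution_ratio = NTSC_LINES / PAL_LINES            # 0.84
# Lines scanned per second: both systems move a similar amount of data.
pal_lines_per_sec = PAL_LINES * PAL_FPS              # 15625
ntsc_lines_per_sec = NTSC_LINES * NTSC_FPS           # 15750

print(f"NTSC has {resolution_ratio:.0%} of PAL's vertical resolution")
print(f"PAL: {pal_lines_per_sec} lines/s, NTSC: {ntsc_lines_per_sec} lines/s")
```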
Bigger is better. HD video Part 1
HD Video. HD video works in the same way as SD, only better! HD video has a much larger frame size, so more pixels are used to make each picture, making it sharper and clearer, as well as giving some other benefits we will discover later. This picture shows the relative frame sizes of SD and HD video standards.
HD video comes in several flavours: 1920 x 1080 (the largest size) and 1280 x 720 (a smaller version of the same aspect ratio). Unlike SD, HD frame sizes are standardised, so whether the signal is 60i, 30p, 50i, 25p, 24p or 23.976p, the pixel aspect ratio and frame sizes will always be either 1920 x 1080 or 1280 x 720. This holds true for all professional broadcast HD standards, but then everything gets weird. We will cover much more about HD video soon, but first, let's have a look at digital video.
A jungle of jargon gets napalmed. Digital video
Codecs and containers. Containers are the way a video file's data is stored; some examples are .mov, .avi, .mkv and .vob. These containers control the way the video data inside is formatted. This is important, but not as important as the codec. The codec of a video file defines the size, quality and efficiency of its playback, and the codec is independent of the container. For example, you can have an .avi file, a .mov file and an .mkv file all containing a DivX-encoded data stream.
What are codecs? Codec means compressor-decompressor. It is the language that a video file is written in. Some codecs use a shorthand, so the files are small but hard to decode; some codecs use no compression, giving you large files that don't need decompressing. There are many different codecs, hundreds of them, and some companies make their own proprietary codecs, like Avid's DNxHD codecs or Apple's ProRes codecs.
Why do we use codecs? Uncompressed SD video, where all the frames are stored as full-size pictures, uses around 100 GB per hour. Although this data is easy to manipulate, there is a lot of it: you need a powerful computer to apply effects to uncompressed video, and very fast disks to play it from. Uncompressed HD video uses up to 1 TB per hour, making it almost impossible to play without many thousands of dollars' worth of hardware. Codecs make video files small and portable without making them look too bad.
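Those storage figures are easy to sanity-check yourself. A back-of-envelope sketch, assuming 8-bit 4:2:2 sampling (2 bytes per pixel); 10-bit or 4:4:4 video is proportionally larger, which is where the higher figures come from:

```python
# Rough uncompressed storage cost: pixels x bytes x frames x seconds.
def gb_per_hour(width, height, fps, bytes_per_pixel=2):
    bytes_per_sec = width * height * bytes_per_pixel * fps
    return bytes_per_sec * 3600 / 1e9

print(f"SD PAL (720x576 @ 25, 4:2:2):  {gb_per_hour(720, 576, 25):.0f} GB/hour")
print(f"HD (1920x1080 @ 25, 4:2:2):    {gb_per_hour(1920, 1080, 25):.0f} GB/hour")
# Deeper colour (4 bytes/pixel) pushes HD towards the 1 TB/hour mark:
print(f"HD (1920x1080 @ 25, 4 B/px):   {gb_per_hour(1920, 1080, 25, 4):.0f} GB/hour")
```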
There are 2 main things that codecs do that decide what the movie looks like. Codecs compress frames: just like compressing a picture into a small JPEG, each frame is compressed into a smaller file and then decoded on playback. This frame compression is generally controllable when you export your movie. Codecs also compress motion, which is a very complex task. Groups of frames are analysed for similarities; a full frame is stored at the head of the group and another at the tail (these are called key frames, and may occur only a couple of times per second). The codec then stores information about what changes between these key frames and when.
The motion compression process results in a Long-GOP (Group Of Pictures) compressed file. Long-GOP codecs like H.264, MPEG-4 and MPEG-2 result in small files. HOWEVER! These small files take a lot of calculation to compress and decompress to and from video streams, so encoding to a Long-GOP codec can take ages, especially when a multi-pass encode is done. A multi-pass encode essentially encodes the video more than once, using the earlier passes to work out how best to distribute data in the final stream.
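The key-frame-plus-changes idea above can be sketched in a few lines. This is a toy illustration only: real Long-GOP codecs use motion vectors and block transforms, not raw per-pixel differences.

```python
# Toy Long-GOP compression: store the first (key) frame in full,
# then only the per-pixel differences for the frames that follow.
def encode_gop(frames):
    key = frames[0]
    deltas = [[cur - prev for cur, prev in zip(f, p)]
              for f, p in zip(frames[1:], frames[:-1])]
    return key, deltas

def decode_gop(key, deltas):
    frames = [key]
    for d in deltas:
        frames.append([a + b for a, b in zip(frames[-1], d)])
    return frames

# Three tiny 4-"pixel" frames where only one pixel changes each frame.
frames = [[10, 10, 10, 10], [10, 12, 10, 10], [10, 12, 9, 10]]
key, deltas = encode_gop(frames)
assert decode_gop(key, deltas) == frames   # lossless round trip
print(deltas)  # mostly zeros, which is why the stream compresses so well
```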
Here is a diagram of a Long-GOP encode; decoding is the same amount of work in reverse.
Decoding from Long-GOP encoded files also takes a lot of calculation. Different media players also implement the decoding process differently, so the video can actually look different when you play it back on different systems. DVDs use a VOB container and an MPEG-2 codec; the data rate is small and the compression is high. Highly compressed DVDs often group a 4x4 block of pixels into one, and can contain as few as 2 full key frames per second. With a professional encoder (like the one bundled with Scenarist), results are often good when played back with a high-quality decoder.
A final note on containers. The codec is generally more important than the container, BUT: containers are designed around the underlying graphics architecture of an operating system. Windows-based systems use a platform called DirectShow to run video, and DirectShow is better at decoding .avi files. Apple's operating systems use QuickTime, through the Core Image processing framework, as the video display engine; this platform is optimised for .mov files. Having said that, both types of video will play on both operating systems with the correct software.
Codecs and live processing. Live Video
Isadora, Quartz Composer, Jitter and ArKaos VJ are just a few examples of live video software. They are all very different, and choosing the right one is a matter of personal taste; however, they mostly work in the same way. When video files are played back, each frame is rendered into a still image and then sent down a processing pipe. So far, the architecture of operating systems and computer hardware has sent these frames bouncing in and out of the CPU and RAM and back again until they are fed to a graphics card for rendering.
If you want to process live video with effects, make it easy for your computer. Don't use a Long-GOP codec for your video files; there is too much work in the decoding process. Where possible, don't use highly compressed frames either: just getting the frames into the system for manipulation will take a lot of your CPU power. Avoid using containers that aren't optimised for your computer architecture, and experiment with different file types and codecs to find the best settings for you.
Live video and a growing context. VJs, media artists and performance artists are just some examples of people who use live video. By live I mean applying effects as a real-time process, not just playing prepared footage. The effects may be as simple as a fade at the end of a scene, or making those crazy rave tunnels that send pill heads into deep trances. What once was done with DVD players and analogue video mixers can now all be done inside a laptop. As HD video mixers are still prohibitively expensive, laptop video mixing is the best way to present live HD footage. As well as choosing the right container and codec, it is important to use consistent frame sizes.
To minimise the work for your computer, match the output size (the resolution of your display system) with your input size (the resolution of your video sources). If they aren't matched, at least make them even-number multiples. If your output is from an analogue RGB output on your laptop to a projector, and the projector screen resolution is 1280 x 1024, ideally all your video sources would have a frame size of 1280 x 1024, or 640 x 512, or 320 x 256. This way, when your computer outputs the final rendered frame, it doesn't have to split and average pixels; it just duplicates them. This will also give you smoother movement and diagonals in your output.
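A quick way to check whether a source size scales cleanly into an output size is to test that both dimensions divide evenly by the same whole number. A minimal sketch:

```python
# True if the output is an exact whole-number upscale of the source,
# so the renderer can duplicate pixels instead of resampling them.
def is_clean_scale(src, dst):
    sw, sh = src
    dw, dh = dst
    return dw % sw == 0 and dh % sh == 0 and dw // sw == dh // sh

output = (1280, 1024)
for source in [(1280, 1024), (640, 512), (320, 256), (720, 576)]:
    print(source, is_clean_scale(source, output))
```

The SD PAL frame (720 x 576) fails the check against a 1280 x 1024 projector, which is exactly the mismatch that forces the computer to split and average pixels.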
Interlaced and Progressive HD video part 2
HD video comes in several flavours, and their names give away what is different about them. Interlacing is the process we saw earlier, where the odd-numbered lines are scanned down the screen and then the even-numbered lines follow. This process gives video its hard look, but allows the recording of very fast motion. Progressive scan video draws every line in order from the top of the screen to the bottom, without splitting them into even and odd groups. This means that the motion has a softer, film-like feel, and fast movements can be blurred, which is often nice. Progressive scan footage is also easier for software to manipulate live.
So all the HD formats with an i at the end, 1920 x 1080 50i or 60i, have respectively 50 or 60 interlaced fields per second at 1920 x 1080. All the HD formats with a p at the end, 1920 x 1080 25p or 30p, or 1280 x 720 24p, have 25, 30 or 24 full frames that scan all the way from top to bottom. These dimensions and scan rates are only what a video file will output; you still have to set the codec and container. As an aside, many HD codecs have room in the data stream for multichannel audio, meaning you can embed 6 tracks of audio alongside the video; this can be a single 5.1 mix or multiple pairs of stereo mixes.
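The way two interlaced fields weave together into one frame, as in the picture from the first slide, can be sketched like this: one field supplies the even-numbered lines, the other supplies the odd-numbered lines, and they alternate down the frame.

```python
# Weave two fields into one progressive frame: lines alternate
# between the even field (0, 2, 4, ...) and the odd field (1, 3, 5, ...).
def weave(field_even, field_odd):
    frame = []
    for even_line, odd_line in zip(field_even, field_odd):
        frame.append(even_line)   # lines 0, 2, 4, ...
        frame.append(odd_line)    # lines 1, 3, 5, ...
    return frame

field_even = ["line0", "line2", "line4"]
field_odd  = ["line1", "line3", "line5"]
print(weave(field_even, field_odd))
# -> ['line0', 'line1', 'line2', 'line3', 'line4', 'line5']
```

When the two fields were captured at different moments, which is the whole point of interlacing, fast-moving objects end up in different positions on adjacent lines, giving the "hard" combed look described above.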
Getting HD in and out. Most of us will have experienced HD only in the prosumer world of HDV, ranging from tiny handycams to tiny handycams on steroids with great lenses and proper irises. HDV is a highly compressed HD format, and although many cameras claim to be full HD, outputting 1920 x 1080, they do this as an upscale on playback. Many HDV cameras have an HDMI output; this plays the data through a different decompression process and gives a deeper colour space and full HD dimensions. Other cameras, such as the P2-based models from Panasonic or the XDCAM HD range from Sony, record files onto solid-state cards or discs. These files are still highly compressed, and following are some tips for working with them.
Working with P2, XDCAM and HDV. I am going to reference Final Cut Pro for the following examples: so far it is the most flexible software for working with HD, and it's very easy to use. It also has the distinction of being very easy to find a dodgy copy of. All these files need transcoding before they can be edited easily; the codecs they use are Long-GOP and take a lot of power to manipulate. Transcode the files using Log and Transfer to ProRes 422. If you are working with HDV, capture HDV and work on a ProRes 422 timeline. For other editing platforms, MPEG Streamclip, free software from Squared 5, will allow you to transcode your files into an appropriate resolution.
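As an alternative to MPEG Streamclip, the free command-line tool ffmpeg can do the same Long-GOP-to-ProRes transcode. This one-liner is a sketch, assuming ffmpeg is installed and your camera file is called clip.mts (a hypothetical filename); the encoder name and profile number are ffmpeg's own, not Apple's.

```shell
# Transcode a Long-GOP camera file to ProRes 422 for smooth editing.
# "-profile:v 2" selects standard ProRes 422; audio goes to uncompressed PCM.
ffmpeg -i clip.mts -c:v prores_ks -profile:v 2 -c:a pcm_s16le clip_prores.mov
```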
Cheap ways around expensive problems and trouble-shooting installations. HD Video plugged in.
There are a few options for playing back HD video. Blu-ray is one very expensive and unreliable way, but the pictures do look good; a Blu-ray player can plug directly into an HDMI projector, but you can't do anything but play, which for installation is fine. Media players: there are several media players available that have HDMI outputs and a hard drive inside. You copy your HD file across (in an appropriate codec), plug the HDMI cable into a projector and off it goes. This is a great option for video installations that run on a loop; the boxes start at about $220 and they are easy to use and reliable, mostly. Computer: the most flexibility; you can add live effects, interaction and even synchronise several videos for multichannel work.
HDMI, or High-Definition Multimedia Interface, is the best way to connect HD video sources to output devices. This is a digital signal that carries up to 1920 x 1080p video and up to 8 channels of audio simultaneously. A benefit of using this digital connection from your computer is that it doesn't have to perform the analogue video modulation; it still has to render, but it's a little less work. HDMI and DVI carry THE SAME video SIGNAL. DVI is available on almost every computer made in the last few years, so if you have DVI you can probably run HD video with a cheap plug converter to HDMI. DVI, however, carries no sound.
DVI and HDMI cables are expensive, and for no good reason. DVI can only run about 10 metres without signal boosting, and boosting units are very expensive. Thank you, China: there are many HDMI extension systems available, which, with some cable converters, are by definition also DVI extenders. HDMI-over-CAT5e converters are available from $60 a set; together with some CAT5e (about 50 cents per metre), you can bypass the expensive solutions and have pure HD video running full frame up to 60 metres away. HD projectors are still hard to get, but maybe you have access to one that has DVI; this may give you a larger frame size and a crisp digital connection, still without costing the earth.
Why HD? Well, apart from it just looking better, HD has other benefits. I recently used HD video in a theatre piece as a way around having multiple projectors: all the projection surfaces were on the same plane, so I made smaller videos within a black frame to give the impression of several smaller projections. Really big screens: this is just maths; the bigger you make your screen, the bigger your pixels become, and soon enough each pixel is the size of an apple, but much less tasty. If you need to go big, HD pixels on the same size projection are less than half the size.
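The pixel-size claim is easy to verify: divide the screen width by the number of pixels across it. A quick sketch, using a hypothetical 10-metre-wide projection surface:

```python
# Physical pixel width on a projection surface of a given width.
def pixel_size_mm(screen_width_m, horizontal_pixels):
    return screen_width_m * 1000 / horizontal_pixels

screen = 10.0  # metres wide (hypothetical example)
print(f"SD (720 wide):  {pixel_size_mm(screen, 720):.1f} mm per pixel")
print(f"HD (1920 wide): {pixel_size_mm(screen, 1920):.1f} mm per pixel")
# 720 / 1920 = 0.375, so HD pixels are well under half the width of SD ones.
```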
Plugging in analogue. This is a jump backward, but it seemed an appropriate place to mention a recurring problem with analogue video. As I said earlier, video display devices synchronise their displays to the power supply. Many venues have three-phase power, or even just outlets that get their power from different circuits, so the signal modulation is not synchronous. If this is the case, you will see banding and rolling in your picture. This is easily solved by running extension leads so that your video source gets its power from the same circuit as the video display unit.
How can I…? Common requests
Multichannel video: if the files don't have to be synchronised, use multiple media players; they aren't too expensive (you could always buy them from unnamed large retailers and return them when you're done, claiming you didn't like them). If the files do have to be synchronised, use either several small computers or one computer with several video outputs (newer hardware allows you to put multiple video cards in one machine). I have a laptop and want to run 3 projectors? A great product called the Matrox TripleHead2Go will help you out for around $700, running up to 3 full HD displays from a laptop.
It didn't look like that when I played it on my computer. This happens a lot: computer displays are a lot more flexible than video displays, and reversed field dominance, pixelation and strobing can often be taken care of by a computer graphics card. The information in this tutorial should explain why something is happening, or it could be something more problematic. When preparing video for use with Isadora or other similar software, make sure the frame size, container and codec are all optimised for your setup. When preparing video to be played from a media player, make sure you know the best format and container that the unit will play; often the bundled software does not take full advantage of the unit's capabilities. Again, MPEG Streamclip may help you out with the conversion process.
It’s the only software I have ever paid for! Isadora
The following section may seem like an ad for Isadora; I can't help it, I love it. It seems that if you are creative in the way you use it, it can do anything. There are many other programs available that do many of the same things, but here are a few reasons I like Isadora more. Isadora can output to up to 6 screens simultaneously, provided you have 6 outputs, which can be achieved in a number of ways. Isadora speaks many languages: when setting up interactive installations, you may want several pieces of hardware and software to talk to each other, and Isadora has a multitude of ways of sending and receiving data. Finally, Isadora has a direct player that will play your video files as they are; this means no effects, but it's great for multichannel loops.
Frederick Rodrigues firstname.lastname@example.org I've got to eat. An ad for me.