Training for the Next Century<br />The Ebeltoft Congress<br />September 1-8, 1997<br />The European Film College<br />Ebeltoft, Denmark<br />CONFERENCE PRESENTATIONS<br />edited by Henry Breitrose<br />CILECT, 8 rue Thérésienne, 1000 Brussels, Belgium<br />CONTENTS<br />From the Editor - Henry Breitrose<br />From the Conference Organizer - Sharon Springel<br />Keynote Address - Sir David Puttnam<br />The Digitization of Conventional Production - A.J. (Mitch) Mitchell<br />The Evolving Process - Walter Murch<br />Emerging Digital Culture in Audiovisual Production: A Case Study of the Media Program of Université du Québec à Montréal - Philippe Menard<br />New and Hybrid Forms of Drama - 1 - Chris Hales<br />New and Hybrid Forms of Drama - 2: Uncharted Interactive Cinema: Simulation, Power, and Language Games - Hilary Kapan<br />New and Hybrid Forms of Documentary - Michael Murtaugh<br />Some Short Notes on Developing a Digital Curriculum - John Collette<br />Project Report: Curricular Consequences of the Digital Domain - Rod Bishop<br />The Teaching of “New Media” in a School of Film and Television - Robert Rosen<br />From the Editor<br />Henry Breitrose<br />CILECT Vice-President for Research and Publications<br />In early September of 1997, 110 delegates from 70 member schools met at the European Film College in Ebeltoft, Denmark for CILECT’s Biennial Congress.
The centerpiece of the congress was an international conference, “Training for the Next Century”, that brought together specialists from industry and from film and television training institutions to discuss the nature and future of digital technologies in film and television, and the ramifications for teaching.<br />This document presents edited versions of the lectures given at the Ebeltoft conference. The media with which CILECT is concerned, film and television, are fundamentally audio-visual, as were the presentations of the speakers, most of whom were deeply involved in various aspects of digital technologies; the written word cannot convey such a conference adequately. Most of the presentations were relatively informal, sometimes interacting with the audience, and they were usually richly illustrated with moving images and sounds. Speakers typically worked from notes, and even when there was a fully prepared text they felt free to revise their remarks as required. This presented certain challenges to the editor, who worked from transcripts of audiotape recordings of the conference proceedings.<br />Professional translators frequently differentiate between “translation”, the conversion of words from one language to another, and “interpretation”, which transcends translation and attempts the conversion of meaning from one language to another. In some cases I’ve tried a cross-medium interpretation, attempting to convert the multi-sensory, multi-media presentations of the conference speakers to the single medium of text on paper while preserving the meaning. No one is more aware of the impossibility of the task than I, and I offer the speakers and the readers my apologies for all that was inevitably lost in the translation. One presentation, which consisted wholly of audio-visual demonstrations of digital special effects with explanatory comments, could not be included in this report.
As we used to say in my home town of Brooklyn, “ya’ really shudd’a been there yourself.”<br />The procedure that was followed consisted of initial transcription of the audiocassettes by students at Loyola Marymount University, working under the direction of CILECT Vice-President for Finance Don Zirpola; editing of the texts by me; and submission of the edited texts to the speakers. Most of the speakers returned the edited texts, and their revisions were incorporated in the final texts that appear here.<br />The Executive Board of CILECT expresses its gratitude to the administration and staff of the European Film College in Ebeltoft, Denmark for their assistance, patience, and unfailing good humor.<br />From the Conference Organizer<br />Sharon Springel<br />Chair, Conference Project<br />The title of this conference is “Training for the Next Century”, but it could just as easily be “How are digital and communications technologies affecting the moving image industry, and what ought we to be doing about it?”. Like it or not, these technologies are affecting our art and industries in profound and irreversible ways, which should come as no surprise to anyone.<br />The only surprise is how long it has taken for the fundamentals of moving image production, which have remained essentially the same for more than seventy years now, to undergo any significant change. Film technology has been extraordinarily stable. The basic mechanism of the motion picture camera and the broad outlines of today’s film photochemistry would be recognizable to any film maker of fifty years ago. Even in the case of video, despite the change from chemistry to electronics, the final product is much the same.<br />The one significant area of change has been broadcast television, which has had a profound effect on society.
But has it had an equally profound effect on the approach to training within CILECT member schools, or do most treat it simply as another delivery mechanism for essentially the same sorts of product?<br />We are now on the verge of a new wave of technology, one which will arguably have an even more important impact on the ways in which audiences receive, and indeed interact with, moving image media. This is of course being driven by the rapid developments in digital and communications technology, the pace of which is astonishing even by the standards of the late 20th century. ‘Moore’s Law’, a commonly quoted predictor of change proposed by semiconductor pioneer Gordon Moore, holds that computational power roughly doubles every 18 months, with network technologies roughly keeping pace and real costs remaining constant. What this means is that our graduates now face a rapidly changing world, brimming with many new and evolving audio-visual possibilities.<br />The relationship of the audience to the image is also set to change dramatically with the introduction of choice through interaction.
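The compounding implied by that doubling rate is easy to underestimate, so it may help to make the arithmetic concrete. The sketch below is an editorial illustration, not part of the conference material; the only figure it takes from the text is the 18-month (1.5-year) doubling period.

```python
def capability_multiple(years: float, doubling_period_years: float = 1.5) -> float:
    """Factor by which computational power grows over `years`,
    assuming it doubles every `doubling_period_years` (the Moore's
    Law figure quoted above: 100% growth every 18 months)."""
    return 2.0 ** (years / doubling_period_years)

# Over a typical three-year course of study, capability quadruples:
print(capability_multiple(3.0))          # 4.0
# Over the 1995-2005 decade discussed in these pages, it grows
# roughly a hundredfold:
print(round(capability_multiple(10.0)))  # about 102
```

On this arithmetic, a student who graduates three years after enrolling finds the available computing power has quadrupled in the interim, which is the practical force behind the claim that curricula must anticipate, not merely track, the technology.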
This opens a future which may well include active participation with the material, what Michael Murtaugh, one of the speakers you will be hearing from later on, calls “a co-construction of meaning”, in which the audience member actively participates in determining the sense of what is on the screen, and ultimately even a form of drama in which the audience member becomes an actual protagonist in the story, rather than someone who passively identifies.<br />The configuration of the audience is also changing, from the current model of ‘one to many’ (through either a large cinema audience or an even larger home television audience) to a ‘many to many’ model based on the vast expansion of internet-enabled audio-visual programme possibilities (including the emergence of individual publisher/broadcasters).<br />Within this chaos of change there is both an intoxicatingly rich array of possibilities and an ever-increasing confusion of meaningless rubbish. That is the nature of art. As ever, the success or failure of any piece will rest on the quality of the fundamental idea at its heart and the artistry with which that idea is realized. In a very real sense, the core skills taught by CILECT member schools are now more important than ever in infusing meaning into this maelstrom of raw technological potential. Character, dramaturgy and basic visual grammar are the fundamental qualities that these new possibilities are crying out for.<br />The conference programme which we have prepared will explore these subjects in two stages: firstly, through an examination of how conventional forms of film and television are being affected by the impact of digital tools, from fundamental non-linear approaches to programme construction through to the future implications of synthetic character and virtual set technologies.
The second part of the programme will progress into an examination of the new forms of audio-visual languages which are emerging, and how basic forms of both drama and documentary are beginning to evolve as a result. It is our hope that you, the membership, will find these explorations thought-provoking and, most importantly, that they will stimulate a constructive dialogue through which CILECT can begin to ready itself for the challenges of the 21st century, the interactive media century.<br />Keynote Address<br />Sir David Puttnam<br />David Puttnam is Chairman of Enigma Productions and has been an independent film producer since 1971. He was Chairman and Chief Executive of Columbia Pictures between 1986 and 1988. He was a Founding Member of the European Film Academy in 1989, and of the Club of European Producers in 1993. In 1995 he was awarded a knighthood for his services to the British film industry. He has also been honored in France by appointment as Officier de l’Ordre des Arts et des Lettres.<br />We're here over the next few days to discuss developments affecting the future of the moving image. But I'd like to begin by casting back to the past, indeed right back to the very origins of cinema.<br />When one of the medium's founding fathers, Louis Lumière, hired Félix Mesguich as a cameraman, he warned him: "
You know, Mesguich, we're not offering a job with much in the way of prospects, it's more of a fairground job; it may last six months, a year, perhaps more, perhaps less."
<br />As it turned out, Lumière was fairly accurate about Mesguich's job prospects but spectacularly wrong about cinema itself. It's worth remembering men like Denmark's Ole Olsen, who created the film giant Nordisk and helped turn cinema into a truly international form of entertainment. But even he would surely have been surprised to learn that the 'fairground' job has, one hundred years later, become one of the most influential industries in the world. What's really paradoxical is that despite all the evidence around us, I still sense that we haven't quite got our heads around the extraordinary significance of our medium.<br />We boldly proclaim to just about anyone who will listen that we are standing at the threshold of a multimedia age in which the moving image will unquestionably become the world's dominant form of communication, entertainment and education.<br />But I suspect that many of us aren't entirely sure what that might come to mean, and some of us may even have a sneaking suspicion that it doesn't mean anything very much at all! There is still an uneasy feeling that a great deal of information technology may prove to be, in its own way, just another fairground attraction: lots of bright lights and promises but, in the long run, not all that much substance. Well, this evening I'd like to look beyond the hype and the bright lights and unpack some of the opportunities which are likely to be opened up by these developments in the world of the moving image.<br />As the distinctions between film, television, video, telecommunications and computer software evaporate in the face of the digital revolution, whole new industries are being created.
Forty years ago the symbols of national wealth and progress in Northern Europe were things like steel and shipbuilding, or companies producing exportable consumer goods.<br />Now, the rising and dominant corporate symbols of success are, almost without exception, related to information: media companies, telcos, entertainment companies, software houses. The initial convergence between the film industry and the interests of the telephone and electrical giants, which occurred in the 1920s when the Warner Brothers screened their first talkies, is now being repeated, but this time on an infinitely bigger scale. The dominance of the written word as our primary means of interpreting the world around us is giving way to a more diffuse, visual culture whose final shape is, as yet, impossible to foresee accurately.<br />If you're still not convinced, just take a look at some of the predictions about growth in our sector: the European Commission predicts that its audiovisual sector will grow by some 70% in the ten years from 1995 to 2005, and that this will be complemented by a 55% growth in the income earned by producers.<br />What is clear is that as money and commodities are able to move around the globe with ever greater ease, the distinguishing characteristic of any nation or community today lies in the quality of its intellectual property; in other words, the ability of its people to use information and intelligence creatively, to add substance and value to global economic activity, rather than just quantity. We are on the threshold of what has come to be called the Information Society. As Professor Charles Handy of the London Business School has succinctly put it, “Intelligence is the new form of property.” It's now commonplace to assert that the audiovisual industry is America's second greatest export. In fact, it would probably be more accurate to say that the number one position is now occupied by the "
intellectual property industry,” of which films, television programming and other audiovisual software represent a massive and still growing share.<br />All this new technology, particularly the digital technology about which we'll be hearing a good deal over the next couple of days, means that film and television are now simply two components, albeit extremely important components, of a much larger industry. Movies are part of a new industrial sector which, as the European Commission's 1993 White Paper on Jobs, Competitiveness and Growth pointed out, has the potential to generate literally millions of highly skilled and highly productive jobs.<br />With more than 18 million Europeans out of work, that is a dimension which cannot politically, socially, culturally or economically be ignored. All of us in this room are part of an industry that has the potential (in some cases already realised) to deliver an astonishing and rapidly multiplying range of services which includes banking, retailing, news, leisure services, public information and, crucially in my view, high quality education and training.<br />Of course, such innovations have marked the whole history of the moving image. Some of them have changed the very nature of filmmaking and some have disappeared without trace: 3D, Cinerama, Circlorama and a host of other revolutionary innovations are all, apparently, as dead as the brontosaurus. Some have even been seen as terminal threats to filmmaking.<br />Here's what Charlie Chaplin had to say about his first visit to a sound stage in the 1920s: "
Men dressed like warriors from Mars sat with earphones while the actors performed with microphones hovering over them like fishing rods. It was all very complicated and depressing. How could anyone be creative with all that junk around them?"
<br />From a distance, this multimedia revolution may look like just another process of technological change, but we would do well to remind ourselves that, however it may seem, the technology is simply what makes the revolution possible. It is not the revolution itself. As the seminal report produced for the European Commission by Martin Bangemann put it, the “Information Society” is about new ways of “living and working together.” It is not, fundamentally, about new ways of transmitting or storing information. And unless we start thinking actively and continuously about those issues of social change, we are likely to be idly seduced into the belief that the technology is the driving force. Worse than that, we may allow technology to become the driving force without any clear idea of where it may be taking us.<br />This obsession with technology for its own sake recalls an exchange I came across recently. “Will computers make films one day?” someone asked. “Yes,” came the reply, “and other computers will go to see them.”
<br />Once again the early history of cinema offers some instructive lessons about what happens if you allow technology to become the driving force, pulling everything behind it. Earlier this year I completed a book about the history of cinema, and for me, looking at those early days was a real eye-opener. Almost from the very start, many of those responsible for helping to create our industry failed to capitalise fully on the immense economic potential offered by the medium they had created, preoccupied as they were with technology. For instance, the famous screening held in Paris by the Lumière Brothers in December 1895 (attended by just 35 people!) is now universally hailed as the official birth of cinema as public spectacle. But what's possibly less well known is that the Lumière Brothers themselves didn't even bother to show up. They were too busy with their photographic research.<br />Even a few days after that screening, with crowds queuing around the block to see his new invention, Louis Lumière, effectively the father of cinema, confidently (if ruefully!) proclaimed "
The cinema is an invention without any commercial future." For years he continued to cling to the belief that cinema was, for the most part, a scientific curiosity, at best a minor branch of photography.<br />In Europe, much of the development of cinema was left largely in the hands of scientists, inventors and magicians. In those early days, cinema was principally seen either as a scientific tool or as a device for producing mind-boggling visual tricks, the forerunner of today's special effects movies. In fact, it took quite a long time for cinema to realise its potential as a wholly distinct form of art and entertainment. Not for the first time, it was the public that provided the answer. They soon grew tired of so-called novelty films, of the seemingly endless stream of dancing bears, boxing kangaroos and exploding policemen that passed for entertainment. They wanted stories: bigger and better stories. That's something we'd do well to ponder over the next day or two, and I know it's one of the issues that Henning Camre and others have been preoccupied with in planning this conference.<br />But the truly remarkable thing about the growth in the field of moving images is not that entertainment has simply become one element among many, but that it is fast becoming the driving force for the whole information society, top to bottom, be it education, marketing or even, to the disquiet of many, the news. In fact, in my view the new hybrid multimedia sectors contain a potential for growth which already makes them far more important, and even more interesting, than the traditional feature film industry, certainly as most of us have known it.<br />What all of this means is that the skills and techniques of the entertainment industry, the stories, the music, the characters, the special effects, are now essential components for success in every one of these fast-multiplying new services. It's become almost a truism to acknowledge that what matters now
is not the hardware but the software, and this software is as dependent as ever on the abilities of writers, designers, actors, musicians, artists, cinematographers, animators and a host of other creative professionals.<br />As Terry Semel, the Chairman of Warner Bros., has succinctly put it: "
When we consider the future of our industry as a whole, there is only one thing of which we can be absolutely sure: he who controls the software controls the future."<br />However, it would be overly complacent to believe that creativity and control are necessarily synonymous; in many cases, those who create the software will not be those who end up distributing it.<br />But to me the most significant development in the Information Society is the increasing convergence between entertainment and education. When resources that have traditionally been associated with the best in entertainment are applied to education and training, genuinely surprising results begin to flow. Anyone who has tried to learn a foreign language will know that being able to see and hear people speak, with the help of an imaginatively constructed piece of software, is a lot more effective than sitting alone with a textbook.<br />The educational potential of the medium has long been recognised, even if not realised. In case that sounds like an over-convenient claim, let me give you a couple of examples. In the early days of cinema, Thomas Edison predicted that its primary and most valuable use would be as an educational tool. As he put it "
It may seem curious, but the money end of the movies never hit me the hardest. The feature that did appeal to me about the whole thing was the educational. I had some glowing dreams about what the camera could be made to do and ought to do in teaching the world things it needed to know, teaching it in a more vivid, direct way."
<br />The way in which CD-ROMs, the Internet and other new media products are now being used in classrooms around the world suggests to me that Edison's vision is finally about to be fulfilled.<br />In the UK, the British Film Institute was originally created with the simple aim of encouraging teachers to realise the educational potential of film, and thereby bring about closer cooperation between the UK's education system and the film industry. Now it seems that moving images are assuming an increasingly central role in educational systems around the world.<br />As information technology becomes more and more essential to the functioning of our education system, the need for software and support materials is going to grow at a prodigious rate. If we are ever to harness the multimedia revolution to the needs of our education systems we cannot afford the luxury of treating it as "
just another teaching aid."<br />We need to develop new approaches to learning and teaching which will be relevant to, and can flourish in, an age of interactive technology which gives ready access to ever greater quantities of information. Interactivity now offers the prospect of personally tailored teaching by means of online and offline services, to any student, at home as well as at school, however remote their geographical location and however advanced or obscure their interest. The possibilities this creates to revolutionise learning, and teaching, are almost incalculable.<br />More than twenty years ago an early pioneer of virtual reality in the United States wrote that "
A display connected to a digital computer gives us a chance to gain familiarity with concepts not realisable in the physical world. It provides us with a looking glass into a mathematical wonderland."
<br />As a child at school, it never occurred to me for a moment that mathematics might be any kind of wonderland. To me, and I suspect to most of my classmates, it was not much more than a confusing nightmare. To end that dismal situation for schoolchildren would be of no small consequence to society.<br />Indeed, whether we like it or not, education is, in every respect, a fast-growing global business. Together with training, it accounts for about 15 per cent of the European Union's total GDP. Not only does this proportion look certain to continue rising across the developed world, but bear in mind that demand for education in the developing countries is also increasing at an exponential rate. The United Nations Development Programme estimates that over the next thirty years as many people will be seeking some kind of formal educational qualification as have done since the dawn of civilisation. If Europe's disparate education systems ever really decide to take on board the possibilities of audiovisual technology, they would, almost overnight, create the potential for a world lead in possibly the most valuable growth industry of all.<br />One senior Hollywood executive told me a couple of years ago that in his opinion the best-known names and highest-earning stars of 2005 and 2010 would not be traditional movie stars at all, but a still-to-emerge generation of teachers and educational superstars who would dominate the world's TV channels, CD-ROM, cable, the Internet and a myriad of other delivery platforms yet to be born.<br />Looked at in this light, the balance of resources between the USA and the rest of the world is, at least for the present, very unlike the imbalance that exists in the traditional entertainment movie business.
Take the example I know best, Great Britain.<br />In the UK we are lucky enough to have some of the world's finest talent in television and film production, in educational publishing, in animation and even in the authoring of electronic games. We have a unique range of relevant institutions, including the BBC, the world's premier public service broadcasting organisation, and the Open University, probably the world's most experienced distance learning institution.<br />Perhaps most important of all, for the moment we enjoy cultural ownership of the language which much of the world uses and ever more people want to learn, the language in which 80 per cent of all electronic information happens to be stored. As a senior computer company executive recently put it, Britain truly has the potential to become the “Hollywood of Education”.<br />Unlike many other countries, particularly the United States, Britain has one of the most effectively 'rigged' education markets in the world. We have a sizeable national school system with a national curriculum, with policy driven from the centre, so one would assume it would be relatively easy to implement a strategic plan to exploit our many natural advantages.<br />By contrast, in the United States education is largely organised on a states'-rights basis, which means that the market remains highly fragmented, and a centrally coordinated plan for bringing together information technology and education is, for the time being, that much harder to implement.<br />What's likely to happen if Britain and other nations around the world fail to grasp the opportunity that's there to be seized?
If you accept that it was the enormous size of its domestic market that generated growth and uniquely benefited the US entertainment industry over the last 100 years, and if you accept that technology-based learning as a global reality is something of an inevitability, then we are left with a very simple choice: do we, most particularly in Europe, manufacture our own multimedia resources for education, training and retraining at all levels, or do we sit back, wait, and eventually import them from the US and the Pacific Rim? Are we going to be foolish enough to hand over this new, potentially massive business, with all of its likely cultural, let alone commercial, implications, to the US in exactly the same way as we have handed over effective control of our movie industry?<br />It seems to me that thinking in these terms is justifiably sobering but not unreasonably alarmist: 1995 was the first year on record in which Britain ran a deficit on its international trade in learning materials, previously a significant source of export earnings.<br />But of course, the multimedia revolution raises a host of other issues which are already affecting the lives of millions of people across the world. Later this week I understand we'll be hearing something about the rich possibilities which are opening up for the world of moving images as a result of the Internet. Of course, this is a subject which I can hardly avoid in a talk of this kind, even if, for me, logging onto the Net seems immensely complicated. I'm not a natural tech-head, so believe me, I'm praying for it to become far easier to use!<br />There's absolutely no doubt at all that the Net is changing the way that people view and receive information.
It's also beginning to raise colossal questions that need real and urgent answers; at the moment it's like a child whose mind is still being formed.<br />Seriously important issues, such as those in the complex field of international copyright, are going to have to be resolved over the next few years.<br />If there's no accrued financial value for the creator in putting something on the Net, it's unlikely anyone will be able to afford the luxury of spending five years working on their next project. And, in the end, this is likely to stifle rather than encourage creativity. The irony is that the Net, a fantastic medium for the dissemination of information, could begin to close down knowledge unless there is an organised commercial respect for intellectual copyright. Obviously, all this needs to be worked out in a way that makes good long-term sense to both the user and the creator.<br />And while it may be worth running promos on the Net to advertise a movie, there's not a lot of point in putting the movie itself up there, since there isn't as yet any reliable means of charging for the viewing.<br />But what's absolutely certain is that we won't be able to grasp any of these opportunities unless we place training right at the heart of our approach to the new technologies. Indeed, without doubt the most striking paradox that now confronts our industry concerns the quality and quantity of our workforce. It can be summed up as "
too many and not enough"
: too many technicians with skills and working practices which have been marginalised or simply by-passed by the pace of technological change; too few writers, directors and producers with a sound instinct for the needs of the marketplace.<br />In my view it is beyond question that the broader the talent base within the industry, the more cost-effective and efficient it is likely to become; good training has as great an impact on costs as it has on quality and, of course, on job satisfaction. We are already wrestling with the consequences of skills shortages in the digital and non-linear areas. This in turn carries with it all the dangers to creative freedom and risk-taking that flow from spiralling wage inflation. Unless we in the industry think more sensibly about our future, and invest massively in training, the present boom will inevitably turn to bust.<br />I'd like now to return specifically to the cinema. Fifteen years ago cinema box office revenue accounted for more than 95% of total revenues generated by the industry world-wide. It's my belief that by 2010 some 95% of revenues will come from what used to be termed "ancillary"
markets such as TV, video, satellite and cable. As the variety of delivery systems grows, so this so-called "ancillary" market
will rapidly become the dominant market. It would be foolish to believe that the film industry is destined to become simply subordinated to the dazzling array of possibilities opened up by these new multimedia offerings.<br />Although cinema box office is, in itself, of relatively declining commercial significance, the big screen remains the most desirable shop window for the moving-image industries as a whole. The best and most ambitious creative talents of the age, both in front of the camera and behind it, still see the cinema as the true focus of their energies, and, to that extent, they set the agenda for much of the overall communications business. In a very real sense, movies are a locomotive pulling much of the entertainment and multimedia industries in their wake. It's this that makes them crucially important.<br />Almost since cinema began it has been dominated by the United States. Why? Because unlike their European counterparts, the early pioneers of the American film industry were not, by and large, filmmakers, but film exhibitors. They understood that their primary task was to fill their theatres, and in consequence they developed a close and thoroughly healthy respect for their audience.<br />They took them seriously, not by pandering to them but by careful observation and systematic research, by the efficient and imaginative exploitation of each new advance in relevant technology, by telling good stories, and for the most part telling them well, and by attempting to challenge as well as please their audiences. These are lessons that we must learn to apply across the board if we are ever to have a chance of competing with the United States.<br />In 1957, André Bazin, one of the wisest and most perceptive of them all, observed that:<br />"
The American cinema is a classical art. Why not then admire in it what is most admirable: not only the talent of this or that filmmaker, but the genius of the entire system."
<br />It was the genius of the system that fed and sustained the strength of the Hollywood industry and has kept it the dominant force in our global industry for so long: a system which has benefited from having a consistent commercial strategy, a system which has developed as an industry, and a system that has paid attention to development, marketing and training as components that are every bit as essential as individual genius.<br />In the teeth of such organised and powerful determination, the rest of us must stop regarding ourselves as cultural treasures and start acknowledging our responsibility as a strategic industry.<br />To accept that argument requires that we seek out and discover that intangible quality of confidence. If we can develop sufficient confidence in our future, then we are that much more likely to summon up the necessary energy to re-organise, retrain and re-orientate ourselves. And the more we have the energy to do that, the more likely we are to recognise new opportunities, and grab them. The more we see and take the opportunities, the more confidence we are likely to acquire, and so on. We can, in this way, create something approaching a truly virtuous circle.<br />Another lesson we can learn from cinema concerns the vital importance of building effective distribution networks. Europe and many other parts of the world have traditionally concentrated nearly all of their energies on one part of the film industry - production - and in doing so have largely ignored changes to those marketing, distribution and exhibition networks which could possibly have put more home-produced films on our screens, and thus achieved higher levels of box-office popularity. We've allowed production and distribution to become fatally divorced. The result is a series of cottage industries instead of a modern fully integrated business. 
Now the task of creating effective marketing and distribution for our cinema products has become all the more urgent, and all the more possible, in the wake of the development of the new technologies which will undoubtedly create huge new markets.<br />The problem is that over time, in a truly competitive sense, we have crucially damaged our ability to fully exploit even the best of our movies, simply because those of us in the industry outside the United States have proved unable to deliver the right kind of product in sufficient volume, and on a consistent basis. The Americans, by contrast, have developed a marketing machine which is capable of successfully turning its hand to delivering just about any kind of entertainment.<br />As a producer, I can make the most thrilling or challenging movie imaginable, with the best crew and the most talented cast, but unless I have a well-thought-out arrangement with an effective world-wide distribution resource, one which understands how to simultaneously market a film in different countries and, when necessary, to different audiences, I am, to a great extent, wasting my time.<br />In my view, it would be an act of madness to make the same mistake with the new media. In this environment, 'distribution' will come to mean less and less the physical distribution of film and videotape, and will more and more become a question of disseminating electronic impulses in a myriad of interactive configurations through a variety of addressable cable and wireless systems. And one thing is certain: in whatever form these products do eventually materialise, in the end it will be those companies in possession of substantial software catalogues who will reap the real rewards of the multimedia revolution.<br />Never forget that it's the catalogues of films built up from the 1930s onwards which account for much of the profits, and most of the overwhelming security, of the U.S. studios. 
For what those libraries represent, and have always represented, is a treasure trove of product which can be freshly exploited each and every time a new technology emerges. So when television arrived, they licensed giant packages of films; when video came along, those same films were marketed to that medium; and so on.<br />Now, with cable, pay-TV, laser discs, CD-I and other new technologies, the studios are set to clean up once again, and they'll go on doing so just so long as people remain enthralled by beautifully told stories like It's a Wonderful Life, Casablanca and Red River. In fairness it should always be remembered that these libraries came into being not as the result of any deep strategic thinking or visionary inspiration, but as an extraordinary, and quite accidental, by-product of the decision to store the films in case they could be re-released at the cinema at some future date. During the thirties and forties each film title was valued on the books at $1 against just such an unlikely eventuality! <br />Most other film industries around the world have failed to build up film libraries of any comparable size, principally because they didn't develop the kind of large, well-capitalised companies capable of producing, marketing and retaining the rights to a consistent stream of films over a number of years. That's why we need to devise mechanisms which will allow us to retain control over the intellectual property rights of these new products, ensuring that the benefits flow directly into the national economy. But amid all these developments, let's not forget the power and influence of storytelling. Stories and images are among the principal means by which the human soul has always transmitted its values and beliefs, from generation to generation and community to community. Movies, along with all the other activities driven by stories, images and the characters that flow from them, are now at the very heart of the way we run our economies and live our lives. 
If we fail to use them responsibly and creatively, if we treat them simply as so many consumer industries rather than as complex cultural phenomena, then we are likely to irreversibly damage the health and vitality of our own society. <br />In his wonderful book The Secret Language of Film, the great French screenwriter Jean-Claude Carrière, who, as you know, also happens to be president of FEMIS, warns us that:<br />"
Cinema is an art on the move, a hurried art, a ceaselessly jostled and dislocated art. This wealth of invention, which film has known since its beginnings, this apparently unlimited extension of the language's instruments (although not of the language itself, which keeps on running up against the same barriers) often engenders a kind of intoxication which once again leads us to mistake technique for thought, technique for emotion, technique for knowledge. We mistake the outward sign of change for the underlying essence of film. Constantly dazzled by technical progress, we filmmakers tend to forget substance and meaning, which are true and rare, and see only the same routines in the latest technological disguise."
<br />At the same time, Carrière reminds us that the actual language of moving images can change extremely fast - so much so that in the days before television, newly released prisoners who had seen no films for a decade or so frequently had difficulty following new pictures: the films simply moved too fast for them.<br />But like it or not, the crucial social outcomes affecting the movies will be won or lost in the arena of global commerce. It's thirty years since the French media entrepreneur Jean-Jacques Servan-Schreiber published his seminal book The American Challenge, which analysed Europe's economic decline in the face of the overwhelming penetration of American goods and ideas. "
The confrontation of civilisations will now take place in the battlefield of technology, science and management," he concluded. "The war we face will be an industrial one."<br />That war has already begun. Already the Americans have managed to secure a free trade agreement on information technology and, once again, the Hollywood studios will be key players in the debate. They are broadening their spheres of activity, keenly aware that a video game can gross more than a blockbuster movie. Just as they've always done since the First World War, when it comes to international trade the studios are likely to put aside their narrow commercial differences to maintain unity on key issues, arguing for free access to international markets and an end to unilateral taxes and subsidies wherever they find them commercially inhibiting.<br />All of us in this room have the opportunity to influence that struggle; all of us can help create a programming, software and information-based industry capable of competing at the leading edge of what may well turn out to be the twenty-first century's most exciting, profitable and influential industrial and cultural sector. Surely we should be developing strategies which will encourage the intelligent, and dare I say sensitive, exploitation of these vast assets, both for our own benefit and for the benefit of the world as a whole.<br />Of one thing we can be absolutely sure. Whatever predictions we make about the impact of all these new technologies are bound to contain much that is wrong, in the same way that the Victorians got it wrong, whether they declared themselves wholly in favour of steam and progress or whether they thought railways spelt the end of civilisation and social order as they knew it. We get it wrong because we can only describe the new and unknown in terms of what is already familiar to us. 
<br />The American writer Henry David Thoreau once complained of the folly of building a telegraph line from Texas to Maine without first establishing whether there was anybody, in either place, who had anything useful they wanted to communicate. <br />The intervening century and a half has proved beyond any shadow of a doubt that the lack of anything substantial to say is no bar to developing ever more sophisticated forms of communication - especially if they can carry advertising. <br />CILECT has the opportunity to fulfil an important role in all of this. As an organisation it has always played an invaluable role in bringing together educators and curriculum designers from around the world to discuss developments in the film and television industries. Now it is beginning to cast its net much wider, to embrace new technologies and the new opportunities and responsibilities that come with them. We have to rethink who and what we are as an industry and who and what we truly represent. Your deliberations over the next few days are, in my view, a timely and important step in that direction.<br />The Digitization of Conventional Production <br />A.J. (Mitch) Mitchell<br />A. J. (Mitch) Mitchell, BA, FRPS, FBKS, is director of special effects at The Moving Picture Company in London, UK. He started in BBC television and moved to The Moving Picture Company as a director/cameraman. He helped design the first commercially available digital edit suite, collaborated on the design of a digital video-to-film transfer system and was one of the first exponents of creating video/digital effects work in post-production rather than live at the time of photography.<br />We're here to talk about motion pictures, and digital production, now. It has been referred to as the "digital revolution". I think that "revolution" here is a past-tense description rather than a future projection, in that digital technology has stealthily crept up on motion pictures and the revolution has happened. Good heavens, the revolution is almost over, although most people aren't aware of it and the media report it as if it is actually happening! <br />Motion pictures have become almost totally dominated by digital technology. I must explain that by motion picture I mean all things that have to do with moving pictures. So I include in that description film and cinema - film as the acquisition format, cinema as the presentational method that uses chemicals and mechanics - as well as video and television, where video is an acquisitional and manipulative medium, and television is the broadcast format for getting it into people's homes. The broadcasting isn't totally digital yet, but by the end of the century, most places in the world will have some form of digital television. <br />When I say digital technology, I refer not just to computers as such but also to embedded computer technology, that is to say, computers which are miniaturized and put into things that you don't recognize as having to do with digital technology. Thus you could even say that production facilities have new refrigerators with microchips in them - they're computerized without even knowing it, even if it's just to guarantee that the Coca-Cola bottles are kept cold! That's probably not what you think of as the digital revolution in film, but sometimes I think that most film crews are more worried about the coolness of their Coca-Cola than the coolness of the images they photograph. <br />Digital technology is affecting every aspect of motion pictures: not just production, not just the economics. And it's not just an enabling technology - something to make things cheaper or more efficient - it also makes possible certain kinds of creative functions, and I will come back to that. 
<br />Most people think of digital technology as producing horribly surreal sorts of imagery, or brightly colored pictures with lots of flashing effects - the sort of thing you get on MTV, or in effects-laden films. These are mostly easily recognizable, and quite often featured in the trailers and posters for movies and the promotional announcements for television programs. But digital technology and computers are now used in every conceivable stage of the motion picture process, from the very first word and thought to the last customer leaving the cinema or the television transmitter shutting down for the night. <br />The first creative stage in the making of a television drama or the production of a feature film is the creation of the story - it may come from a play, it may come from a novel, it may be an original screenplay written for the production. But any of these are almost always written on a word processor. Very few writers still use pen and paper, and so, in a way, the electronic molding of the story starts right back there at the beginning. Of course, a number of writers have argued that what you produce with a word processor is often very different from what you produce with a pen. If you write with a pen, then modifications usually involve wholesale rewrites, and lots of things are subject to change when the manuscript is copied out, because total rewriting forces you to go through the material again as you're rewriting it. Writing on a word processor, if there are errors and faults, like spelling errors, they are picked up by the spelling checker and you just fix them; if you were doing a manuscript by hand you'd have to rewrite it, because crossing out and correcting would get too messy. Even at that very simple level a word processor starts to influence what you do. The digital word processor enables writers to electronically "cut and paste" - they shift words, characters and scenes around easily, although even with hand-written manuscripts, people sometimes literally cut them up and paste them. <br />The film editing process can begin when a person who may not even be interested in films writes a novel, which is then bought by a studio, re-written by several hands, cut and re-cut, pasted and re-pasted to make a script, and that's then re-cut and re-pasted in the editing room to make the film. The editing process, the ability to cut and paste, goes all the way through production, and the method by which the cutting and pasting is accomplished has a profound effect on the creative process. <br />In television news, of course, events are electronically filed into the newsroom, and so this cut and paste element goes even further - the journalists out on location produce their material and submit it electronically. It goes into the database at the newsroom, and the editors edit it without it ever being printed out. Many newsrooms are now electronic, and the digitized news goes to all the interested parties directly - to monitors for the floor managers to see what's happening, to a text display for the gallery or control room where the director sits - and may never be printed out at all, at any stage! <br />So right at the very beginning there is a digital electronic impact on the creative process. As we go further into production, there is the communication between the writers and the producer, often these days by e-mail, but certainly word-processed, even in the form of letters backwards and forwards. Sometimes they even speak to each other, but decreasingly so. These letters and e-mails are all put through computers, which leads to the stage of contracts. Contracts used to be quite simple, but they have become increasingly complex. Some people have described them as "
contracts from hell"
. Actors' contracts may be 40 pages long because, again, the cut and paste technique is used to put in every clause imaginable. On the basis of legal experiences, their own or others', lawyers and accountants will now continually add clauses and build these contracts up until they appear to be completely insane. <br />Having made the contracts, producers begin serious budgeting using computer spreadsheets. Very few budgeting people do it by hand anymore - they may still sometimes have hand-drawn charts on the wall, but they always have spreadsheets of some sort. Scheduling, planning, logistics and contact lists are also kept on computer, frequently using purpose-built software especially designed for film and video production. <br />The next stage in the production process is storyboarding, standard procedure on large productions. Test scenes are often made these days using small, lightweight digital cameras. Casting is always done on video. Location hunting nowadays uses small, Walkman-sized DV cameras - small digital cameras that can shoot moving pictures as well as stills. Some location managers use digital still cameras. In the past, many location managers would produce large panoramic pictures by taking many still pictures from slightly different angles and overlapping the prints, but these have always had bad joins. The exposure varies slightly, because we start off on the buildings and then the next picture's got some sky in it, and the lab prints it slightly darker. Then the next one's got the buildings on the other side, and it's slightly lighter, and they don't all quite fit together. Today, many location managers make multiple digital exposures and use a picture editing program like Photoshop, joining the images together so they'll match and outputting them as a single print - a big panoramic picture - altogether a much more sensible procedure. <br />The art department quite often pre-visualizes on computers. 
They use CAD (Computer Assisted Design) systems to design their sets. CAD also allows them to mock up camera angles so that the director may see what the sets will look like before a single nail is hammered. Costume design, model building and all the other functions of the art department are increasingly using computers at some stage. <br />CAD has given rise to a version of what some people call the "scratch method" of movie making. This is where, let's say, you start off with storyboards: you film the storyboards, and you edit the storyboards together with a scratch soundtrack that you make, using your workmates or family to play the parts. This is edited together to give a vague form or template for the film that you're making. As you proceed to doing tests with actors you replace those scenes, and as you get into the composing you produce the final soundtrack and add it. Gradually, shot by shot, the template is massaged and replaced. Some directors go off and shoot rehearsals with the actors, again on these small hand-held DV cameras, and these are gradually put into the film. Some directors even make the film on video and then gradually replace the shots with the film shots. This is a very economical way of working, because you can shoot miles and miles of tape and edit it for months and months, and then when you come to shoot the film you only have to shoot exactly the shots that you need. Very efficient, but I don't know how much it really helps the creative process, because shooting with hand-held video cameras is very different from, and gives a different feel from, say, anamorphic large-screen cinematography. This scratch method had long been the subject of experiment and was used in part by some directors, but it is fully enabled now by digital technology. <br />In the actual production process, film cameras are covered with digital bits and pieces now. The main mechanism is still mechanical - pulling the film down frame by frame, and so on - but all the enhancements that have been added in terms of exposure, frame rates and stability are governed by digital gizmos, and many cameras, such as the latest Arriflexes, allow you to plug in a portable computer and do diagnostics on the camera, and all sorts of other clever tricks, such as changing the shutter angle in the opposite direction to the aperture while maintaining the same exposure - very clever stuff. 
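The shutter-and-aperture trick described above is simple arithmetic: exposure time per frame follows from shutter angle and frame rate, and each halving of that time must be bought back by opening the lens one stop. A minimal sketch of the relationship - the function names and the 24 fps default are illustrative assumptions, not any real camera's interface:

```python
import math

def exposure_time(shutter_angle_deg, fps):
    """Exposure time per frame for a rotary-shutter film camera."""
    return (shutter_angle_deg / 360.0) / fps

def compensated_fstop(f_stop, old_angle, new_angle, fps=24):
    """F-stop that keeps overall exposure constant when the shutter angle changes."""
    # Stops of light lost by narrowing the shutter (negative if it widens).
    stops_lost = math.log2(exposure_time(old_angle, fps) /
                           exposure_time(new_angle, fps))
    # Opening the lens by one stop divides the f-number by sqrt(2).
    return f_stop / (math.sqrt(2) ** stops_lost)

# 180 degrees at 24 fps gives the classic 1/48 s exposure;
# halving the shutter to 90 degrees loses one stop, so an f/4
# lens must open to roughly f/2.8 to compensate.
t = exposure_time(180, 24)
f = compensated_fstop(4.0, 180, 90)
```

The point of doing this in camera firmware is that both values can be ramped smoothly and simultaneously during a shot, which is impractical by hand.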
<br />Video cameras, of course, are a simpler matter. The latest video cameras are in fact digital cameras anyway: they use solid-state sensors, CCDs, and they record onto digital formats such as DV or Digital Betacam, or even to the D1 format. As for the actual equipment that's used for setting the pictures - there are digital light meters, digital consoles that control the lighting, and all sorts of gizmos for synchronizing. Almost all Directors of Photography (D.P.'s) carry portable computers these days, even if only pocket ones, which have things such as moon, sun and tidal information that can be searched in databases. Wherever they are in the world they can tell precisely what moment the sun is supposed to appear. If there's cloud cover and you're shooting to get a particular exposure, then of course that information is still very useful: even though you can't see the sun, you can know where it is in the sky and what effect it's going to have on the light. <br />The motion picture industry is not a large industry in terms of industrial products, so there are not that many professional film cameras and video cameras made. I mean that if you compare the production volumes of Arriflex to Ford, you're looking at very different orders of magnitude. Over the years the amount of design that's gone into film technology can hardly be described as massive. Cinematography has produced relatively few designs: the Bell & Howell camera, then the Mitchell camera, then Panavision, then other cameras like Moviecam and so on; there are only a few manufacturers, and a limited number of models. But because of digital design, small companies can now afford to continually upgrade and innovate, and so we're actually getting much more product and a more rapid turnover of new products and enhancements, even on the old-style mechanical systems. <br />A particular example is lenses. The quality of lenses is phenomenal compared with what it was, because of computer design. 
It's no longer a craft of grinding bits of glass and seeing what the effect is. Obviously, theoretical designs could always be drawn up, but the aspheric lenses that give a completely flat image wouldn't be possible without computers, because computers are used to simulate how a virtually infinite array of points of light pass through the actual glass. So we can have accurate T-stops, wide-angle lenses that give flat images, and f/1.2 lenses with phenomenal resolution. Most people are not fully aware of the impact of computers on lenses and the aesthetic possibilities of modern cinematography. <br />Sound has gone completely digital in most instances. It is often recorded on DAT, time-code-controlled digital audiotape. The synchronization between the cameras and the recorders is absolute, thanks to digital timecode. I recently worked on a job where there were cameras and soundmen all over the place on a racing track, with lots of noise. It wasn't possible to have hard connections among the recorders and cameras, so from time to time the sound recordist would wander up to the operator of the "master camera", the one that served as the reference for all the others, and say, "my computer needs to talk to yours."
He'd plug his little plug into the camera and pull it out again, all synched up. It didn't need to stay connected anymore, and he could wander off to a safe distance from the racers, rather than lying on the racetrack next to the cameraman. <br />Those are all the less obvious things. Now for the obvious ones. There are repeating, or motion-control, camera heads, with which the camera operator physically pans the camera and sensors precisely record the move. The cameraman can then press a button and the camera will make precisely the same move again, as many times as required, enabling multiple passes exactly mimicking the original movement. Then there is full motion control, which is obviously a more complicated version of that, where the entire camera moves about on a crane rig and can be made to repeat that move, theoretically, an infinite number of times - though in practice the repeats tended to become increasingly imprecise, to the point where one might as well have hand-held the camera. But now that digital technology enables us to fix the problem, nobody uses motion control anymore: they want to hand-hold the camera and then fix any problems in digital post-production. Some may conclude that the term "D.P.", which we're all familiar with (Director of Photography, with the "of" somehow missing), now stands for Digital Photographer. <br />The director also benefits from digital technology on the studio floor. Digital communications are used all the time - cell phones, for example, and radio feeds of the sound being recorded by the audio crew - and there is the video tap on film cameras that allows the director to have wonderful color pictures of what the camera is seeing, because CCDs (charge-coupled devices) are light-sensitive computer chips. These images are recorded to enable instant access to the frames that the director might want to see from previous takes. Although they sometimes still record onto U-matic or VHS, I've been very surprised that a lot of the video tap crews that go out now actually have hard disks that can record compressed video pictures as well, which provides amazing creative possibilities for testing transitions between shots and making practical judgments. <br />On the set or location there is the production secretary, who does continuity notes and such. They tend to use computers these days, because it is really convenient to be able to go back up and down the production notes, make changes, and cut and paste. <br />I wish to discuss what has become the contentious issue of video rushes. Most D.P.'s will tell you this is one of the down-sides of the digital world. Nowadays, many shoots order video rushes rather than prints directly from the original negative. If you're working on the stages at a big studio, then you may be able to see material well projected on a proper screen, but the majority of the time we get video rushes, which is nightmarish for the cameraman. Who knows how the telecine operator - the person who supervises the conversion of the image from negative film to positive video - has set his settings? Imagine, if you will, a cameraman doing a horror film, trying to make everything green. 
<br />In the old days, the lab had a sympathy for these things: the people in the lab looked at the notes, understood the intention, and made accurate, if green, rushes. But nowadays video rushes are done by trainees at two o'clock in the morning. They may have a dodgy eye, or they may not have any interest and therefore aren't reading the notes, or not understanding them, so that everything shot green is corrected to a proper flesh-tone. They look at the tapes and the director says, "But this is supposed to be a horror film - why does it look like a musical?" The cameraman doesn't know where he is! What does he shoot today? He doesn't know what it looks like. There are tools being developed by a number of companies to try and help with that, but again, digital technology can come to the rescue. Some cinematographers - not that many yet, but a substantial number all the same, around the world - are trying experiments such as using a small digital camera, a laptop and a color printer. They take shots of all their setups, and in the lunch break, or whenever, they load these pictures into the laptop, tweak the color using something like Photoshop, and keep printing it out until they get it looking the way they want. They send that print with the can of film, as a reference for the facilities house, or wherever the video rushes are being made. The telecine people can then see roughly the colors the cameraman wants, so at least it's in the right ball park, and they understand that while the gray scale is gray, the shots after it are all supposed to be green. So this is a major creative element. <br />Even the telecine, in which film images are transferred to video, is becoming in many respects an extension of the shooting. There is a new type of telecine called Spirit, an all-digital system - the sensors and optics are designed by Kodak with the colorimetry of film in mind. It has a much higher resolution than standard television, so it allows you to digitally reframe the image and zoom in an awfully long way. Because television has a much lower resolution than film, you can zoom deep into a 35mm frame if the end use is television presentation. Of course, most of these things could have been done in the past with film opticals, but at an impossible cost. Digital special effects, especially in re-composing shots, is one place where this technology has an influence. <br />Images can originate in places other than the studio floor. There is, for example, computer graphics, which is analogous to the normal shooting method. 
You have the modeling, which is equivalent perhaps to building the sets; animating, which is like directing the actors; then lighting, which is... like... lighting; and rendering, which is the equivalent of the shooting process - making decisions about the lens angle, the height of the camera, where it's placed and so on. The computer graphics, of course, are completely post-produced digitally because, well, they're digital in the first place. <br />Traditional animation has been slower to change, but all the large studios, such as Disney, are now using computer graphics of some sort. Some of them are even using 3-D, but there are other systems, such as Animo and Toonz, coming along that make possible, at the very least, in-betweening and paint-and-trace; many studios, although they actually draw in the traditional method, put the drawings into digital form so that computers can do automated paint-and-trace. <br />Much stop-frame animation for television use, and even for large-scale cinema production, is done with digital electronic cameras. Instead of putting latex or rubber over wire armatures, they're putting sensors on them. The sensors feed computers, and the computers control computer models which are much more flexible, can easily be altered, and don't suffer some of the mechanical problems common to physical armatures in the past. This method was used on The Lost World and Jurassic Park. It allowed the traditional techniques to be used, but under computer control and with digital precision. <br />These days, time-lapse cameras are controlled by computer chips as well. The combination of live action and CGI is the area of motion capture, in which a number of experiments are currently being conducted, from many different approaches. <br />Of all aspects of film and television production, post-production is the one that we almost take for granted as digital, because of the nature of computers and data. 
The impact of digital technologies started with post-production because the data quantities are very small, which made possible early experiments with comparatively primitive equipment. Audio has gone almost completely digital now because it has the smallest data needs of all. A home computer can edit and mix digital audio in highly sophisticated ways. <br />Television follows next, because the information in its frames, even compressed, is greater than audio's; but very soon the threshold of price against power (processor speed), against the amount of data involved (compression algorithms) and against the expense of holding it (RAM and disk storage) will be crossed, and television will be the next to fall. Post-production of feature film requires much higher resolution still, especially if the digital image is to be written back to film for theatrical projection. <br />In television, effects are almost entirely done digitally now, and many stations are converting to full digital production. Audio and video compression allows us to move down into the lower cost levels - the few places where digital technology hasn't been used - and to lower production costs still further. Compression allows more information to fit on the same amount of disk or tape, allows it to be moved faster, and allows everything to be done more economically. <br />In editing there are a number of purpose-built digital machines - the Avid, Lightworks and others. I'm sure you all know about them, so it isn't worth spending a lot of time on them. <br />Digital audio post-production in film, or sound editing, takes on a new dimension with digital technology. Considerable music composing is now done with samplers, with synthesizers, with digital systems of one sort or another. There's Automatic Dialogue Replacement using sound systems that squeeze the sound and slice it and cut it and expand it, and there are effects tracks and post-synching of films, all of which are made much easier on a computer.
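The ordering just described - audio first, then television, then film - falls directly out of the raw, uncompressed data rates involved. The following is a rough sketch; the specific formats and figures are my own illustrative assumptions, not from the talk:

```python
# Rough uncompressed data rates in bytes per second, illustrating why
# audio went digital first, television second, and film last.
# All format choices below are illustrative assumptions.
audio = 44_100 * 2 * 2           # CD audio: 44.1 kHz, stereo, 16-bit
video = 720 * 576 * 2 * 25       # SD video: 720x576, 4:2:2 8-bit, 25 fps
film = 2048 * 1556 * 3 * 2 * 24  # 2K film scan: 16-bit RGB, 24 fps

for name, rate in [("audio", audio), ("video", video), ("2K film", film)]:
    print(f"{name:8s} {rate / 1e6:7.1f} MB/s")
```

The result is roughly 0.2 MB/s for audio, 21 MB/s for standard-definition video, and 460 MB/s for a 2K film scan - each medium roughly two orders of magnitude hungrier than the last, which is exactly the order in which each crossed the price/power threshold.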
<br />At the level of manipulating and distributing film images, digital technology has a profound impact. A 35mm film frame can store and display some 40 to 50 times the amount of information of a video frame, but as computers get more powerful, computer memories get bigger, and prices keep going down, we have reached a point where now, as with television, film special effects rely on digital technology. The photographic image is scanned into a digital format from the camera original, the digital processing takes place using a fast and very powerful computer, and the result is scanned back to film using laser or electron beam recorders. <br />At the exhibition level there is digital broadcasting and DVD, which gives much better quality than previous methods of transmission. Exhibitors want to go the whole hog and digitally transmit feature films at motion-picture resolution from satellites, or down fiber optics under the street, to the cinema for electronic projection, thus saving print costs, shipping charges and associated labour, but that is the least developed area. <br />There are some excellent large-screen digital projectors available, but it will be some time before electronically projected images can match film images in quality. <br />In the commercial environment of film and television, all of the technology is ultimately about acceptance and approval by the audience. Here we find digital technology deeply involved in computerized booking systems in theaters, and in the subsequent analysis of box office performance or television ratings, which finds its way to us in the digitally composed pages of the trade papers. <br />I've intentionally attempted to steer clear of the traditional area of "special effects."
I want to stress the fact that all production in film and television has embedded within it a very large amount of digital technology, and we should be aware of it and of how it impacts production and creative decisions. Special effects like go-motion, blue screen, difference matting, digital projection, image stabilization and virtual sets - all of the techniques basic to the effects in the action picture - are currently being used to perpetuate a fashion, a fad, just like all the others there have been through the history of the cinema. <br />There was a previous fad in the 1950's for action and special effects films, just as there have been fads for cowboy films, film noir in the 40's, cinéma vérité in the 50's and 60's. There have been all sorts of fashions in movie and television style and content. Effects-laden films are one of these fashions, probably enabled by the simple fact that you can do it, so let's experiment with it. <br />But the real revolution is what underlies this, which is for the whole of production to use these technologies to be more efficient and more creative. The best example of how far this has gone is in the UK, in advertising, where 70% of the advertisements on television have been digitally fiddled with in some way or another, and the majority of these are seemingly ordinary-looking films. The advertisers had the money, the time and the will to adopt this technology, and so it's pretty well completely in use now. They use it for salvaging, for fixing the image - correcting the imperfections of the real world, if you will - for making multiple versions where they want different labels, for pack replacements, for creating synthetic images, and for fiddling with the color. <br />An example of this in Britain has to do with car number plates, which have a letter on the end of them from which you can tell in what year the car was made.
Cars are not redesigned annually, but every August a new number plate comes out, and we are told that people are psychologically affected by this. In a TV commercial, if they see a car with the previous year's letter on the number plate, people - in their subconscious - think, "Oh, this is an old car," so if the General Motors car has the previous year's number plate and the Ford has this year's version, there is a fear that people will buy Ford cars because subconsciously they think, "Oh well, the other guys are selling old cars and Ford is selling new ones." The problem could be fixed by putting a word where the number plate belongs, as they do in the car sales room, but that's unacceptable as well, so the psychologists say, because people think, "Ah, well, you can't believe the commercial because it's not a real car, it's a special one that's been made for the commercial, and the one you get would be different," even though you only see it in long shots. So when they make the commercials, they use the most current number plate, and when August comes along, all the special effects houses shut their doors to everybody else for two weeks while they digitally replace the number plates on all the car commercials with the new year's version. That is the sort of invidious and hidden thing that goes on. <br />Then there is Coca-Cola. They make global commercials and ads, but the packaging differs from country to country. Some countries have bottles, some countries have cans, sometimes it's called "Coke" and sometimes "Coca-Cola." It varies, and so they make generic commercials and the packs are replaced digitally, sometimes by the hundreds. <br />The final thing to be said about digital technology and film is that digital will become a kind of electronic intermediate in the not-too-distant future. What we'll do is put everything, whether it's got effects or not, into computers so that it can be post-produced and scanned out, and any number of virtual "
negatives" can be made. So the prints that you will see in the cinemas will all be struck off original negatives, and not off dupes or printing masters or interpositives or internegatives as they are now. The whole quality of projected cinema will improve thanks to digital manipulation. <br />Digital technology has crept up on the movie industry and is already an important item in the craft tool set, one that is there to stay. It can only become more powerful and more important, and it will change how things are done, forever. In the areas of sound, still photography and television it has already become a major creative and technical resource. <br />Film is only lagging behind television and audio in the digital world because of its much greater appetite for data and the need for technology that can handle that data efficiently - a constriction that is relaxing with every minute that goes by. One day the whole process will be digital, from image capture to exhibition. You had better believe it, and start training the generation who will have to depend on this technology. In ten years' time there will be nothing else! <br />The Evolving Process<br />Walter Murch<br />Walter Murch is a sound designer, editor, screenwriter, and director. He earned his undergraduate degree at Johns Hopkins University and is a graduate of the School of Film and Television at USC. He has been a longtime collaborator of filmmaker Francis Ford Coppola and of his USC film school classmate George Lucas. Murch played an important role in the creation of some of the most important films of the 1970s, including American Graffiti (1973), The Godfather Part II (1974), and Apocalypse Now (1979). His sound editing was central to the artistic success of Coppola's The Conversation, which earned him an Academy Award nomination.
He won the Oscar for Best Sound for the sound design of Coppola's Apocalypse Now (1979), and he was nominated for Oscars for film editing for Fred Zinnemann's Julia (1977), Apocalypse Now (1979), Jerry Zucker's Ghost (1990), and The Godfather Part III (1990).<br />For Anthony Minghella's The English Patient (1996) he won both the Academy Award for Best Achievement in Editing and the Academy Award for Best Achievement in Sound.<br />Walter Murch has been a guest teacher and lecturer at a number of CILECT schools, among them AFTRS-Sydney, DFS-Copenhagen, UCLA, and Stanford University.<br />Thank you. It's wonderful to be back in Denmark. I first came here as a teenager in 1960, then returned for a symposium on film sound at the National Film School in 1980, and returned again in 1986 to teach for a couple of weeks, so I feel very much at home. <br />In the late 1960's, Francis Coppola had become dissatisfied with life and work in Hollywood and was anxious to find another way of making films. He wound up in Copenhagen after the première of one of his early features at Cannes, and stayed for a couple of months at a film commune called Lanterna Magica. Anyone remember it? Yes? Anyway, it was located in an old house just outside the city. Lanterna was using all of the new technology of the time, and there was something about the whole setup that really got Francis excited. He came back to the United States determined to see if he could recreate the spirit of Lanterna, so he (and George Lucas and I, and all our families) moved up to San Francisco in 1969 to start a new production company. As an homage, he called this company American Zoetrope, which was his version of Lanterna Magica. <br />I think, as you can tell from what Mitch said in the previous lecture, we're in a period of transition. It is going so fast right now that the transition might be complete in a couple of years, but at the moment things are still in flux.
We cannot yet dispense with film, which is the medium that photographs the image in the first place and which eventually takes the images and sounds into the theaters. As an intermediate step between shooting and exhibition, though, film is rapidly becoming anachronistic. In some ways, the situation is similar to the state of affairs with domestic lighting around the turn of the twentieth century. In 1905 you would have seen chandeliers that had both gas and electricity in them. The electricity was new, it was exciting, it produced a more brilliant light, but it was also not quite as dependable as it should have been, and so there would also be gas - romantic, dangerous, inefficient, but dependable. And so, especially in film editing and sound mixing, we're in a similar hybrid phase where we have to deal with both - with film as a sprocketed, photographic, material medium; and with the electronic image as a digital, virtual, immaterial medium. Somehow we have to find the most effective way to combine the two and not trip over our own shoelaces in the process.<br />The central question that has brought us all here is: Is Digital Non-Linear Editing a Help or a Hindrance? The short answer is Yes. But we should be careful not to oversell digital's advantages, nor to disregard the more "mechanical" means of dealing with all the problems of editing a film.<br />To begin, one of the things that I'd like to emphasize is the astronomical number of ways that images can be combined. This has always been true, and is true no matter what system you use. If a scene is covered with only two shots - one take each from two different camera positions (A and B) - you can choose one or the other, or some combination of them both. As a result you have four ways of using these two images: A, B, AB, BA. However, once the number gets larger than two - and an average scene might have twenty-five shots - the number of possible combinations quickly becomes astronomical.
<br />It turns out there is a formula for this: C = (e • n!) - 1. C is the number of different ways a scene can be assembled; "n" is the number of shots the director has taken to cover that scene - say, twenty-five; "e" is the transcendental number 2.71828..., one of those constants (like π) which you might remember from high school. And the exclamation point after the "n" (the one instance of mathematics becoming emotional!) stands for 'factorial,' which means the product of all integers up to and including the number in question. <br />For instance, 4! = 1x2x3x4 = 24, and 6! = 1x2x3x4x5x6 = 720, so you see the numbers get big pretty fast. The factorial of 25 is a very large number, something like fifteen billion billion million - 1.5 followed by 25 zeros. Multiply that by "e" and you get 4 followed by 25 zeros. Minus one! So a scene made up of only 25 shots can be edited in approximately 40,000,000,000,000,000,000,000,000 different ways. This is roughly the distance in miles from Earth to the edge of the observable universe.<br />If you had 59 shots for a scene, which is not at all unusual, you would theoretically have as many possible versions of that scene as there are subatomic particles in the entire universe! Some action sequences I have edited, though, have had upwards of 250 shots, so you can imagine the kind of numbers we would be talking about: 8.8 followed by a solid page of zeros - 392 of them. Now the vast majority of these versions would be complete junk. Like the old chestnut of a million chimpanzees typing randomly - most of what they write would not make any sense at all. On the other hand, even such a 'small' number as 4 followed by 25 zeros is so huge that a tiny percentage of it (the potentially good versions) will still be overwhelmingly large: if only one version in a quintillion makes sense, that still leaves 40 million possible versions to consider.
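As a quick check of the arithmetic, the formula can be evaluated directly. A minimal sketch (the function name is mine):

```python
import math

def scene_versions(n):
    """C = (e * n!) - 1: Murch's estimate of the number of distinct
    ways a scene covered by n shots can be assembled."""
    return math.e * math.factorial(n) - 1

# Two shots give about 4, matching the listed orderings A, B, AB, BA;
# twenty-five shots already give roughly 4 followed by 25 zeros.
print(round(scene_versions(2)))     # 4
print(f"{scene_versions(25):.1e}")  # 4.2e+25
```

The jump from 4 to 4 x 10^25 between two shots and twenty-five is the factorial at work, which is the whole point of the passage above.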
The queasy feeling in the pit of the stomach of every editor beginning a project is the recognition - conscious or not - of the sheer immensity of options he is facing. This is true whether you are editing on a Moviola or on a Kem or on an Avid.<br />DIGITAL: the advantages.<br />Do digital non-linear technologies help you deal with these super-astronomical figures? Again, the qualified answer is yes, since what you are actually creating in the computer is something called a “virtual assembly” - the images themselves have not been disturbed, only the instructions for what to do with the images. This means that every time you look at a cut sequence on an Avid, the images are being assembled for you as you watch. If you want to do something completely different, the system doesn’t care at all - you are only creating the instructions, the recipe for this particular dish, not the dish itself. There are no trims to put away, no rewinding, no physical splicing of film itself. The film does not get scratched, broken, go out of sync, or become unsteady over time. You can save as many different versions of each scene as you wish and review them with ease. You can also do relatively sophisticated multi-track sound editing and mixing on the Avid.<br />The elimination of busywork like rewinding and filing trims can give an intoxicating sense of freedom, particularly for editors who have been filing trims for many years. But it is dangerous to think that the removal of this weight simplifies the real job at hand, which is to discover the best structure for your film. In fact, that sudden rush of freedom can lure you into some real traps. Let me explain what I mean.<br />No matter what editing system you are using, you will always be confronting the astronomical number of different possible versions of each scene. 
When that astronomical number was physically represented by mountains of actual film, you knew instinctively that you had to have a plan and be organized going in: contemplating the thicket of 500,000 feet of dailies was like peering into the Amazon jungle - who would go in there without a map and adequate supplies? <br />The danger with digital systems is that they seem to turn the Amazon jungle into a video game, a game without apparent consequences. If you lose the game you simply start over again from the beginning! No film has actually been touched! There are no loose ends to re-stitch back together. In a certain limited sense this is actually true, but on the whole it does not mean that there is not a real Amazon lurking behind the virtual one, with real consequences should you lose your way. There should always be planning, no matter what. You have only so much time. There should always be a map. You can never possibly explore all the possible versions of even the simplest scene: remember our formula, e • n! There should always be detailed notes of what you have seen and where you have been. Theseus needs his thread to get out of the Minotaur's maze. Otherwise, editing just becomes a thrashing about, a slamming-together of images and sounds for their momentary effect - an effect which has no long-term resonance and impact on the film as a whole. <br />So, paradoxically, the hidden advantage of editing real, sprocketed film is that the sheer weight and volume of it encourages you to take things seriously and to plan ahead before you jump in.
All of us who have grown up as editors of sprocketed film have had to develop some kind of strategies for dealing with our Amazons, and these strategies should not now be discarded in the name of digital efficiency.<br />What I'd like to do now is show you some of these ordinary film techniques and strategies which I still continue to use when I edit digitally.<br />DATABASE: Theseus's Thread.<br />In the early 1970's I started to develop a database system that would allow me to keep a record of all of the notes that were ever made for every shot in whatever film I was working on. I used this index card system from "The Conversation" through "Apocalypse Now," then in 1982 shifted to a computer for "The Right Stuff," but it was still essentially the same system.<br />I have brought the database for English Patient with me. Let's see if we can get it up on the screen here.<br />OK. What you see here is the number of shots that were taken for the film: 3,073. I have called up this record at random: setup number 1045. Actually, this is one of the last setups to be shot, so if you divide 3,073 shots (takes) by 1,045 setups (camera positions) you come up with the average number of takes that were printed for each camera position: approximately three. This is setup 1045, take one - an effects shot that was intended for the sequence at the beginning of the film where Almasy's plane gets shot down. <br />Aside from the technical specifications, the three key areas of each record (i.e., shot) are: First Viewing, Second Viewing, and Director's Notes. That last category is pretty self-explanatory - anything that Anthony Minghella said about this shot will be found in this area here. First Viewing and Second Viewing are my own notes, taken when I see the dailies for the first time and then just before I am about to cut the scene.
With the notebook computer, I am able to take the whole database with me into the screening room and type in the dark, silently.<br />Let me choose something different, a scene at random - I'm just going to put a "find" request in here - scene 56. I can't remember what that scene was. OK. It's the English Patient being interrogated by Military Intelligence, and as you can see, the database tells us there were 38 shots for that one scene. Using our e • n! formula, there's a huge number of ways that this scene could have been cut together. All of the numbers down here are technical numbers - the code numbers relating sound to picture; the key numbers which relate to the negative; the day it was shot; the lab roll number - any comment or number ever associated with this shot gets put in this master record. And every shot ever taken for the film has a record like this.<br />So let's look at my notes for this scene. Can you read what's up there on the screen? The first shot number is 432, take 2. The length of the shot is 154 feet.<br />During dailies, I would turn off the screen illumination for my laptop and sit there silently typing anything that popped into my mind about the material I was looking at. For instance, here, just the one word "waves" - the shot opens up on a full shot of ocean waves. "People walk by to the <<< (left)" - a nurse and two soldiers enter frame left and walk across the street. Then in the next take, take 4, the "composition is tighter with a zoom back." <br />Now let me just read some notes at random from other scenes: "Coming down • serious, with flare • what is that..?" "Nuzzling each other with smoke • what is the concept here?" "Hana sees Kip in room • nice light" "Same as previous, with Patient lisping again" "His hands folded....
meaningless words." "Camera noise • what is happening with the sound?" "Nice see the little bugs outside • good for 'too many men' • good look at the laugh" "Fall out of bed, like a fetus." Etc.<br />So, these are a few of my first impressions, my First Viewing notes.<br />I should emphasize that this is, for an editor, one of the most important things that you can capture when you're working on a film: how you felt the first time you saw something. It goes beyond logic; it has a primarily personal, emotional impact. If you can capture any of the fleeting thoughts that are going through your head when you watch a shot for the first time, you have a chance of recapturing that feeling months later, when the film has been assembled and you have forgotten your initial reactions. This is the closest you will ever get to how the audience is going to feel when they see this shot for the first time. Otherwise, you're so necessarily compromised by the process of actually making the film that it's very difficult for you to retain objectivity. <br />In this column to the right are the notes that I take when I'm actually beginning to cut the scene, the Second Viewing. As you know, films are shot out of sequence, so the experience of watching dailies (the raw footage of the previous day's work) is often quite fragmented: parts of incomplete scenes, second unit material, effects shots, camera tests, etc.<br />When I'm getting ready to cut a particular scene, however, I will assemble all of the material for that scene alone and review it one more time, now being more analytical and specific in my comments. So here, for instance, we have - "Hana between the cabins, looks <<<, @ 397" - these little arrows (<<<) meaning "left" are how I compensate for my peculiar inability to tell left from right very quickly - a kind of graphic representation. This moment of Hana between the cabins happens at 397 feet into the shot. Anytime I make a second viewing note, I also type in the footage.
For instance, this next note: "She exits, leaving E.P. (English Patient) <<< and the interrogator >>> in dialogue, 429." <br />Next note: "I like the sound of the Patient's voice." One of the things that concerned me during the shooting of English Patient was the understandability of the Patient's voice. This was partly due to the amount of makeup that Ralph Fiennes was struggling with. But I was also sometimes concerned by the degree of affectation in his voice, and I didn't know whether this was going to be a problem when the film was all put together. So my note here is that I liked the way his voice sounded in this take. <br />"Cut camera at 'Why? Are you German?', 492 feet." They must have stopped early in this take.<br />"Hana smiles as she gives towel" - she had a nice smile as she gave the Patient a towel. One of the problems in this scene - it's all coming back to me now - was that she appeared overly reserved. I was looking for any little fragment where she might be a little brighter, emotionally. <br />"E.P. voice more breathless, lispy, grrr," I wrote, because I didn't want him to lisp, 523 feet. <br />Also, "Cut camera on 'Are you German?'," so I guess these two shots ended early. <br />Next shot: "Similar composition as previous." <br />"See more down the rows of cabins" - they've moved the camera a little bit so there's more depth to the scene. <br />"Interrogator shifts position on 'This was your garden?'" <br />"Hana back in >>> on 'You were married then?'" <br />I won't go on with this, but I think you can get an idea of the amount of detail that I go into before I ever start cutting. Both emotional detail (how I felt when I first saw the shot) and clinical detail (what happens physically within the shot): Lover and Surgeon. It was a good discipline, because English Patient turned out to be the most restructured film I've ever worked on.
The sequence of scenes in the screenplay was severely altered in editing, so I was always on the lookout for little bits and pieces of film that could allow me to recombine things in ways different from those originally intended, and these notes were essential to making that possible. <br />As with any database, you can interact with it in many different ways: if we wanted to find anything that had Hana in it, we'd type in "Hana" and . . . there we are: 807 shots. You can choose any character or action, or any combination of characters and actions, and use the database to pull information and see what shots you have that fill any of those requirements. <br />PHOTOGRAPHS: the graphic overview<br />I have here some photographs that I took off the workprint of English Patient, so I'll hand them out for you to examine closely. They are part of a system of still photographs that I use. Right after seeing dailies, I choose one frame from each setup to answer, graphically, the theoretical question, "Why did they shoot this setup? What moment were they really looking for?" Here, for instance, is a shot of Katharine looking at the cave paintings, and there's something in the expression on her face that told me that this was what they were after, this moment. Here's another: a close shot of a brush doing some painting - here again a smile, one of those precious smiles from Almasy. You'll notice that each of these photographs has the number of the shot in the right-hand corner. On the left-hand side is the code number corresponding to this frame in the film. So, since there were 1,045 setups shot for English Patient, I took more than 1,045 photographs (some shots had multiple photos) and then mounted them on big panels arranged in scene order. The result is a "visual database" or "reverse storyboard" of the film, which allows you to pinpoint very quickly things that are too elusive for words to describe.
The color of something, an ineffable expression on someone's face, details of costume and set decoration. How many closeups did they shoot for a certain scene, how many long shots? A single glance at one of my boards will give you the answer. If you imagine the panels as big contact sheets, it is very similar to the way that still photographers work.<br />LITTLE PEOPLE: correct perspective.<br />Another problem common to all editing, no matter what the system, is the difficulty of dealing creatively with a small image. There is always a great disparity between the small image that the editor sees in the cutting room and the huge image that will be shown in the theaters. And this size difference has an effect on how the film is perceived. It is the difference between painting a miniature and a mural. On a small screen, your eye easily takes in everything at once. On a big screen, you can only take in sections at a time.<br />If a scene has a hard time coming together rhythmically, the problem frequently is that the editor has lost the correct size relationship with the frame. He is working on the miniature and has forgotten that this will be a mural.<br />I get around this problem very simply, by cutting out two white silhouettes, a boy and a girl, and placing them in the correct size relationship to the Avid screen - the relationship they will finally have when the film is projected in a theater. So, if I am looking at a screen 22 inches wide in the editing room, I will make my little people four and a half inches tall. This makes the screen look as if it is 30 feet wide. <br />Related to the issue of screen size, one of the questions people keep asking me about digital editing is: "Are movies getting faster? Are films cut faster, with many more quick cuts, because it's digital and they can be cut faster?" Well, that's certainly true as far as it goes - it is easier to cut quickly because you don't have to make all those splices and file all those trims.
But actually, I don’t think that is the main reason: most of the “quick cut” problem comes from the common mistake of looking AT the screen rather than INTO the screen. Let me explain this in more detail.<br />Television is a “look at” medium, cinema is a “look into” medium. Very different. You can think of the television screen as a surface which the eye hits and then bounces back. In a feature film, particularly one in which the audience is fully engaged, the screen is not a surface, the screen is a magic window, sort of a looking-glass through which your whole body passes and becomes engaged in the action with the characters on the screen. If you really like a film, you’re not aware that you are sitting in the cinema watching a movie. <br />One of the functions of MTV (music videos) is to attract your attention - when you’re watching MTV you’re usually looking at a small screen some distance away, there’s lots of competition all around, the lights are on, the phone may be ringing, you might be in a supermarket or department store. MTV has to show dramatic, visually shocking things within that tiny frame in order to catch your attention because of the much narrower angle of vision. So MTV is an extreme example of television in general - hence the quick cuts, jump cuts, swish pans, staggered action, etc.<br />There’s a completely different aesthetic when you’re in a theater, however: the screen is huge; everything else in the room is dark; there are (hopefully) no distractions; you can’t stop the film at your convenience. 
And so, understandably, the editing of a feature has to be very different than the editing of a music video, and these little people to either side of the screen keep reminding me of that: this is not television.<br />Generally, I like this kind of solution: it is simple, and you don’t think it would amount to much, but in the long run it helps tremendously because it solves problems even before they happen.<br />QUESTION FROM AUDIENCE MEMBER: “Could you edit with a video projector like this? In a big room with a really big screen?” <br />That’s a good question. But editors are strange creatures: we seem to like small, dark rooms. Yes, there’s no reason even now that you couldn’t be editing like this, with a thirty-foot screen. The rental rates for the room would be expensive, however. I have found that working with my little friends, my little paper leprechauns, is just as effective as having a very large screen. They are just a reminder of what I have to do, the mental adjustment I have to make.<br />ANOTHER QUESTION FROM AUDIENCE: “Were there any surprises when you went from looking at the Avid to looking at sprocketed film?”<br />No, I’m happy to say not very many. Actually, on English Patient the only thing that caught me by surprise was that in the Avid I mistook the actor’s line of sight a couple of times. I thought he was looking far left and he turned out to be looking a little closer to camera than I had thought. Actors’ eyes easily fall in shadow on the Avid, but not on film. Because the shadow was not so deep, I could actually see the pupils of their eyes. Other than that, I was amazed at how well everything translated to the screen - I believe that it was my little leprechauns that helped me.<br />STANDING UP: endurance<br />The last thing I’d like to mention is that I edit standing up. I don’t believe that editing film is in any way like editing the written word. They have the same word in English - editing - but the process is very different. 
A film editor is a kind of performer - a performer in the same way that a cook is a performer, a dancer is a performer, even in the way that a surgeon is a performer, and if you think about it, all of these people stand up to do what they do. There is no reason that a cook couldn’t sit down to cook or a surgeon couldn’t sit down to do surgery, but the tendency is to stand because there’s a different emotional relationship to the material when you stand. It’s also healthier for you - you can sit whenever you want to (I have a comfortable architect’s chair), but most of the time, and especially when I’m at the decisive moment of making a cut, I will always be on my feet. It’s almost like being a gunslinger - I want to be able to stand and react as quickly as possible. It also helps to have your whole body engaged in the process. When you sit down, you’ve effectively amputated the lower half of your body. Rhythmically, you’ve become a paraplegic.<br />I started out editing on the Moviola, which encourages you to stand by virtue of the way it is built. In the early 1970’s, though, I started editing on Steenbecks and Kems, which are built like desks, and I became more and more frustrated - something was wrong, but I didn’t know what it was. I began to feel physically constricted and it finally dawned on me that the problem was that I was sitting down. So I built two plywood boxes and raised my Kem up off the ground fifteen inches so that I could stand at it, like this. With the Avid or any of the digital machines it’s very easy to do: you just put the monitors on stands at the appropriate height, get an architect’s table and chair, and there you are!<br />I recently finished a short film on Fred Zinnemann, who was born in 1907 and died earlier this year, and it struck me forcefully - I had worked with him on Julia in 1976 - how short the whole evolution of film has been. 
Fred was born right at the beginning of cinema and was influenced to go into filmmaking by some of the very people first engaged in the art - Griffith, Eisenstein, Von Stroheim, King Vidor. He started making films in the late 1920’s, I worked with him in the mid-70’s, and I’m standing here talking to you in the late 1990’s, so really we’re looking at a total of perhaps four generations of people who have ever worked in film. And Fred’s life span covered almost the whole of cinema history.<br />Film editing has also gone through four generations of evolution. Up until about 1925, film was edited by eye alone - there was no machine to help you, you just strung the film out like a piece of cloth and cut it and stitched it together, like making a suit. You examined the still frames with a magnifying glass, imagined how they would look in motion, and cut with scissors where you thought it would be correct, glued all the shots together and then went to a projection room and looked at what you had done, and then came back to your room, made some adjustments, and then projected it again. I should add that during those years film was made out of nitrate, so even had there been editing machines, it was understandable that no one would have wanted to lock themselves in a room with thousands of feet of explosive film and a machine that was possibly capable of igniting all of it. <br />The Moviola entered the scene in the late 20’s and early 30’s, largely as the result of the invention of film sound. You could not now continue to edit the old “tailor” way - you needed some kind of machine that would play the soundtrack in sync with the picture. Enter the Moviola, whose reign lasted from the late 20’s until the late 60’s. Then the European flatbeds - the Kems and the Steenbecks - increasingly dominated the scene from the early 70’s until the early 90’s. And now, in the mid 90’s, the digital machines have really taken over. 
I think 80% of the feature films being done in the United States are being edited digitally, 20% are being done through the old mechanical systems. <br />DIGITAL: problems<br />I remember seeing my first non-linear editing machine (a CMX) in the late 1960’s - and I thought then that digital editing would take over much faster than it has. But there were a number of things in the early years that held things back. <br />The amount of memory that those machines had was limited. You simply couldn’t store the entire film on a hard disk, so the film had to be broken down into chunks in order to deal with it, which caused serious procedural and creative dislocations. “Godfather III”, which was edited on a Montage non-linear system, had this problem as late as 1990. But the price of computer memory has dropped so much in the last five years that all of this has changed. It is very easy now to store the dailies for an entire feature film (upwards of fifty hours of material) on an array of hard drives.<br />Also there was a bottleneck problem in the work flow. The early machines were so expensive that you could usually afford just one machine per film. As a result the editor and the assistant had to work split shifts, which meant that the assistant normally had to work at night, with all the problems that would follow from that kind of arrangement. <br />Reliability was another problem. Until recently there was difficulty in linking the image correctly with the sound and also in having the final edit decision list, from which the negative was cut, be reliably correct. These problems have largely disappeared in the last four or five years as the result of changes in software.<br />Digital editing is sold as a time and money saver, and so it can be under certain circumstances, mainly if you forego the expense of printing dailies on film and simply transfer direct from the negative onto the hard drives. 
But for films over a certain budget (say $20 million) workprint is a smaller percentage of the overall cost, and so it does not make sense to eliminate the viewing of dailies on film. <br />But this means that you now have two different systems, film and digital, and you have to serve both with the appropriate manpower and technology, which usually more than eats up the savings you might have otherwise made. This problem is still with us (part of the “chandelier effect” I mentioned earlier) and will probably remain until film itself - as the primary recording and distribution medium - is bypassed by digital cameras and projection.<br />Critical mass is harder to pin down. Within any technological society, how much support is there for any specific system? Until recently, digital editing had not yet achieved this critical mass. If you had a problem, there were only two or three people to call, and everybody else with a problem was calling those people, and so reliable support fell below a certain threshold. Throughout the 1980’s and early 90’s, it was easier, given the low critical mass of digital, to simply stay with film unless you were an intrepid explorer (thankfully, there were some!). <br />Once the critical threshold is crossed, however, the urge to change becomes almost inexorable, and it seems to have reached that point now.<br />However, as a result of this, Kems and Steenbecks are no longer being maintained at the same high level as they used to be. Also, there’s a whole generation of film students, coming into the business as assistants, who don’t know very much about sprocketed film. So the technical and logistic support for film editing is beginning to wither away, which forces even more editors into the digital realm.<br />THE FUTURE:<br />Sound editors have always been used to thinking in what I would call vertical and horizontal dimensions at the same time. 
The sound editor naturally moves forward through the film in time - one sound follows another - and this would account for the horizontal dimension. But he also has to think vertically, which is to say, “What sounds are going to be happening at the same time?” There might be, for example, the background of a freeway; but along with the freeway there might be birds singing, a plane passing overhead, footsteps of pedestrians, etc., etc. Each of these is a separate layer of sound, and the beauty of the sound editor’s work, like a musician’s, is the creation and integration of a multidimensional tapestry of sound.<br />Up until now, however, picture editors have thought almost exclusively in the horizontal direction: the question to be answered was simply, “What’s next?” As you can tell from my math at the beginning, that’s complicated enough - there are a tremendous number of options in the construction of a film - but in the future that number is going to get even more cosmic because film editors will have to start thinking vertically as well, which is to say: “What can I edit within the frame to make the shot better?” Earlier, Mitch presented a dramatic example with his music video, where there were all kinds of repositionings and freeze frames. But special effects work can be (and for some time now has been) sophisticated and subtle, not even noticeable as an effect, allowing the director and editor to say, “I don’t really like that sky after all,” or “I think this should be winter, so let’s get rid of the leaves in this shot.” In the near future, machines like the Avid, which are good at the horizontal dimension, will fuse with machines like the Harry, which are good at the vertical dimension. 
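As a rough sketch of the idea - purely illustrative, in modern Python, with invented shot names, frame counts and layer labels, and not any actual Avid or Harry data model - the horizontal and vertical dimensions might be represented like this:

```python
from dataclasses import dataclass, field

@dataclass
class Shot:
    name: str
    frames: int                                  # horizontal extent: how long the shot runs
    layers: list = field(default_factory=list)   # vertical: effects/sounds stacked on the shot

# Horizontal dimension: the order of shots in time ("What's next?")
timeline = [
    Shot("desert_wide", 48),
    Shot("close_up", 24, layers=["replace_sky", "remove_leaves"]),       # edits within the frame
    Shot("freeway", 36, layers=["bg_traffic", "birds", "plane", "footsteps"]),
]

def total_frames(timeline):
    """The horizontal question: how long is the cut?"""
    return sum(s.frames for s in timeline)

def layers_at(timeline, frame):
    """The vertical question: what is stacked up at this moment?"""
    t = 0
    for s in timeline:
        if t <= frame < t + s.frames:
            return s.layers
        t += s.frames
    return []

print(total_frames(timeline))    # 108
print(layers_at(timeline, 50))   # the layers of "close_up"
```

A fused machine would let the editor work along both axes of such a structure at once: reorder the list, and rework the stack inside each element.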
There will be some unforeseen consequences of this kind of development, such as: can one editor actually do all this, or will the work be divided up among two crews, a Vertical Team and a Horizontal Team?<br />In the old days, if you wanted to do a special effect like replacing the sky with another color, you had to use a special camera during production, VistaVision or 70 mm, to get a large format negative to work with so that the grain of multiple reprintings wouldn’t show. Now none of that is an issue. There also used to be a tremendous amount of time spent in shooting special effects - if somebody flew through the air, as you saw with those little people in Mitch’s example, they were attached to cables and consequently the cameraman had to light the scene so that the cables would be as invisible as possible. Now, with digital effects technology, you make the cables as big and brightly colored as you can possibly make them, because it then becomes easier to see and digitally remove them. <br />The Holy Grail, of course, is an Avid-Harry editing machine that actually produces the final product, not just a sketch of the final product. To some extent, this has already been achieved in television, but not in features, because the resolution has to be so much higher for the big screen.<br />We’ll break for coffee now and maybe we can get the machines going when we come back. <br />(BREAK)<br />During the coffee break I was talking to some people about the pluses and minuses of the digital systems and I should have mentioned earlier that one of the things I have not yet solved with digital is something that was a crucial part of the creative process for me with linear systems (Steenbecks and Kems). On a Kem, all of the material is stored in 10 minute rolls. 
When I would look through my notes, I might find something that said, “Excellent look with little tear in the corner of her eye - roll 250 at 563 feet.” I would put that roll on the machine and thread it through the prism, and at very high speed - fifteen times normal speed - wind down to where that shot was supposed to be. But since the film is running through the prism, I would be looking at everything on the roll down to 563 feet. And always - well, eight times out of every ten - I would stop long before I got to where I was going because I would have seen something much better than what I was looking for. <br />It’s the nature of the human imagination to be able to recognize ideas more powerfully than we can articulate them - the old “I don’t know what art is, but I know it when I see it.” When you are in a foreign country, you can always understand more of the language than you can speak. To a certain extent, every film that you make is a “foreign country,” and you are learning the language of that country when you make the film. Every film has (or should have) a unique way of communicating, and so you are struggling to learn this language, but the film can speak it better than you. So, in the search for what I wanted, I would find instead what I needed - something different, better, more quirky, more accidental, more true than my first impression. I could recognize it when I saw it, but I could not have articulated it in advance. Picasso used to say “I do not seek, I find” - which is another way of getting at the same idea.<br />The big selling point of the Avid is random access. The advertisements say, “Get instantly to where you want to go. All you have to do is tell the machine and it will go there instantly, like the perfect assistant.” Yes, true enough, but that’s actually a drawback for me - I don’t always want to go where I say I want to go. That just gives me the starting point. I want the material itself to tell me what to do next. 
<br />Now, technically, there is nothing that prevents me from using the Avid as a linear machine - you can organize the material in large blocks and scroll at high speed through them just like on a Kem. But it’s so easy to use random access that by default, it rules your decisions. How do you control your impulse to be immediately satisfied? I want what I want, so the machine gives it to me like the Genie in the Lamp. But something has been lost. Oscar Wilde: “When God wants to punish somebody, He gives them what they want.”<br />Technically, I should also add that there’s a subtle but profound difference in how film and digital move at high speed. On linear film machines, like the Kem, when you go at ten times normal speed the machines achieve that speed by reducing the amount of time that any one frame is seen by 90%. So a frame is on for 1/240th of a second, not 1/24th of a second - very fast, but it’s still there. You still can catch a little something from every single frame. But the digital systems can’t do that, by the nature of their design: they achieve ten times normal speed at the cost of suppressing 90% of the information. So if you say to a digital machine, “Go ten times faster,” it will do so, but only by showing you one frame out of every ten. It’s like skipping a rock across the surface of a lake. As a result, when you run at ten times normal speed, you are not seeing 90% of the film. Whereas when you watch sprocketed film at high speed on a Kem, you see everything. I’m always amazed at how subtle the human eye is, even at those high speeds, at detecting tiny inflections of looks and expression and action. But I can’t see them in the Avid because of the skipping process that is used.<br />Maybe this is the reason that I have resisted using Avids as linear systems. I’ve been unsatisfied because they can’t really do it very well. 
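The two high-speed strategies can be sketched in a few lines - a toy model only, with the numbers from above, and not how either machine is actually implemented:

```python
# Two ways to play ten times faster, illustrated on a strip of numbered frames.
frames = list(range(240))  # ten seconds of film at 24 fps

def kem_style(frames, speedup=10, base_display=1 / 24):
    """Mechanical fast-wind: every frame still passes the prism,
    each shown for one tenth of the normal time (1/240th of a second)."""
    return [(f, base_display / speedup) for f in frames]

def avid_style(frames, speedup=10, base_display=1 / 24):
    """Digital shuttle: normal display time per frame,
    but only one frame in ten is ever drawn."""
    return [(f, base_display) for f in frames[::speedup]]

kem = kem_style(frames)
avid = avid_style(frames)

print(len(kem))    # 240 -- you still see every frame, just briefly
print(len(avid))   # 24  -- 90% of the frames are skipped
```

Both playbacks finish the ten seconds of film in one second of real time; the difference is that one compresses every frame, the other discards nine out of ten.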
Technically, I think it’s a profound problem - something built into the nature of video monitoring: it takes just so long to scan a frame, and things can’t go any faster than they do. Whereas it is easy to increase the scan rate of a Kem by simply speeding up the rotation of the prism.<br />Let me show you a couple of clips from “The English Patient”. <br />The first one is a fairly straightforward example of what can be created in post-production through the subtle integration of editing and sound: the interrogation of Caravaggio (Willem Dafoe) by the German officer Muller (Jurgen Prochnow), who has just threatened Caravaggio. <br />In the scene as it was shot, Muller becomes frustrated with Caravaggio’s lack of cooperation and proposes to punish Caravaggio simply for adultery according to Moslem law, by cutting off his hands. “Or is that for stealing? You must know. You were brought up in Libya, yes?”<br />Caravaggio becomes frightened and pleads “Don’t cut me.”<br />Muller (knowing very well that it was): “Or was that Toronto?”<br />Caravaggio (ashen): “Don’t cut me. Come on.”<br />Muller then proposes a bargain: names for fingers. For every name Caravaggio gives him, Muller will spare one of his fingers: “I get something, you keep something.” <br />Since Caravaggio does not respond, Muller orders the Moslem nurse in attendance to cut off Caravaggio’s thumbs.<br />In editing this scene, it occurred to me to play with the idea that Muller was initially using this threat as a test, not necessarily intending to go through with it. I read somewhere that Germans in Muller’s position were most upset not by resistance, but by fear. If someone opposed them with strength, there was a grudging respect. But if fear was shown, then they increased the pressure, hoping for resistance at a deeper level. If they didn’t get it, they increased the pressure still more, etc. 
- a self-reinforcing spiral that had no end except the destruction of the victim.<br />What interested me, then, was the moment the spiral begins to really curve in on itself, the moment that Muller thought: “I am actually going to do this.”<br />To achieve this with the material we had, I did two things: separated Caravaggio’s two “don’t cut me” lines, putting one much later than the other, after the bargain of “fingers for names”; and, immediately following this repositioned reading of “don’t cut me”, I created a sudden silence on the soundtrack, accompanied by a pause in the action, allowing the atmosphere to congeal towards the inevitable result. What in music would be called a luftpause.<br />To prepare for this moment, we had created and maintained a quite active offstage sound ambiance throughout the first part of the scene: ambulances, planes going overhead, people walking around, muffled voices, a couple of guns going off. And the room was filled with the sound of flies. But at the moment that Caravaggio decisively shows his fear, and Muller actually decides to cut off his thumbs, all of these things stop. Even the flies stop buzzing.<br />Then Muller moves into action, and the scene shifts to another gear. In musical terms, after the luftpause there is a key change.<br />Let’s take a look.<br />(CLIP)<br />This moment in the interrogation is typical of many “editorial” moments throughout the film. Just as an actor brings different readings and interpretations to the script, often surprising things that the writer/director did not originally have in mind, the editor can (and should) bring his own interpretations to the final assembly of the material. Just as with actors, however, it is finally up to the director to decide whether these new interpretations are things that he wants to have in the film.<br />What I’m going to show you now is the scene in which Caravaggio asks the Patient his final questions about what really happened in North Africa. 
<br />In the original assembly of the film, there was a subsequent scene up in Hana’s room, the breaking up of her relationship with Kip, in which Kip asks her to come to India with him and she declines. That scene was finally dropped because it was linked to something else that was eliminated from the film: Kip’s reaction to the news of the bombing of Hiroshima.<br />Any time a scene is dropped from a film, however, the elements out of which it was composed are now “loose” and available to other scenes that may possibly be able to make use of them. There has to be matching continuity, of course (costume, lighting, etc.), but with digital editing - particularly the “vertical” kind of editing I was talking about earlier - you can make something work that might not otherwise be suitable. <br />What I’m going to show you is a good example of this, and an indication of the kinds of things that will be happening more and more as digital technologies take hold further.<br />As a result of dropping the Kip-Hana “India” scene, we had available to us close shots of Hana listening to Kip. And we found we could use them to imply that she was listening to the conversation downstairs between the Patient and Caravaggio. <br />So now it becomes a three-way event - the scene downstairs between the two men, and Hana looking down through a hole in the floor (which we found from another scene) through which she could listen in on the conversation. All of this is conventionally edited - it is simply using shots from other scenes, making the color timing match, and putting them in a different context. <br />However, we wanted, at the end of the scene, to have a full shot of Hana (Juliette Binoche) thinking about what she had heard, as closure. The problem was that Kip was in that wide shot. So, on the Avid, I created a split screen, superimposing some material from the back wall over the figure of Kip, and consequently eliminating him from the shot. 
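In modern terms, the operation amounts to copying a block of “wall” pixels over the block occupied by the figure - something like this sketch (illustrative NumPy, with entirely invented coordinates and colors, not the actual Avid tool):

```python
import numpy as np

# A stand-in frame: height x width x RGB. All coordinates below are invented
# purely for illustration.
frame = np.zeros((480, 720, 3), dtype=np.uint8)
frame[:, :, :] = (180, 170, 150)           # the back wall, a flat plaster color
frame[100:400, 450:650] = (40, 30, 25)     # the figure: a dark region on the right

def clone_patch(img, src, dst):
    """Copy the pixel block at src (top, left, height, width) over the block at dst."""
    sy, sx, h, w = src
    dy, dx, _, _ = dst
    img[dy:dy + h, dx:dx + w] = img[sy:sy + h, sx:sx + w]
    return img

# Take a figure-sized patch of bare wall and put it over the figure.
clone_patch(frame, src=(100, 100, 300, 200), dst=(100, 450, 300, 200))

# The figure is gone: the right-hand block now matches the bare wall exactly.
print(np.array_equal(frame[100:400, 450:650], frame[100:400, 100:300]))  # True
```

In practice the patch also has to match grain and lighting, but the principle is just this slice assignment.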
This is what is called cloning - we take those pixels of the wall and just put them over the body of Kip. <br />So now we’re left with a wide shot in which there is only Hana, as if she’s listening to the conversation between Caravaggio and the Patient downstairs, rather than sitting with Kip in the same room. <br />All of this would be only superficially interesting unless it served a higher dramatic purpose, which in this case was making the scene become triangular, as it were, with Hana participating as a listener. This prepares her (and us) for the moment a few scenes later where she injects the Patient, at his wordless request, with a lethal dose of morphine. In the screenplay she did so without knowing all the details of his life - the crucial details about his relationship with Katharine and subsequently with the German army. Now, with this change, she knows everything that there is to know, and so her role in the death of the Patient has deeper currents. There is a deep knowledge behind the act. <br />(RUN CLIP)<br />In retrospect, you might ask: if it’s better for Hana to have been present at this scene, to know all this information, why couldn’t the screenplay have been written that way? Well - many things were different in the original version of the screenplay, and once you begin to alter that structure it is exceedingly difficult to predict all of the interlocking consequences. Sort of like predicting the weather a month in advance. In the past, when we came up against this kind of problem, either we couldn’t solve it or it didn’t occur to us in the first place, or it might have been too expensive, and so we would go down another path. In this case, it was almost as easy to do as it was to think about it.<br />I think my time is up, so I will leave you with this thought: that the key to all editing, but particularly to digital editing, is discovering, for each of you, what the correct balance is between thought and action. 
As I just mentioned, it is easy to think/act on an Avid - to respond immediately to your hunches with action. Kind of like the relationship between thought and conversation. But on the other hand, the facility with which you can sometimes think/act must not keep you from planning ahead, like preparing to give a speech. “Think/act” is fine for casual conversation, but “think.......act” is better for presentations needing a developed structure leading to a certain conclusion.<br />Viewed on a moment by moment basis, film appears (and should appear) to be conversational and spontaneous, but viewed as a whole, it appears to be a highly developed and structured art. It is a question of finding the right approach for the appropriate moment.<br />Thank you very much.<br />Emerging Digital Culture In Audiovisual Production: A Case Study Of The Media Program Of Université du Québec à Montréal<br />Philippe Menard, Université du Québec à Montréal<br />Philippe Menard is a composer working mainly in the electroacoustic repertoire, and a professor of audio production in the media programme of UQAM (Université du Québec à Montréal). Besides teaching and composing, he is also engaged in research into musical robotics, having designed an interactive musical instrument called SYNCHOROS which he has used in live performances since 1986. He received most of his musical training in France with the Groupe de Recherches Musicales, the Groupe de Musique Expérimentale de Bourges and the Paris-based center for computer-generated and new music, IRCAM.<br />Our program at UQAM includes photo, video, audio, film, television and multimedia training. 
This program is definitely more creation-oriented than technology-oriented, even if one of our pedagogical goals is mastery of the instruments of the medium in both the analog and digital worlds.<br />Within less than four years, we experienced a dramatic digital shift: fully digital audio equipment from the beginners’ level up to film postproduction; fully digital sound and image postproduction, brought back from outside laboratories to in-house ones with Avid and ProTools systems; fully digital multimedia equipment with two brand-new programs: a master’s degree and a concentration (or major) within the bachelor’s degree.<br />When I think that only four years ago we had no SMPTE timecode systems, that all sound recorders were analog machines, that all sound and film editing was blade-splicing, that all projects were postproduced in private laboratories, that the only computer around was in the MIDI sound studio, and that the meaning of multimedia was to be found in the combination of lights, slides and video monitors on stage, I can say that we certainly experienced this shift as a real cultural shock, but a positive shock. It certainly is a shock as far as the means and methods of production are concerned, but not the intentions and functions of production. Recording the appropriate sound is not mainly a question of microphone and tape recorder but above all a question of knowing and intending to have this specific sound in this precise environment for a definite meaning.<br />When, on the one hand, the emphasis of training is not on the tool itself but on creativity and, on the other hand, the same coaches (or professors) can bridge analog and digital technology, as ours did, the shift can be softer. I think that our success lies first of all in both of these aspects of our pedagogical approach: continuity in the functions, and continuity in the people training students in those functions, in spite of the discontinuity of the tools. 
I think this is an important factor in the emerging digital culture of the students: they can more easily identify their coaches with the functions of production than with the means or tools. For example, because I compose as readily with a piano or a microphone as with sequencing software, I am regarded very clearly as a composer rather than as a piano, microphone or computer user, which allows the students to focus more on intentionality than on instrumentality.<br />An additional reason for this success can be found in the constant coordination between the professors of the audio, video and computer fields, in their confluence of thought, both professional and academic, and in their profound agreement on the methodology of production.<br />What explains how the same people managed to master both the analog and digital instruments without relying only on skilled technicians is that most of our professors are experimental artists in electronic and computer music, video art, installation and performance art, who gradually or suddenly integrated the computerized tools into their own artistic practice and, naturally enough, introduced them into their teaching. In the same way that experimental audio and video - more precisely, electroacoustic music and video art - influenced the development of cinematography in many countries, certain professors who are also experimental artists subverted our media curriculum to a certain degree in two ways: first, through the appropriation of digital technology and, second, through a renewal of the writing. It is a fact in our history of media training that the technological contamination comes from the audio, video and computer departments. Not surprisingly, the same people allied and fought together to establish new curricula in interactive multimedia at both the graduate and postgraduate levels. 
I cannot deny that this group of professors exerts considerable pressure on the traditional film and television department and is pushing classical storytelling toward a more exploratory approach.<br />In fact, our dynamism comes in great part from this friendly confrontation and mutual influence between these two schools of thought: on the one hand, the school of dramaturgy - a cinema of very definite characters refined through numerous versions of film scripts, of very few characters in mostly quite realistic locations, of fictions very close to the ambient culture and in a way portraying it; a type of creation where production materializes preproduction, or scripting; on the other hand, the school of the experimental (in its broadest meaning), with conceptual design close to the instruments at every step, with creativity distributed across each step from first scripting to postproduction, where the creative potential of the postproduction tools is present in each preceding step; more generally, a school of onirism, of a certain surrealism, where technical lightness allows more spontaneity, more live creativity. Our personality in this emerging writing or language is also shaped by the neighboring visual arts department. Their curriculum also includes video production, but of a far more conceptual, minimalist and static kind, much less narrative. In our media curriculum, the video or television productions keep on defining themselves through the languages of documentary or fiction, but both in a more delinquent or scattered style: storytelling or documentary research can be based on the partitioning of the screen surface, on electronic processing, on montage rhythm, on a mixing “polycony” (on the model of polyphony). There is a clear appeal to electronic stage setting and scenography. Nourished in parallel by classical cinema and conceptual art, these student productions are most of the time fresher, younger, less constrained, closer to the generation which delivers them, more authentic to this generation, at a respectful distance from their masters’ minds.<br />The contamination introduced by a few professors is echoed and propagated by the students themselves. Our communication department and curriculum really offer a simultaneous multimedia environment. For three years, the same students have lectures and workshops in photo, audio, video, film, television and interactive multimedia. So there is a constant dialectic and multilayered exchange between what is perceived as more creative, more exploratory, more progressive, and what is perceived as more traditional, more classical, more dogmatic. In taking over the digital technology, these students shake the whole building. 
For example, those involved in video production influence and put pressure on their colleagues in cinema, and vice versa. Accustomed to video and its liberty of creation, and encouraged by the professors to experiment, some students became frustrated with the more classical and dogmatic ways of filmmaking; frustrated to see some of their audacious ideas crushed for reasons of mentality or equipment. In the traditional filmmaking culture, there is strong pressure to keep a fiction within a certain realism; it is audacious to give it a nonrealistic, poetic or surrealistic flavor. The habit is to stick to strict codes, while more and more students feel the need, in accordance with their own culture, to play with the codes, to pervert them, to expose the artificiality or fakery of any mediation. On the other hand, they benefited from a rich school of dramaturgy, of character description and actor coaching.<br />Generally speaking, our experience so far is that the scripting and filming stages of filmmaking have been very little contaminated by digital postproduction. But postproduction itself has thoroughly changed the face of the projects and has injected a new enthusiasm among the students, who, through numerous critical video screenings, actively participate in sculpting the product: at each step, the montage is tighter, the sound environments and musical proposals more inventive and relevant. Television is a different story: it has been contaminated both by theater and by video, owing to the professor’s affinities with these media. Videography is probably the most impure, the most hybrid medium, with a great permeability to the creative potential of postproduction. In most of the student productions, postproduction is already active at the scripting stage.<br />The art of the experimental artist is based on trial and error, which is one of the greatest strengths of the computer. Since experimental artists are at ease with this process, it is also part of our pedagogical strategy to train the students in it. Consequently, the digital trial-and-error process applied to sampling, scanning, processing, editing, composing, mixing, etc. 
becomes a strong feature of this new digital culture.<br />Experimental artists are often inclined to become multimedia artists; this is in fact the case with many of our artist-professors. For them, the jump into computer arts is very attractive because of the unity of the medium supporting a multiplicity of tasks: the same machine handles words, sounds, pictures and films together; and also because interface metaphors and ergonomics transfer from one application to another. In using various software, the students discover many analogies, with the result that many are not afraid to be trained simultaneously in sound and image. I can cite many cases of students who came from audio and, after a while and with appropriate training, became excellent nonlinear video editors. This unity of the medium clearly draws these students toward an integration of photo, audio and video in an interactive environment. It is certainly another feature of this new digital culture.<br />There is a clear emphasis on montage (editing and processing) in our students' training. I would like to describe our approach through the audio exercises. From the very first audio workshops, the students are introduced to an ideology and technique of montage very much inspired by the electroacoustic music school: interview trimming, and micromontages or clips in three distinctive genres: short-duration relationships between heterogeneous sound objects; chains of musical fragments; and short films for the ear (many French composers, from Pierre Henry to Alain Savouret, spoke of music as cinema for the ears). These basic exercises reflect the nature of our ideology of montage, have a great deal of influence on visual editing, and give the students a large stock of editing analogies transferable to image editing. In the digital shift, with higher quality standards, we were able to offer a reassuring continuity in the corpus of exercises.
Indeed, being well targeted and a bit visionary, these production exercises slide very gently into the new digital world. We can push the students a step further into discovering the documentary side of interview trimming, the fictional side of the larger polyphonic or mixing projects, and the clip side of the more artistic or musical micromontages. By imposing production constraints always on the container and never on the content, we excite the creativity of students already familiar with the culture of sampling, clips and repetition, ingredients present in techno music, for example. Thanks to digital technology, we recorded clear gains in dynamics, in sound clarity or legibility, in temporal continuity, in montage rhythm, and in polyphonic complexity. Through these parameters, the students are constantly at the heart of two major ingredients of audiovisual production: the timeline in editing and its rhythmic structure, and the polyphonic thickness of sound spaces designed according to classical cinema standards: dialogue, sound effects and music. Even without any picture, students take real pleasure in telling stories, inventing sound poems and designing imaginary spaces, all of which are ways of making ear-cinema. All of these techniques and this ideology of production continue to be applied, in more sophisticated ways, in the more advanced film, television and multimedia workshops.<br />Since we introduced the postproduction workshops into our curriculum, we have noticed the following effects. First, as far as sound is concerned, there is a strong constraint toward high-quality sound recording, in order to avoid too large a discrepancy between poorly recorded sounds and richly sampled ones. There is an increasing interest at the filming stage in free sounds and stereo ambient sounds. There is equally an increasing sensitivity to dimensional changes and to the broadening of space through postproduction spatialization.
This consciousness of the left-right, front-back and bass-treble axes facilitates larger framings, more contrasts and a greater exploration of the Z-axis (depth). Opportunities to use music as a character or in subtle leitmotifs lead many students to write their scripts or to film with music. Not only are most of the musical scores original, but they are often shaped as electroacoustic sound designs by talented non-musician sound designers, as in one of our exercises.<br />As far as image is concerned, the speed of editing allows multiple versions from the first assembly to the final cut and brings the product very rapidly to a tight and meaningful montage. During the past winter we postproduced six documents, three on the AVID-PROTOOLS combination and three on AVID alone, in less than six months: a real breakthrough, and a challenge which many colleagues thought I would lose. We succeeded for three reasons: highly stimulated and talented students; well-oiled logistics and scheduling; and competent, professional support. On such a tight schedule, filming weaknesses can be detected sooner, and many a film director wishes to reshoot some scenes or consult archives. This trial-and-error process suggests that in the coming years we might ask for an 8-minute portrait from no more than 30 minutes of rushes. Through experimentation with many montage concepts, one can expect that the most relevant and original portrait will soon emerge in the editing.<br />In the other media, our approach is very much inspired by music seen as a large polyphony (space and mixing) unrolling in time (objects and montage). This concept of polyphony, transferred to "policony", that is, a mix of sound and image objects living in magnified real or imaginary spaces, is very attractive to the students. Video, the medium of impurity, of mixing, of hybridization, of cohabitation, of collage and of quotation, with its unfathomable range of electronic effects (framing, screen partitioning, superimposition, sizing, colorizing, etc.),
has inspired many projects: with divided screens illustrating simultaneous times and spaces, a way of being polyphonic on screen; with characters filmed alone for keying effects, to create more imaginary, more metaphoric encounters in contracted or expanded surrealistic spatio-temporal dimensions; with sounds and images in true counterpoint, without any redundancy; with embedded stories, all different but linked to a common theme through appropriate icons or "iphons" that form the basis of a parallel montage. The postproduction instrumentation clearly influences everything from the successive versions of the script to the final technical sheets. Owing to the video art of the seventies and eighties, there is a clear trend toward shifting the pole of videographic creation to the postproduction stage. As we can see nowadays in advertising, it is not rare that filming consists of recording a few visual elements which will then be reframed, animated and processed in virtual spaces. The video medium, with its stereo spatialized sound and its integration of fixed and animated images and of informative, graphic or poetic texts, includes all the meaningful parameters of multimedia except interactivity. <br />The multimedia productions are designed in the same spirit, but with real gains in instantaneity, in the relationships between text, image and sound elements, and in the transfer to the user of the final organization of the product. One interesting aspect of the emerging digital culture is collective work. Owing to computer technology itself and to large classes of 25 students, many interactive works were polarized around tribal themes: a tribe, a family or a group of persons, with stories related to each member; we enter the work through a character or a part of his story, with total freedom. Within this same metaphor (members of a set), we find the rooms of a museum, the apartments of a house, the levels of a building, the doors of a corridor, etc.
In the emerging interactive scripts we encounter the multiple lives of a character, each life chosen according to a specific statement; a single story (a trip, a murder, a love affair, etc.) told by different characters with their own biases (as Durrell did in the Alexandria Quartet); and objects in a room as starting points for the story of somebody's life, his adventures, his encounters, etc. Some students are exploring algorithmic random montage, the structuring engine depending on a random generator and on conditions linked to external sensors supplying distance, duration, weight or temperature information. With interactivity, polyphony, policony, time and space become alive, developing according to external events and to the software conditions placed on them.<br />I will conclude with this reflection: the current trend in student film, television, video and multimedia production seems to lie in the impurity, mixing, hybridization, cohabitation, collage and quotation that yesterday were specific to video and today pervade all the audiovisual languages.<br />New and Hybrid Forms of Drama - 1<br />Chris Hales<br />Chris Hales is a doctoral researcher in 'Interactive Film Art' at the Royal College of Art Film & TV department (London) and a Senior Lecturer at the UWE Faculty of Art (Bristol), and also does some freelance work for Research Arts. 'Twelve', a CD-ROM released in November 1996 on the experimental Laboratory label by Research Publishing Ltd, consists of 2 interactive movies. 'The Twelve Loveliest Things I Know' installation was awarded the Prize for Artistic Excellence at the ARTEC exhibition in Japan. Other exhibitions in Holland, Germany, France, Australia, Canada and England have often been combined with participation in conferences as a speaker.
<br />His interactive CD-ROM films have been shown at: Melbourne International Film Festival 1996; Brisbane International Film Festival 1996; Techne; IMAGO Centre, Perth 1997; Oberhausen Kurzfilmtage 1997; and FCMM (Montreal) 1997. Museum installations showing his interactive movies have been presented at: ARTEC95 Nagoya, Japan; EMAF Osnabruck, Germany 1995; IMPAKT Utrecht, Holland 1996; Language of Interactivity, Sydney, Australia 1996; Royal College of Art, London 1995; MILIA New Talent Pavilion 1995, Cannes, France; LoveBytes Festival, Sheffield, UK 1995; and ARTEC97 Nagoya, Japan ('SAUDADE').<br />He can be reached at email@example.com or firstname.lastname@example.org<br />As an outsider to the field of film and television training, my first observation is that while we talk about new technology, I think we’re still actually talking about an old product. We’re still talking about using new technology to empower greater creativity and new ideas, but a lot of people who teach film and television worry about how that’s going to affect their production process. Yet the process about which they are concerned is an old product: a linear film, a series of moving images edited together. We mustn’t think that new technology is merely a new means of making traditional films.<br />What interests me is work that is interactive. One can’t even say “non-linear”, because that phrase could apply to many kinds of films that you see at the cinema, with editing and flashbacks and similar devices that alter the consecutive linear flow of time. One must not think that just because something is “digital” it’s different. I don’t see much experimentation with interactivity going on in film schools, but in the UK there’s quite a lot in art schools. If I were a film school teacher, that would worry me.
There are a lot of people, illustrators and graphic designers, who are doing this experimentation, and I can hardly think of anything worse than graphic designers doing it.<br />I’d like to contrast two examples, one three years old, the other three days old. They’re actually quite unified, in that in both cases you’re looking at a single stream of moving images, but the stream can be changed. They are both interactive. There has been much discussion about the concept of so-called interactive movies, but they’ve been done really badly. Typically we get to pick one of three endings: the girl commits suicide, or she goes with one bloke, or she goes with the other, and the audience member gets to click on one of them. It’s all obvious, but that’s what’s being done in commercial CD-ROMs and similar formats. The kind of models that people are using for interactive TV are just not imaginative. Yes, we can change the viewpoint, we can watch it from this side or that side, or change the character, or look from one character or the other. In my opinion, if that’s the best people can do, it’s not even worth trying. My work attempts to be slightly more experimental. I do some of my work as part of an academic research project, but it’s actually very difficult to quantify what it is you’re trying to do. So the best feedback that I get is through making these things, showing them, and actually getting comments from the public.<br />(What follows is an interpretation of Chris Hales’ comments on his demonstration of interactive software. It is impossible to reproduce the programs in text form, but the gist of his comments is useful in understanding interactive storytelling.)<br />From the audience’s perspective, all of my work is predicated on the fact that the audience member has to click at the right place at the right time. In fact, cues about the right place are coded into the visual look of the piece as part of the art direction.
The work also signals to the audience members when they can or can’t interact.<br />It’s very difficult for me to talk while I show the work because I’ve also got to interact. It’s also very difficult for people who just observe, because for it to make real sense you should be interacting, getting feedback the whole time through what you’re doing. It’s actually quite difficult to watch someone else do this, because it’s not really a story: it’s a guy in his apartment, he’s got to get through an interview, and we can maliciously hinder him and derive some very simple pleasure from watching things bang him on the head. That’s it. In this fairly primitive experiment we are not continually interacting, but are allowed to intervene only at certain points.<br />I don’t consider that I’m really into looking at new narratives or new stories. There’s very little to the story, and in my mind the work is not really about storytelling.<br />Of course you may say that there are only ten possibilities for interaction in the example. It is all pre-recorded, and things already have been structured to make sense, but I think of my work, quite simply, as being like a reservoir or repository of related moving visual materials that can be edited together by the viewer so that they make sense. It is a finite supply of materials, and one of my tricks is to put so much in there that it requires a lengthy period of exploration by the viewer. But that doesn’t stop people from enjoying the interaction.<br />Another work can be thought of as algorithmically driven, in that it remembers all the different things that the viewer has done and modifies the behavior of what you see on the screen as you go through it. In other words, it’s not only interactive but adaptive. It’s almost the same story, but in this instance it’s a man in his apartment who just wants to watch the soccer. He’s one of these people who likes to have a nice, tidy apartment.
The reason you want to interact is captured by a German word, schadenfreude, which means something like taking malicious pleasure in watching someone in difficulty. There’s a lot of schadenfreude in software.<br />The idea in this example is that the subject on the screen will try to resist. An important difference in this example is that there are options in the frame that the user can choose to interact with, and each option has a different outcome. If you click on the champagne you can shake it up. The choice of interaction and outcome begins to set up a scenario. Much later in the piece, we can imagine that the whole situation will explode. The subject on the screen is trying to have a quiet life, and we are trying to disturb him. There are some chess pieces on the screen, and we disturb him by clicking on them and causing them to fall to the floor. He obviously wants to drink that wine, so he’d better do something about that. There’s a whole scenario around that; it’s a whole self-contained world. I’ve organized the scenario so that he’s going to have to replace the wine. The way the program is written, he’s not going to clean it up. And you can do that ten times, because the secret is that he’s got a whole fridge full of wine and chocolate eclairs.<br />As the program runs, at some point he will suddenly get up and reverse all the damage that you’ve done: he’ll pick up the chess pieces, he’ll replace the books, he’ll run into one of his rooms, he’ll get the vacuum cleaner out and he’ll clean up. Eventually he’ll get it back to the very beginning. Towards the climax he’s virtually given up all hope, he’s trashed his apartment, and meanwhile the phone rings, but it’s buried under trash. He finds it, but there’s no sound on it. There’s a massive climax with the champagne exploding, and so it goes on. These are very low-budget productions, about fifty British pounds Sterling for a few Hi-8 tapes.
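The adaptive behavior described above, in which the piece remembers the viewer's clicks and the on-screen character responds to the accumulated history rather than to each click in isolation, can be sketched in a few lines of Python. This is only an illustrative toy with invented names, thresholds and wording, not Hales's actual program:

```python
class AdaptiveScene:
    """Toy model of an interactive movie that remembers what the viewer
    has done and changes the on-screen character's behavior accordingly."""

    def __init__(self):
        self.damage = []        # objects the viewer has disturbed so far
        self.wine_bottles = 10  # the fridge holds only a finite supply

    def click(self, obj):
        """The viewer clicks an object; the outcome depends on history."""
        if obj == "wine" and self.wine_bottles > 0:
            self.wine_bottles -= 1
            return "he fetches another bottle from the fridge"
        self.damage.append(obj)
        if len(self.damage) >= 5:  # enough damage has accumulated:
            self.damage.clear()    # he gets up and tidies everything
            return "he reverses all the damage and cleans up"
        return "the " + obj + " falls to the floor"
```

The point is only that state (`damage`, `wine_bottles`) persists between clicks, so the piece is adaptive rather than merely reactive: the same click can produce different outcomes at different moments.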
<br />My experience suggests that certain very specific genres of “film-like things” will work well in interactive mode, and others won’t. I don’t think, for example, that I could make a suspense or a horror movie. There’s something implicit in interacting that makes me believe you just can’t sustain suspense, but obviously slapstick comedy is a very good genre to do interactively. I think that documentary-based material can work quite well. <br />I did a piece called “The Twelve Loveliest Things I Know.” I talked to and filmed a lot of children and asked them to tell me the twelve loveliest things that they know. I could talk for an hour on the reasons behind this, but basically the idea is very dreamlike, more like visual poetry than watching a film. Anything that’s colorful, that stands out, is intuitively meant to be clicked on, and in doing that the work develops in a dreamlike way. It’s about serendipity, when things happen almost by chance. <br />These actions trigger certain themes. The children talked a lot about speed and the feeling of just being free. We have a bright red object on a monochrome ground that attracts us. Through the metaphor, in this case clicking or touching, we’re able to generate material related to it. It sounds really cold when you try to analyze it like that, but it’s not cold at all; it is a very intuitive kind of thing. You can click on the dog, you can click on the clouds, you can click on the sun. There are about eight different branching points. Everything happens in threes, isn’t that right? In this piece you go up here three times and you get to a kite, and then it all changes. Without my going into a lengthy discussion of why this piece came about, it should be enjoyed on its own visual merits.
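A branching hotspot of the kind just described, where "everything happens in threes" and the third activation of a clickable object leads the piece somewhere new, might be modeled roughly like this. The names and clip texts are invented for illustration; the actual piece is visual, not textual:

```python
class Hotspot:
    """A clickable on-screen object that triggers themed material; every
    third activation branches the piece to a new scene (the "threes" rule)."""

    def __init__(self, name, clips):
        self.name = name
        self.clips = clips   # pre-recorded material related to the object
        self.activations = 0

    def click(self):
        self.activations += 1
        if self.activations % 3 == 0:  # third click: branch the piece
            return self.name + ": branch to a new scene"
        # otherwise cycle through the related clips
        return self.clips[(self.activations - 1) % len(self.clips)]
```

With about eight such hotspots (the dog, the clouds, the sun, and so on), each holding its own pool of material, the viewer's exploration drives the dreamlike development without any fixed linear order.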
<br />New and Hybrid Forms of Drama - 2 <br />Uncharted Interactive Cinema: Simulation, Power, and Language Games <br />Hilary Kapan, California Institute of the Arts<br />Hilary Kapan teaches at the California Institute of the Arts. He received his MFA from the University of Oregon in experimental art, animation, and interactivity. Previously, he taught at the University of Maryland and the Cleveland Institute of Art. He has received many honors, most recently First Prize at the University of Alaska Anchorage National Juried Computer Art Exhibit. Kapan’s professional activities include serving as Artist in Residence at Curtin University, Perth, Australia; panelist for the Candido Mendes Centro Culturale, Brazil; producer of the “Art Transition Computer Animation Program”, Boston; and juror of the Montgomery Community College Student Art Exhibit. He has authored several grants, including an AMAMA grant for “Investigation of Visual Complexity Limits Through the Integration of Temporal Imaging Techniques.”
Selected exhibitions of Kapan’s work have been shown at Siggraph in past years, as well as in different showings in San Francisco, Rio de Janeiro, Montreal, and Linz, Austria. He also has professional affiliations with the Inter-Society of Electronic Art and with Ylem, a California-based new-genre arts organization.<br />I’d like to discuss some notions I have about interactivity. In particular I’d like to talk about the best known form of interactivity, the point-and-click kind of interface, the kind of interface that we’re used to seeing whenever we use a computer. If you’re using Windows 95 or Windows NT or a Macintosh or an Amiga or a Silicon Graphics workstation, you are probably using a point-and-click interface, in which you move a mouse and a cursor mimics the motions of the mouse. So I want to talk about point-and-click, since it’s probably used in 95% of computer interfaces, but I’d also like to introduce you to some notions that I have about other kinds of interfaces, or other sorts of interface possibilities. Along the way it will be important for me to talk about a few other notions that I think are very important but not yet really talked about, even among most electronic artists: the power relationships between the work and the viewer, and various notions of simulation and language games.<br />There’s a kind of sub-culture among interactive artists, and I would include some people who would think of themselves as interactive cinema artists of some sort. I would include myself in that category, although after you’ve seen what I’ve done, you may wonder what the connection possibly might be. I hope to get to that. This sub-culture of artists is really rebelling against the notion of point-and-click interfaces. These artists do not like hypertext.
They scoff at hypertext and say, “Oh, it’s just clicking on a word and then going to something which explains exactly what the word might mean.” I have some sympathy with that group, but I also see that there’s something obviously very useful in a point-and-click interface, so I’m not prepared to throw it out entirely. I do think, though, that we’re just beginning to explore the interactive realm, and point-and-click is the heritage that we have inherited from the folks at Xerox PARC. It was the Xerox Corporation’s Palo Alto Research Center that essentially created the Graphical User Interface (“GUI”, pronounced “gooey”), which was then appropriated and refined by Apple Computer, who in turn tried to sue Microsoft for stealing the idea that they had essentially stolen from Xerox. They lost the case. <br />First, a couple of things about my attitude toward interactivity and toward the ideas that I will present: I’m just making them up. I don’t necessarily believe them. It’s true, I don’t necessarily believe them, and the reason is that by now I’ve seen a lot of my ideas come and go, so I’m not prepared to pronounce some definitive notion of interactivity. I also think that the title of this talk really should be something like “Why is Interactivity Sick?” or “What’s Wrong with Interactivity?” Even though I don’t necessarily believe everything that follows, I think it is useful. <br />There are a number of kinds of interactivity; point-and-click is used in probably 95% of interfaces. There are some others that artists have explored, and I truly hope that as cinema people, you and your colleagues (and I have to include half of myself, because I studied in a film program as well as an art program, so I wear a couple of hats) don’t let the artists take control of interactivity, because they will take it in a certain direction, while people coming from cinema can take it in another, very different direction.
<br />We have point-and-click, but we also have a form that I would call “passive viewer interaction.” In this example, the viewer walks into a room where there’s a video camera pointed at the viewer, and the video image of the viewer is then composited or manipulated along with some pre-shot footage or footage from another location. That can be interesting, but the next possibility is even more interesting. I call it “willful user interaction”: in other words, work in which you’ve got to do something in order for the piece to give you anything back. There are all kinds of varieties of willful user interaction. In contrast to that, there’s another type which Sarah Roberts, a colleague of mine at Cal Arts, calls “interpassivity,” as opposed to interactivity, in which a computer and another computer, or maybe several computers, all interact, and the viewer really is a viewer, as opposed to a user or an interactor. The human arrives and watches some computers interact. You may think this is ridiculous or kind of Frankensteinian, but just wait, it gets worse. <br />Then there is work I would call “reactive.” Most point-and-click interactivity is in this category. You point at something with the mouse, click, and something happens. You see another image, or it starts a program, or maybe you click and a menu pops down and you drag the cursor to select something. You react to the options offered. Most CD-ROMs, and even more of the material on the internet, are primarily reactive. In contrast to this, the artist Jim Campbell, who’s a brilliant guy, talks about “responsive” work. He writes: <br />“I find it useful to put interactive work on a dynamic spectrum with controllable systems on one end (in other words, point-and-click systems), and responsive systems on the other. In controllable systems the actions of the viewer correlate in a one-to-one way with the reaction of the system. Interactive CD-ROMs are on this end of the spectrum, and generally speaking, so are games.
In responsive systems, on the other hand, the actions of the viewer are interpreted by the program to create the response of the system, so there’s some level of mediation. The program interprets what you’re doing rather than simply reacting to it.” <br />This is a very important distinction, I think. The viewer can sense this. Jim goes on to say:<br />“If a work is responding in a predictable way, and if the viewer becomes aware of the correlation between their actions and the work’s response, then they will feel that they are in control, and the possibility of dialogue is lost.”<br />I think dialogue is going to be very important for interactivity, but it does not exist yet. That may be an overstatement or an overgeneralization. Jim has a nice example:<br />“The first time I walked through an automatic door at the supermarket, I thought the door was smart and was responding to me. Now I step on the mat to open the door on purpose. The point is that often, the first time an interface is experienced, it’s perceived as being responsive, but if the interface is experienced again, it becomes controllable. The second time, it’s not a question but a command.”<br />So that tells us something about point-and-click. I think you could argue that point-and-click may have had its roots in the defense industry. You’ve got a big panel of buttons that have to control missiles or defensive aircraft; in war the idea is to have complete and precise command and control, so you have buttons and sliders which give you complete control. Within cinema, on the other hand (and this is perhaps an overgeneralization), you get to sit in a dark room and give yourself over to the filmmaker and to the film, and there’s something very valuable in that. We all know this. But with television, especially television with a remote control, what do you have? You sit there on the couch and you click.
I have friends, not all of them male, who will wait for about three seconds, and then they’ll click again, and they’ll click again. It’s like a search for the Holy Grail of television, but they never find it, because they have to keep moving in order to search. <br />So giving the user complete control, which a lot of people, including myself at one point, have cited as the benefit of interactivity, is not necessarily always a good thing. In fact, I often think it’s a terrible idea, because much of what’s wonderful about cinema comes from the fact that the filmmaker has some power over the viewer, and the viewer surrenders to that power, and that’s lost if the viewer has complete control. <br />I’m personally interested in responsive systems. For example, let us consider the interactive simulation of phenomena. The phenomenon might be a weather system; or, were I to throw this water into the audience, the water arcing through space, the fluid dynamics, the gravity, all of those would be phenomena. In response, however, there would be various kinds of behavior evidenced in the audience, probably attacking me or sending me off to a quiet place where I could rest. If we were to do this on a computer system, then we might create a simulation of behavior. <br />Finally, there’s cognitive simulation. I think of this as in some ways the most difficult kind of simulation. It is the simulation of mind, or of some kind of sentient entity, and it’s both frightening and perhaps abominable, but also very intriguing. I mean, we have Mary Shelley’s “Frankenstein” and its various re-tellings, and we have tales of animating the inanimate, golems made of clay for example. I think humans have a powerful drive in this direction, and it’s one that will probably be evidenced in the future in interactive cinema that uses some cognitive simulation. <br />There are a number of metaphors that have been used to coordinate or structure interaction. I’ll describe them very briefly.
<br />Navigational - You’re in a room and you move to another room by pointing and clicking a point on the left of the screen, and as you look, the screen gives you a view and a sense that you’ve turned to the left. You’ve probably seen multimedia pieces of this sort. <br />Branching - I’d like to distinguish branching from non-linearity. As some have pointed out, “Pulp Fiction” is in some sense a non-linear narrative. I remember sitting watching it for the first time when suddenly I realized that Vincent Vega, the John Travolta character, is already dead. So there’s a certain kind of non-linearity there. However, branching is quite a different thing, and not the same as non-linearity, although obviously there are some elements of non-linearity in it. I think true interactive branching is a bit of a problem. Actually, one of the few cases in which branching works for me is in some of Chris Hales’ work, in which you get to do various things to this person who’s trying to make it to an appointment. I think that becomes more than just branching; it becomes something more along the lines of a dialogue or interplay, and there’s some element of simulation as well. <br />The desktop metaphor - that’s the metaphor of having various files and folders, a la Windows 95 and Macintosh. <br />The town or city metaphor - some programs now use a graphic of a town, and you can click on various buildings in order to access certain kinds of information. <br />And then there’s non-linear vs. alinear. I’m inventing the term alinear. I would call “Pulp Fiction”, in some sense, a bit of a non-linear narrative, but alinear would involve a narrative that is created as you go. In other words, by working through a piece, a narrative is created that was not pre-programmed.
Now this is very hard to do, and I don’t think anyone has done it successfully yet, but given computer power that is strong enough and fast enough, with enough memory, it can be done, given enough attention to software. I know that Microsoft has some ideas in this regard, but I much prefer that the world not be created by Microsoft. <br />Let us turn to notions of simulation. My apologies to Baudrillard; I’m not a Baudrillard scholar, and I’m only pretending to be one now. Don’t worry, I’m not going to go into a long discussion of Baudrillard, and he talked about the notion of simulation in a way very different from how I think about it, but nevertheless there are some linkages. Baudrillard talked about Disneyland and said that in fact Disneyland is not only, in some sense, a simulation of America, but it simulates an America that does not exist. Disneyland exists, according to Baudrillard, in order to create the illusion that the rest of America is actually real, when in fact the rest of America is not real. Computer scientists, by contrast, want to simulate the weather, or other things, very literally.<br />Sarah Roberts did “Elective Affinities,” a video installation in which four different characters exist on four different screens. Each screen has one character, which is basically a laserdisc image and voice, and the remarkable thing about this work is that each character has a kind of “emotion engine” that simulates, in some sense, the emotions that a person might have as he or she goes through a particular kind of situation. In fact each of the four characters, each image, each laser disc, is controlled by a computer, so there are four computers, four laser discs and four screens, and you can walk up to each one and listen in to the private thoughts of that person.
Somehow, and it’s hard to convey this, there really is a sense in viewing and listening to this piece that there’s activity going on, that there is some kind of processing going on behind the scenes. These people have a kind of dynamic existence that’s being played out - in other words, there’s a narrative being created by these people as they go through their thoughts, and while there’s no randomness involved in this work, Sarah Roberts cannot predict what kinds of thoughts will occur for each person. <br />Conceptually, they’re all driving in a car. The people have defined relationships to one another: the man driving is having an affair with the maid behind him; his wife is across and behind him, and her friend is in front. <br />Sarah Roberts wrote out the text, essentially twelve different narrative bits, each one a thought in a particular emotional state. So this represents one of the characters - all of their possible thoughts and all the possible emotional states that they could have. In a way it’s like the script for the work, except that each character is controlled by a computer and may look at or glance in the direction of another character and in so doing affect that character’s emotional state. The emotional state, in one way or another, will cause that character to change, have different thoughts, have different responses and look at someone else. And so there’s this kind of subtle interaction between all of them. This is an example of what I would call an alinear narrative. There’s a narrative created on the spot. It’s like the difference between me reading a paper and attempting to come up with the words as I speak. <br />I began “Blind Date” in 1993, and I was trying to poke fun at the infatuation that many people have with technology. 
At the time there were increasing amounts of pornography on CD ROM and the internet, accompanied by a kind of excitement among a lot of people, not just men, about the eroticism of computers. I thought this was ridiculous, and so I wanted to do a piece which was a kind of spoof or farce or satire. <br />So I created an image on screen of a hand, and you can rub the hand in various ways in order to arouse it. It’s an example of a kind of behavioral simulation, and also an example of a kind of language game that is just the tip of an iceberg I think is quite large. This was shown as an installation in Paris, and what was typical there, and at some other installations, is that when one person started to play with the piece a crowd would form, and the crowd would then try to help the person interact with the piece - giving suggestions on how to please the hand, how to do what the hand liked.<br />This is supposed to be absurd. You have to figure out what the hand likes in order to arouse it, and it goes through different erotic states. I intentionally made it so the participant has to be very vigorous. Remember, it’s a caricature of eroticism. <br />How could this possibly relate to cinema? It doesn’t, but there is a kind of game at work, a kind of language game a la another French theorist, Lyotard, talking about Wittgenstein’s notion of language games. A language game has certain rules. You must have rules, and any action that does not follow the rules is perceived as irrelevant. In this case, the rules are a little different for the computer and for the viewer. The viewer has various kinds of rubbing motions at his or her disposal, and they actually do cause different results depending upon the mood of the hand. Sometimes the hand will just be less responsive - it’s got a headache that day. In addition, the hand is making various kinds of exhortations. 
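The kind of rule-bound, mood-dependent response logic described here can be sketched in a few lines of code. This is a purely hypothetical reconstruction - the class, mood names, and vigour threshold are all invented, not Kapan’s actual implementation - but it shows the language-game idea: only gestures that follow the rules count as moves, and the same gesture produces different results depending on the work’s internal state.

```python
# Hypothetical sketch of mood-dependent response logic, loosely in the
# spirit of "Blind Date". All names and thresholds are invented.

class Hand:
    MOODS = ["indifferent", "interested", "aroused"]

    def __init__(self):
        self.mood = 0           # index into MOODS
        self.responsive = True  # the hand may simply "have a headache"

    def rub(self, vigour):
        """React to a rubbing gesture; timid gestures are not valid
        moves in the game and produce nothing."""
        if not self.responsive:
            return "The hand ignores you."
        if vigour < 5:
            return "Nothing happens."
        if self.mood < len(self.MOODS) - 1:
            self.mood += 1      # a valid move advances the erotic state
        return f"The hand seems {self.MOODS[self.mood]}."

hand = Hand()
print(hand.rub(2))  # too gentle: "Nothing happens."
print(hand.rub(8))  # vigorous enough: "The hand seems interested."
```

The point of the sketch is the asymmetry of the rules: the viewer’s repertoire of gestures and the hand’s internal mood are different state spaces, and the dialogue between them is what creates the sense of a behaving presence.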
It’s a very narcissistic hand, and I think that’s an interesting metaphor for interactivity as well. <br />“Blind Date” is different from many pieces that have been done, and Sarah Roberts’ “Elective Affinities” is quite different as well, in that they both use simulation in a fairly novel way. Simulation of appearance is one kind of simulation, such as the simulation of the appearance of dinosaurs in “Jurassic Park.” Another kind of simulation is simply the simulation of a set or a location within the production of a film.<br />There is the simulation of a sequence of events, which cinema does so very well. We can think of cinema as storytelling, but we could also think of cinema as simulation. But things shift a bit when we get to the simulation of process. In order to simulate process, I think a computer or some kind of processing machine is needed, and if we move to the simulation of phenomena such as weather, or the arc of water as it flies toward you, that definitely requires a computer. <br />Most interesting for me is behavioral or cognitive simulation. In Sarah Roberts’ work “Elective Affinities” there is some sense of cognitive simulation: there are different states of mind, different thoughts that can be called up. In “Blind Date” there’s some sense of the simulation of behavior - the behavior of the hand, more in terms of what it’s saying in its state of arousal. In both cases we’re actually using computer variables in order to do this. <br />Finally, there are language games. For cinema, I think language games that involve some kind of being in the computer, which may be human-like or not, could be much more interesting - and I don’t mean like a space alien, though I’m sure this will be done. I’d like to see some sense of a mind, or a presence, at work in the computer. What will this do to cinema? Frankly, I don’t think it’s going to do very much to cinema. In fact, I prefer to view cinema as a field which can expand. 
We will need actors, and we will need editors and directors and writers to write the work which will create this sense of being. This is a bit in the direction of what Sarah Roberts did with “Elective Affinities.”<br />How does a language game differ from play? Many CD ROMs are trying to exploit play, some successfully but most not very successfully. You’re usually presented with a little environment on the screen and you can do things: click on things and you get little animations, or something similar. It’s a limited form of play, but I would say play is different from a language game. A language game involves rules; it involves an exchange of power. When you talk with someone else, there’s a language game involved. I think that’s much more interesting.<br />What about regular games? In my view, probably the only really successful interactive works are video games, if we’re thinking of non-documentary work. I think that in some cases documentary has already been treated interestingly with interactivity. What’s interesting about a computer game? When I was at the Namco Arcade in London not long ago, I saw kids - usually, but not always, teenage boys - playing video games, very intensely involved. What was interesting is that typically, if the video game was good, there’d be a couple of other people watching, and everyone would engage in an exchange during and after the game. Basically, a narrative was developed - a rather pathetic narrative, but just as in my “Blind Date” piece, there would be some kind of narrative, some kind of excitement, some kind of charge or drama, because of the presence of time passing. Time would be of the essence; time is everything in a video game: you’ve got to be fast, you’ve got to shoot something, kill something, use fists or knees or legs or even, in one game, thighs as weapons to snap the opponent’s head - all in a fixed period of time. 
<br />So time is of the essence, and it’s also of the essence in cinema, in a vastly more sophisticated way. To me, what’s positive about a game is that time must pass inexorably, and you can’t go backwards. With most interactive multimedia though, part of my problem is that time can go backwards, you can go back, you can have another look, there’s no urgency, there’s no drama because the user controls the time. But when we sit and watch a film, we don’t have control of time. We have to be there and submit to the film. We don’t have to submit to interactive pieces, but if we’re interacting in a dialogue with a work that simulates behavior, frightening as that might be, then that urgency can come back, because we can’t undo what we said to this work, or in the case of Sarah Roberts’ piece, time continues to tick away, as each character affects the others and they cannot go backwards. They continue to move forward with their thoughts. That’s where I think drama has been lacking within interactive work, and I think that’s why it fails creatively in a lot of cases. <br />Finally, I want to discuss simulation in one other way. I have some potatoes. If I toss a potato into the air, then there’s a certain kind of simulation that I must do in order to catch the potato, especially if the potato is tumbling. I have to guess which way it’s going to land, because if it’s going to land on a pointy side, then I’m probably not going to catch it. So, my brain has to do that simulation very quickly. You may argue whether or not it’s simulation and I probably can’t convince you that this is simulation, but I think what’s interesting about interactivity is that the simulation that exists in the computer can evoke certain kinds of simulations in the mind, just as film does. 
But if I’m pointing and clicking, and I see an image, and then I click on that image and see another image, it doesn’t tend to happen - I don’t have any sense of a being or a presence or a process. The potato as it goes through the air is much more interesting.<br />Something needs to be said about interfaces. The interface for film is one we can enter. We can enter the film, we can become absorbed by the film. The screen does not cause us to bounce off it, as in television. The situation with computers is even worse. We’re always aware of the interface; the interface doesn’t go away. Every time we point-and-click we’re reminded of the interface. <br />I can give you another example of interface. This is from Douglas Hofstadter, who wrote “Gödel, Escher, Bach: An Eternal Golden Braid” - I don’t know how to say it - and the book from which I quote, “Metamagical Themas: Questing for the Essence of Mind and Pattern.” I’m not sure if he gets there, but he has some provocative things to say:<br />Imagine the interface of words. This is an interface which usually disappears, except when we make puns or double entendres - then we’re aware of it. But what if I say, and here I quote Hofstadter, “The whole point of this sentence is to make clear that the whole point of this sentence is.” He reminds us that this would not be anomalous if it were in Italian.<br />Finally, consider the following scenario. You have an interface, and you’re seeing through the interface to processes, simulated states of mind, simulated behavior. You also have a database, containing film clips or sound or images, or other things. The simulation chooses from the database, based upon its own internal thinking and also on the interaction with the viewer. <br />This is different from cinema. With cinema you have an interface: the theater, the projector, the screen. In the cinema, the interface goes away; we don’t care about it. 
But we don’t have a computer-based behavioral simulation in the middle. The database in the cinema is the film that we see, although you could say that in making a film you’ve got a database of actors and databases of all of the other components to draw on; once the film is made, the database is the film itself. <br />New and Hybrid Forms of Documentary<br />Michael Murtaugh<br />Michael Murtaugh is currently working at the new Metropolis Science & Technology center in Amsterdam. There, he is investigating the design of networked presentation engines that, in the context of a particular narrative, link visitors in an exhibition space to audience members participating via the world wide web. Murtaugh holds a Bachelor of Science degree in Computer Science and a Master of Science in Media Technology, both from the Massachusetts Institute of Technology. From 1994 - 1996, Murtaugh was a member of the Interactive Cinema group of the MIT Media Lab under the supervision of Prof. Glorianna Davenport.<br />I worked under Glorianna Davenport at the MIT Media Lab. Glorianna herself was trained as a documentary filmmaker. She was a student of Ricky Leacock, who was very influential in American documentary, specifically cinema verité, and he taught and led a film/video group at MIT, which, as far as I understand it, was one of the few places you could really go to be trained in that particular style of documentary filmmaking in America. Glorianna trained in that tradition, and in the mid ‘80s the Media Lab was formed at MIT. The original goal of the Media Lab was to anticipate this overlapping of media industries: computation, broadcast, television, film, and print media. Media were coming together, and the Media Lab anticipated the need for this kind of cross-disciplinary study, looking at how the different pieces could fit together and create a new media. <br />So Glorianna became director. 
The film/video group then transitioned into what was called the Interactive Cinema group. I graduated in 1994 as a computer science major at MIT. I had already been working with Glorianna on a project she did, an installation space in the Media Lab, so my interests were mixed. My training was primarily computer science, but I always had an interest in film. As we liked to say in the group, its unique aspect is that it really draws on both traditions. We have a range of students from both computer science and film backgrounds, and it’s really a dialogue between the two.<br />Glorianna’s idea of training is one in which all people, regardless of their background, participate in the process of actually gathering the story. They really become journalists, out with the camera, shooting in the field, and they learn about editing to some degree. In addition, all students also do some sort of programming and try to actually build a kind of simple system around the content, so that they really get their hands on all sides of it.<br />The concept Glorianna Davenport has come up with is one of “storytelling systems.” To introduce it, consider the following diagram:<br />If you think about a traditional film process in a rough sense, it is a process of gathering a lot of raw material. The little balls with C’s represent gathered granules of content, or perhaps shots. They may have been edited into sequences, but in general the process is one of taking this raw material, constructing larger segments, building up scenes, arranging scenes into sequences, and finally putting it all into a final fixed format, such as a two-hour documentary. <br />In a storytelling system, the model changes and instead becomes, ideally, a very extendible database or archive of content, with some kind of description of the content. Between the material and the viewer experiencing the story sits the storytelling system. 
This really is the key to this style of interactive cinema: an editor in a box. What we’re thinking about is how we can provide to the computer a kind of database, not of totally raw material, but perhaps sequences or scenes, and how the computer can take the final steps of actually arranging the material, acting as a kind of high-level editor, putting the pieces together in a way that is responsive to the viewer. <br />The immediate gain is, first of all, that we have a content base that can grow. Shooting ratios in documentary are high. It is very typical to have ten times or more footage than what actually ends up on the screen. That may be perfectly reasonable, but sometimes, in the process of putting together the final cut, things get lost. An interesting character might have been dropped because he no longer really fits the direction the film has taken, or he had to be sacrificed for reasons of time. Frequently, one has several good stories that could really be developed, but because a single documentary must serve an entire audience, one cannot dwell on characters, objects or events that might be of interest only to a segment of the audience.<br />From the audience member’s side, we have the option of spending as much or as little time with sections of content as we wish. We are never in danger of learning more about penguins than we really want to know, as the old story goes. We hope that we can build systems that are really repeatable, that encourage audience members to return and experience the material in other combinations, other ways. <br />The other subtlety to this is something that Glorianna Davenport likes to call “evolving documentaries”, which is that there is no final cut. Nothing has to be finished. Now, with the internet and the possibility of an audience member repeatedly returning to a story, there is no reason for closure. 
Is it possible to have a kind of “story base”, a database that grows as the story grows? We look at how this kind of method can apply to very complex and ongoing stories.<br />The very first project that we worked on was a documentary looking at downtown Boston. There’s a very large project called the “Big Dig” - a kind of nickname - which will put a major section of the highway that goes through downtown Boston underground. It’s a long-term project that’s not scheduled to finish until the year 2004 or 2005, and it will cost something like 8 billion dollars. It’s the largest public works project ever mounted in America. What makes it interesting for this kind of evolving documentary investigation is that it is a story that doesn’t lend itself to just one static telling, because of its complexity - because of its bulk, really. In fact, there are many ways that you can look at this story. We interviewed different types of characters. Just to give you an example, there are residents of the neighborhood, like Nancy Caruso; politicians like Fred Salvucci; and Homer Russell, a city planner looking at how this area will be redeveloped, because once this highway is put underground it will open up a lot of new land in Boston. And there’s the former mayor of Boston, who covers the whole history of why this structure was built in the first place and where it’s going. Basically we have video clips that are typically thirty seconds to a minute long, and “thumbnail-size” still images that tell us what is in the image database. <br />The granularity - the length - of the pieces that we give to the system is very important, in that they need to be long enough to be coherent on their own, much like a single sentence or phrase in a story, but not so long as to resist being edited on the fly by the computer, the editor-in-a-box.<br />Around the edges of the screen we create four axes. I’ve already mentioned characters. 
We also have places - the North End is a neighborhood next to the construction that will be affected, and the Artery itself is the structure that’ll be taken down. Along the bottom of the screen we’ve put a timeline - beginning before the Artery existed, indicating when it was created, and finally looking to the future. And the final axis contains high-level themes, like streets, or tourism - a very big theme in Boston and a sub-theme of economics, which of course is a large theme. <br />The process consisted of Glorianna sending out a whole bunch of students - this is not a solo effort. I shot some of the footage that’s in here, but in general it was Glorianna who, each fall, gets a bunch of students, some of whom have never used a video camera before, and sends them out into the city to find some aspect that’s related to this larger story. The clips are then edited down into sequences and put into this system, where we describe each clip by some set of descriptors: character A talking about location B, having to do with the future and economics and fear. Each sequence is tagged that way. The system itself works very simply. Each keyword represents a kind of virtual bin: click on one keyword, and all the materials with that descriptor are placed in that bin. <br />This process is additive. If I click on Homer Russell and then click on the future as well, a new bin is created in which the largest clips are those about both Homer Russell and the future, and then there are other items just about Homer or just about the future. <br />The magic of this is that when you play a clip about particular themes, other clips about those same themes become activated. Basically this was made as a “proof of concept” of this kind of storytelling engine. In its current form you could see this as a kind of editing tool for people whose only experience is traditional non-linear editing. 
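The virtual-bin mechanism just described can be sketched as a simple scoring loop. This is my own illustrative reconstruction - the clip names, tags, and overlap metric are invented, not the Media Lab’s actual system - but it captures the additive idea: active keywords raise the score of every clip they describe, and the most active unplayed clip is proposed next.

```python
# Illustrative "editor in a box": clips carry keyword descriptors, and
# the system proposes the most active unplayed clip. All clip names,
# tags, and the scoring rule are invented for this sketch.

clips = {
    "russell_future":  {"Homer Russell", "future", "planning"},
    "russell_streets": {"Homer Russell", "streets"},
    "caruso_fear":     {"Nancy Caruso", "future", "fear"},
    "mayor_history":   {"mayor", "history"},
}

def next_clip(active, excluded, played):
    """Score unplayed clips by overlap with the active keywords,
    dropping any clip that touches an excluded keyword."""
    scores = {
        name: len(tags & active)
        for name, tags in clips.items()
        if name not in played and not (tags & excluded) and tags & active
    }
    return max(scores, key=scores.get) if scores else None

# Additive keyword clicks: Homer Russell + the future.
print(next_clip({"Homer Russell", "future"}, set(), set()))
# -> "russell_future" (it matches both active keywords)
```

Autonomous playback then falls out for free: keep calling `next_clip`, adding each result to `played`, and the system free-associates forward without ever returning to the same point.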
You can think of it as a very useful kind of plug-in to something like an Avid, where instead of putting your clips in single bins, you can actually assign meaningful keywords to them, and then have a system with a kind of metric for looking at a clip and proposing a clip to follow it. It’s based on a very simple kind of feedback loop. It’s editing by very simple association, in the sense that every keyword forms a kind of potential link between a clip that uses that keyword and every other clip described by that keyword. <br />One thing that’s interesting about this model, and quite different from a lot of other interactive models, is that in fact interaction is not required. You can actually put it into a mode in which it just plays automatically, meaning that the system itself is just going to keep playing, picking the next most active clip at any time, and in that way it just sort of free-associates. It also knows what clips it’s already played, of course, so it always moves forward. There’s none of the sense of coming back to the same point. The system is smart enough to know what’s available and what’s new.<br />The nice thing about this model, which is a kind of editor that’s able to work on its own as an autonomous storytelling system, is that if we wish, we can sit back and watch the story, giving ourselves over to the author as in traditional cinema. But it also allows viewers to step in when they want to. For instance, if I see something that piques my interest visually and I want to steer the story in that direction, I can automatically shift the focus of the database over to, say, clips about streets, because the clip that interests me is about streets. The other way that I can interact is by picking a keyword: if I’m interested in the theme of economics, for instance, by picking that word I can in effect bias the story in that direction. <br />One can also bias the way the keywords have an effect. 
For instance, I can in effect give each of the characters a negative polarity, so that instead of all the clips associated with a particular character appearing when I click, they are excluded, pushed away. One can also deactivate keywords.<br />There is another system that may be of some interest. It was really made as an end-user interface for the world wide web, something for people who are new to a story, and it can be found at <http://ic.www.media.mit.edu/JBW/>. The content is the biography of Jerome Wiesner, who was at one point president of MIT, and who co-founded the MIT Media Lab. Wiesner served as science advisor to President Kennedy in the ‘60s, and this piece was begun as a portrait of his long life after he died in 1994. <br />The way this interface works is that the viewer first sees faces and pictures in a grid, and each row of the grid represents a different period in Wiesner’s life, from his years growing up, to his World War II years as a researcher working on problems of radar at the MIT Radiation Laboratory, his time in Washington, his presidency of MIT and finally his later years. Also scattered through the grid are characters - typically the faces of the people we interviewed - and various themes from his broad and humane range of concerns, spanning art, education and science. In this interface we have three axes: time, character and theme. We can look at thumbnail-sized still images representing all the still and moving images, and we can also have a listing of all of the content: still images, clips and text. <br />In the Wiesner presentation, if we click on the keyword science, we can see within the database all the materials related to science. In addition, we get other very interesting information about other keywords that connect, through the content, to science. In fact, there is material about science throughout every period of Wiesner’s life. 
If instead I click on a theme such as nuclear disarmament, I can see the content and also that this theme is focused first in his Washington years and then continues through his later years. The interface leads the viewer down a path.<br />If we are interested, for example, in the theme of the Cold War, we can bring up Wiesner himself giving a speech about disarmament. Clicking on that theme while it is active tells the program to go ahead on its own, in as connected a way as possible. In this particular case, the program shows us a text document from the 1980’s about our perilous sense of security and the risks of the arms race as it was building up in the ’80s. <br />To stay with the Cold War example, the presentation gives us a very strong indication that nuclear disarmament was a major issue in Wiesner’s life. The point of this kind of interface is that it offers a unique entry point to Wiesner’s story for each viewer. Someone new to the story is perhaps more interested in Viet Nam, or disarmament, or the theme of education, or wants to learn about Wiesner’s early life. Each is a possible entry point into the story, based on what knowledge the viewer brings to it. Of course, as a presentation of history, what’s nice is that you’re never locked on any one axis. One could click along the time axis and in this way form a story that’s organized temporally, or choose “disarmament” for a story organized thematically, one that jumps back and forth between his Washington years and his later years. <br />We think this is a form that’s really useful for understanding something as complex as Jerry Wiesner’s life and the ways in which he touched on all of these intersecting, interconnecting historical themes of the twentieth century. We present an interface that is a kind of patchwork of different interrelated concepts and basically ask the viewer to do the work of thinking about how one piece influences another. 
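The way one keyword leads the viewer to other keywords “through the content” can be illustrated with a simple co-occurrence rule: two keywords are connected whenever some item in the database is described by both. The tags below are invented stand-ins, not the actual JBW database.

```python
# Toy keyword connectivity: a keyword links to every other keyword
# that co-occurs with it on some item. All tags here are invented.

items = [
    {"science", "Washington years"},
    {"science", "education", "MIT presidency"},
    {"disarmament", "Washington years"},
    {"disarmament", "later years"},
]

def connected(keyword):
    """All keywords sharing at least one item with `keyword`."""
    linked = set()
    for tags in items:
        if keyword in tags:
            linked |= tags
    return linked - {keyword}

print(sorted(connected("disarmament")))
# -> ['Washington years', 'later years']
```

In an interface, these connected keywords are exactly what lights up when the viewer clicks, suggesting where the story can go next.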
As someone who has gone through the content several times, I can report that it really is an exciting process, because you do begin to see things that enable you to understand him as an enormously complex character, and you can understand the nature of his interests in disarmament, human rights and the turmoil of the 1960s, among other things. <br />Once we get something of this sort on the internet, with these forms that enable the database to grow, can we also actively engage the audience in participating in contributing to the database? Glorianna Davenport’s idea was that someone out there on the internet might be able to contribute a story relevant to the life Jerry Weisner, and we will add it to the database. The challenge of building a “society of audience,” beginning a widely-distributed connected to the story over the network, is fascinating How do we foster that? Is it a new story form? Is it even a possibility? These are some of the questions we are trying to answer.<br />Some Short Notes on Developing a Digital Curriculum <br />John Collette. Head of Digital Media. Australian Film, Television and Radio School. <br />These notes are designed to address the impact of digital technology on the development of traditional film and broadcast media. Areas regarding the "
delivered by computers, such as interactive discs and the internet are not the key focus. New Media developments will have a technical synergy with digital developments in film rather than one based on the creative realisation of content. <br />Overview. <br />The underlying media that comprise film and broadcast products are becoming digital. As computers become better at processing media types and the cost of computers falls, this is going to increase. <br />Media such as text, sound and still images have all seen their formats converted to digital ones, and most professional applications for these media are handled digitally at some stage. Film is still the preferred acquisition medium for imaging in still photography and cinematography, however in still photography, the commercial handling of images is entirely digital - magazines, newspapers, advertising and publishing of images are all digital. This process is possible in film, and although usually only certain elements of films are handled digitally a this point, the amounts are increasing all of the time. In the early 1990's, a "
special effects film"
was a rarity, and these had about 10 - 20 digital shots in them. Today, these films are very common, and they incorporate over 500 shots on a regular basis. Even Disney animation is completely digitally mediated, and the upcoming feature Atlantis has over 1000 "
computer graphic shots planned. <br />If we look at broadcasting, the transfer to digital formats such as serial digital formats based on the CCIR601 specification in the professional arena and the new DV formats in the consumer arena is almost complete. These are the formats we have now. <br />What is under question at this point is the ability to perform different types of imaging tasks, for still and moving images in the filming process. For some time, low resolution digitised references to film material have been edited on Avid and Lightworks non-linear systems, and this has been the subject of a lot of attention. What is really important is the ability to use material that will appear in a finished program and manipulate it digitally - the material in an Avid is a "
for the purposes of negative matching, (although a good avid system with AVR27 or above can handle acceptable broadcast images). <br />So, there are four basic processes that we can do with images. Paint or paint onto, edit, composite (layer) and "
in 3D the appearance of "
imagery with 3D software. For schools, the question is how far to develop these processes - do you provide a full "
production suite, with the difficulties of maintaining the most complex software and hardware, or is the objective to allow the use of digital process as a creative tool that will provide access to the metaphors of digital production? <br />The possibility of developing a facility in the "
style is a viable one for most schools. Personal computer workstations are able to handle images at a rate that will allow the production of broadcast quality material or even film resolution elements for productions. The quality of the material is excellent, and the process that underlies the development process is the same no mater what "
you utilise. This is important, because the role of the school is to produce a professional that can solve creative problems with the resources available to them. Filmmakers are problem solvers, and the emphasis should be on the development of process rather than on "
. The process is the same on most software, which throws the emphasis back to the schools: to focus on the creative possibilities and development of the medium, and to produce a quality creative environment that challenges the students. <br />The configuration indicated is a Macintosh system. This could be replaced with a Windows NT system with the same basic features. The addition of devices to acquire images, such as a scanner or a video input card, and some storage options, such as fast hard disks for video storage or a CD-ROM burner for storing finished images, is essential to realise the potential of a workstation. Something like a CD-ROM burner can be shared by a group of machines, but the one thing you can never have enough of is disc space for holding images! <br />Most of the processes require the use of a frame or a series of frames in digital form. The basic size of a video frame in PAL format (768 x 576 pixels) is about 1MB. This will fit on a floppy disc, but using frames of any duration starts to increase the demands on the available storage. A film image can really be any resolution; however, the usual format is 2048 x 1536 pixels for an Academy frame. We have found that 1024 x 768 pixels is often quite sufficient for the purposes of shooting from a monitor, and this resolution is the size of a good-resolution 17" computer monitor. <br />Schools simply cannot afford the expensive
equipment for putting film into and out of 35mm formats for digital work. <br />Shooting from a monitor is a good solution - the resolution is acceptable, although some tests need to be done for different materials, and to ensure that the colour does not shift unacceptably. To put work like this onto film you must work backwards from the desired outcome: what are we trying to communicate? What resources are available? What limitations are imposed by those resources? This is, in fact, like every school's process, so it isn't too difficult to make the jump; we face these same questions every day! <br />This is an example of such a system at CalArts - thanks to Myron Emery for allowing me to take the photo.<br />Some things, like titles that use digital movements of type or images and that layer these elements, are really easy to accomplish using the monitor as a recording device. Elements that require film-resolution outcomes are not, as these require special colour spaces, scanning systems and, unfortunately, work on the most expensive software and hardware. Again, these are constraints that schools are used to working within. <br />(Note: to make a movie for shooting from a monitor, produce a QuickTime movie of the sequence you need at a resolution of 1024 x 768 pixels. Do a few seconds at a time. Avoid using compression options in the QuickTime movie. In the player
application, open the movie and select the playback option that holds the movie on each frame until you press another key. The spacebar or the forward-arrow key will move the movie one frame forward, so with the tripod and camera set up in front of the monitor, shoot one frame at about a 1/4-second exposure, move forward one frame, and so on. It's easy and it works, although the students must do some exposure tests first, and they get tired of being a human intervalometer pretty quickly!) <br />Because the material for this type of
process has to be chosen carefully, the material is likely to be titling, special shots that have an obvious digital treatment, and possibly short projects where the entire piece is put through this type of treatment. This would place some limits on the length of the program, according to the disc space available and the stamina of the student for recording sequences back from the screen. A project could be completed in 1-minute segments
- especially with software like After Effects for compositing, the opportunity exists to render only certain frames of a project at high resolution, and to preview a test of a project at low resolution. Breaking a piece down into segments like this allows for something as long as you like, really. <br />What this type of opportunity offers schools is the expressive possibility of design on the screen: the combination of text, images and layers that we see so often in broadcast work like commercials becomes available for experiments in narrative and filmic forms. While we have long been used to the dominance of the language of temporal montage through editing as the primary grammar of film, this new area implies a type of spatial montage that admits a screen where images and meanings form a polyphony of meaning. This is by no means the only possibility, but digital technology allows the exact production of elements that have existed from Melies to the Surrealists but were difficult to control and execute. The English filmmaker Peter Greenaway provides an expanding language of this possibility, although his films rely on a visual form more closely related to fine-art painting than to narrative film - "what you see is more than what you get", in the words of Bob Rosen. The Baz Luhrmann film adaptation of Romeo & Juliet (1996), one of the best, showed an instinctive utilisation of these possibilities in the execution of a classic narrative text. This new language offers filmmakers the possibility of driving the medium into new genres that still hold the core creative values of the art form as we know it, while allowing new expressive possibilities. <br />So how do we teach this? <br />As schools have different models for their curriculum and different student profiles, there is no exact way to say how the digital must be incorporated into a curriculum. At the AFTRS we have both generalist and specialist courses. Just as we train cinematographers, directors and editors, we also train digital media specialists. These students require different levels of facilities than, for example, producing students. What all students need to understand is the underlying opportunity of this aspect of the medium: whether a frame of film is recorded on celluloid or stored as a 10-bit Cineon file, we are recording a type of information with various technologies, and some idea of this process is essential to embrace the full creative possibilities of the medium. <br />We feel that it is important to have an overview for all students, and that some specialists, like cinematographers, should do some basic digital imaging and further specialist training in blue-screen photography. Our process is aimed at understanding the underlying process of the area, rather than a piece of software. If we teach Photoshop, it is with the intention of revealing the process of digital imaging, rather than learning to press buttons in Photoshop. The emphasis is on image resolution, colour resolution, colour space, digital colour theory, cut-and-paste operations, colour correction (using gamma curves and histograms - the Levels and Curves
controls of Photoshop), building masks and stencils, and controlling the operation of filters. These techniques not only teach how to use the software but, more importantly, how to solve imaging problems in the digital domain, and the student can then pick up any digital paint system, such as Paintbox, Matador or the paint module in Flame, very quickly. <br />The emphasis must be on project work, with a foundation of core skills put into place. The training on a system should focus on how to manage media, where things should be stored, and how the software is processing the media (not "push this button and this happens!"). After outlining the functions of the software, short exercises that utilise a technique should be given to provide repetition and reinforcement. These should also provide a quick sense of creative satisfaction, because they are important in allowing the student to get some positive feedback and a sense of excitement and achievement. <br />Once the basic operations of the software are understood, use it in a production! We always work back from the idea, so we start with a production problem, something we would like to see on screen, and discuss how to achieve this on the system. This means that we don't let students play with the software without a clear goal of what they want to achieve - experimentation is really important, but work for assessment often suffers if it is produced this way. So we ask them to outline the sequence: a shot, a title sequence, a short one-minute piece. Then we decide what elements are needed - background plates, still pictures, different shots, whatever. We list these and prepare storyboards, and then we talk about how to do this in the software: what we need to shoot and scan in, where it is stored, and some idea of the process and the basic timings of the material. Occasionally, we even make the storyboards in Photoshop, using stills, to give a clear visual idea of how the thing will look on the screen. <br />So that's it. We tend to use non-linear systems like Media 100 instead of Premiere, but this is a question of budget and choice. Otherwise, the applications suggested here are great for most things. <br />Networking: <br />The last thing that is a good idea to develop is a production network. This is easy to achieve these days, as fast 100base-T Ethernet is available on basic PC-level systems. Our production network is designed to allow the movement of files, sounds, images and sequences to different work environments in the school for production. You need to understand what file types the application you are using will read - TIFF, TGA and PICT files are the most common, and we also use SGI/RGB files and Cineon filmstrips for really advanced work. We use AIFF files for sounds. <br />The 100base-T solution is easy to implement, but is not especially fast
. We looked at really fast fibre-optic solutions like Fibre Channel, ATM and FDDI, but these cost $5000+ to put one workstation on the network. Not an option! (Although if they become more mass-produced, the price will, of course, fall to the level of PC networking. This may happen in the next three years.) Students utilising these networks need to understand how to connect to a remote device or disc, and NOT to leave their files everywhere!! If you consider this, it can be a really effective way of creating an integrated production environment. <br />Project Report: Curricular Consequences of the Digital Domain<br />Rod Bishop, Australian Film Television and Radio School<br />When David Puttnam referred to his time as a Hollywood executive, he suggested, in no uncertain terms, that he was instructed to make films for fourteen-year-olds from Orange County. Perhaps they kept saying, “But will it play in Orange County, David?” I’d like to reflect upon the cultural differences that exist within the global film industry. <br />Five years ago I produced and co-wrote a film aimed at 14-year-olds from Orange County. It was a horror-comedy film, Body Melt, made in Melbourne, Australia. We wanted to make a film that set up an internal dialogue with the tradition of horror films. In the plot we have a Caligari figure who is experimenting on a group of residents in a suburban environment in Australia - very much like the suburban environments you find in America. His patients do not know they are the subjects of his experiments. By pretending to be a doctor, he is able to administer the drugs to them. The object is to create a super-person, or super-race. The Caligari character’s experiments on the unsuspecting suburban residents go horribly wrong, and they all start to melt. <br />In the film we had 52 prosthetic effects, each of which reflected various films from the horror genre, especially The Hills Have Eyes and the Freddy Krueger films.
Our object was to make a film that would - as we say in Australia - scare the shit out of people. But it was also to be a film that aficionados of the horror genre would find intellectually stimulating. In casting the film, we consciously cast Australian soap-opera actors – actors who are well known in Australia and quite exceptionally well known in other parts of the world. These actors lined up to be in the film. They couldn’t think of anything better than being in a film where their soap-opera personalities literally melted on the screen.<br />The first film festival in the world that asked to screen the film was the Sitges Fantasy and Horror Film Festival outside Barcelona, Spain. The festival has been going for 25 years, and for fans of the horror genre it is Mecca. Sitting through the first few days of the festival, I found the almost unbelievable, insatiable appetite of the Spanish for gore and horror and terror not only amazing but quite encouraging - particularly since our film was coming up. <br />Like all movies, horror films need a good “hook” in the first ten minutes to keep the audience alert and awake and interested. The audience responded on cue and I thought, “Beauty, mate, this is really happening; these people are getting into this movie.” As it progressed and the other special effects appeared, the audience continued to respond. But when the first well-known Australian soap-opera star, from an Australian series called Neighbours, appeared, everything changed. The entire audience erupted in applause. They knew the character’s name from Neighbours and they yelled it at the screen: “Har-rold…Har-rold”. A whole row of people stood up and started bowing at this character on the screen. From that point on, the audience was no longer interested in the horror stuff. All they wanted to see were appearances by their favourite Aussie soap stars.
I knew that Neighbours and other Australian soap operas were very popular around the world, but I had no idea they penetrated to this kind of audience. The point of this story is to relate how we set out to make a horror film “to scare the shit out of people”, but this particular target audience was only interested in recognisable soap-opera stars. Our film for the 14-year-olds from Orange County ceased to be a film for 14-year-olds from Orange County. It transcended its genre and connected to an audience in a totally different way. When David Puttnam denigrates the notion of making films for 14-year-olds from Orange County, his examples do not take account of all the cultural differences that occur when those films are screened under widely different circumstances. <br />Now, to the curricular consequences of the digital domain. Looking at the digital domain, it seems that the future for production houses, for studios and for film and television schools is a production intranet - where acquired materials, whether they originate on film, as electronic images or as sound, will become digitized and held in a central database or asset-storage system inside the organization. They can then be streamed out as image and sound files to workstations, such as those made by Silicon Graphics and Avid, for production purposes. Like any intranet, this will be a discrete system, accessible only to members of the organization. The production intranet, however, suggests that in future we will move away from training students for “film and television” towards training students for “the big screen”, “the small screen” and “the computer screen.” <br />Employment in the future will depend on a student’s ability to manipulate digital images and digital sound among various formats. In other words, digital multi-skilling at that level may become the key to a successful career.
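The production-intranet idea described above lends itself to a simple illustration. The following is a minimal sketch in Python; all class, method and workstation names are hypothetical, invented purely for illustration, not any real asset-management API. It shows the two properties the text names: acquired material of any origin is digitized into one central store, and the store streams only to workstations that belong to the organization.

```python
# Hypothetical sketch of a production intranet: a central store of
# digitized assets, streamed out only to member workstations.

class AssetStore:
    def __init__(self, members):
        self._members = set(members)   # workstations allowed on the intranet
        self._assets = {}              # asset id -> (kind, payload)

    def ingest(self, asset_id, kind, payload):
        # Film, electronic images and sound all reduce to the same
        # digitized form once they enter the store.
        if kind not in ("image", "sound", "sequence"):
            raise ValueError(f"unknown asset kind: {kind!r}")
        self._assets[asset_id] = (kind, payload)

    def stream_to(self, asset_id, workstation):
        # Like any intranet, the system is discrete: workstations
        # outside the organization are refused.
        if workstation not in self._members:
            raise PermissionError(f"{workstation} is not on the intranet")
        kind, payload = self._assets[asset_id]
        return {"workstation": workstation, "kind": kind, "data": payload}

store = AssetStore(members=["avid-1", "sgi-2"])
store.ingest("shot_042", "image", b"scanned film frame bytes")
job = store.stream_to("shot_042", "avid-1")
print(job["kind"])  # image
```

In practice the central store would be a database or dedicated asset-management system, and the workstations would be the Avid and Silicon Graphics machines mentioned above; the point of the sketch is only the shape of the architecture, not its implementation.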
<br />Turning to the curricular consequences themselves, I think we must decide for whom and for what reasons we are designing curricula. Most of us are teaching students in certificate courses or undergraduate courses or post-graduate courses, with the aim of seeing them produce work and gain sufficient experience to enter the industry. Some of us are also involved in providing retraining and re-skilling to industry personnel. <br />The process by which students are selected for our schools has changed. I’ve spent the past 20 years selecting students for film and TV courses. Eighteen of those years have been at the low-end, in a university environment with very poor resources for film and television. For the past two years, I have seen this operate at the high-end, the AFTRS with its modern purpose-built facility and $A25m of equipment. <br />About 15 years ago, the applications for these courses began to be accompanied by VHS tapes instead of the usual Super 8 or 16mm films. Very quickly, about 90% of the 300 or so applications that I’d receive each year arrived on VHS tape. Ten years ago I noticed a significant change in the quality of this material. Applicants who were 17 or 18 years of age were starting to show signs of having learnt the basics of film language. These applicants were watching their favorite movies on VHS tape, then using slow-motion devices on VHS players to study shot selection, editing techniques, the use of pans, tracks and dollies, and the use of various lenses for specific shots. The applicants would then use various recording devices – whether Super 8 or video – to construct their own films and videos before applying for film and television courses. As camcorders and now digital cameras have become available, many adolescents obviously have the opportunity to use increasingly sophisticated equipment at home.
As computer hardware and software systems drop in cost, ever more powerful image-creation and sound-creation tools are becoming available in the home environment. In five years’ time, a lot of what we currently regard as “high-end” will in fact be low-end. In other words, a lot of the image-processing effects we’ve marveled at quite recently will be available in the home.<br />This raises three interesting issues:<br />1. If potential students are starting to manipulate images, shoot material and produce their own films and videos from the age of 12 or 13, and if they’re capable of producing something with the production values we would normally expect to come from a film or television school, the question arises as to why they would want to go to a film school at all. Of course, there will always be people who’ll want to go to film and television schools, but can we guarantee they will be the best applicants? If filmmakers develop that young, will they bother to go through a tertiary education system or a conservatory-style film and television school before they enter the industry? <br />2. The second issue that arises is “development”. Are students who get access to increasingly sophisticated equipment at a much earlier age generally just copying their favorite sequences from various films? What effect does this have on content and creativity? My own experience suggests that sheer technical quality is increasing with younger age groups, but character development, plot development and dramatic development are not keeping pace with the technology.<br />3. Then there is the issue of new forms of distribution: the opportunity, sometime in the near future, to post films and videos on the internet so that they are available to other film and television schools or to production houses around the world.
The internet will provide student filmmakers with a whole new avenue for raising their hand and saying, “Hi, I’m a talented filmmaker or a talented television maker, and here’s my material. Do you want to see it?” The ability of young filmmakers to put their material in front of other people all around the world will increase exponentially.<br />Returning to the kinds of films and videos submitted to film and television schools as part of the application process, we find that in Australia the young male applicants often imitate their favorite sequences from Hollywood films, or they reproduce what they regard as the “cool” work of independent directors. These “cool” independent directors can come from any country. They are just as likely to be Wim Wenders as Quentin Tarantino as Hal Hartley or Luc Besson. The young female applicants often create the same sort of imitations of the “cool” work of the independent directors, or they attempt a kind of avant-garde, psychological film in the style of directors such as Maya Deren, Marguerite Duras or Jane Campion. With the exception of those influenced by Jane Campion, the films from these female applicants are original, in the sense that they have usually never actually seen the work of a Maya Deren or a Marguerite Duras before reaching a film and television school. <br />But the striking feature of the films, whether they come from males or females, is their global focus. They seldom attempt to sell a bright, distinctive Australian culture. Later in their careers, when some of them become quite successful feature-film directors, they often produce unique films that critics around the world refer to as “quirky” – it’s become a kind of hallmark of Australian cinema at the moment. That’s one of the reasons I have real problems with David Puttnam’s criticism of the Hollywood studios wanting films for the 14-year-olds from Orange County.
First of all, what’s wrong with making films for 14-year-olds from Orange County, particularly since millions of other people, who aren’t fourteen and who don’t live in Orange County, obviously want to see them? <br />These students who are making films with that kind of global focus seem to have a more realistic grip on their career paths than do many of their teachers. They are responding to global forces that I think their teachers ignore. Rocket Man, an AFTRS student film, is but one example. It was consciously made for 14-year-olds from Orange County. The students who made it want to work with Hollywood-scale budgets. Their film is a deliberate reaction against the European art-film style, and I use that as a generic term – whether those films come from Europe, Australia, America or anywhere. The students who made Rocket Man even went to the point of including American accents in the film rather than Australian accents. If they had done that 10 or 15 years ago in Australia, they probably would have been expelled from the film school. <br />There is a reluctance in many film and television schools to embrace the spectre of digitization. There are a couple of reasons for this. The first lies in the history of technological developments in cinema, where there hasn’t really been a major change since the introduction of sound. Everything else – wide-screen, Dolby, whatever – has been an extension of existing production processes. Therefore, generations of filmmakers and generations of teachers in film and television schools have grown up without ever having to face a profound change in their teaching or in the ways their students produce work. The second reason, I think, comes from what seems to be a link between digital illiteracy and a certain arrogance about art cinema.
Those “true believers” in art cinema are also set against digital technologies by seeing the high-end digital effects in television commercials and overtly commercial Hollywood product.<br />I will summarize the results of the survey that we conducted as part of the “Curricular Consequences of the Digital Domain” project. The results that appear in the full report seem extremely encouraging. But it must be remembered that the results represent the responses of only 44% of the membership of CILECT, and that 44% is likely to be the people who have already started to equip, or are already well equipped, with digital technology. <br />However, within the 44% that responded, there is a very healthy degree of adoption. <br />61% of the schools have digital curricula oriented to film production.<br />57% of the schools have curricula oriented to digital television production.<br />55% of the schools report that they have courses oriented to the new media. <br />The majority of the curricula fall into two categories: non-linear editing and image processing. <br />In the area of non-linear editing, 83% of the respondents have non-linear editing as part of their curriculum, with Avid as the favored software. <br />All of the North American schools that responded teach non-linear editing, as do 90% of the Western European schools and 80% of the Eastern European schools. <br />Some 77% of the schools have digital post-production facilities.<br />Teachers of non-linear editing are generally locally trained, and experience was thought to be more important than formal qualifications in this area. <br />About half of the schools responding have curricula for computer animation, and about half have curricula for digital special effects. <br />There were 70 different software packages in use for visual effects and computer animation. <br />The most frequently used software comes from Adobe, Macromedia and Softimage.
<br />There was relatively little use of high-end software/hardware packages - only 4% of the 44% who responded were using high-end tools like Matador, Flame or Cineon. <br />The majority of the teachers of image processing were also locally trained and, again, experience was favored over formal qualifications. <br />Looking at the implications of our study for curriculum models, it appears that software training almost always occurs in the form of short courses or modules of up to two weeks in duration. Courses in editing, animation and image processing may be up to a year in duration, and film, television and new media courses generally last more than a year. <br />The data suggest that there are at least four curriculum models:<br />The short-course model for intensive training - usually in software. <br />Craft courses in non-linear editing, image processing and animation. These appear to be stand-alone or separate courses within the school’s curriculum - generally a year in length. <br />Integrated courses, where image processing, non-linear editing and new media are integrated into existing courses - current curricula have merely absorbed digital training. <br />Separate courses, i.e. new media courses separate from traditional film and television training areas. <br />Digital technologies have other ramifications for what we now know as film and television schools. Let us return to the issue of the increasingly technically sophisticated films and videos that may be produced with home equipment, and how this may prevent the best applicants from applying to our schools in the future. The one big solution for this potential problem, and a number of others, is Interactive Distance Learning: putting film schools on-line. I think we should be looking at how to better use the Internet, the World Wide Web, the whole of the on-line world.
We should be considering breaking down the walls around our schools through distance education, and looking at ways to teach students in any part of our own countries, or in any part of the world, by offering courses on-line. <br />The AFTRS is planning an experimental on-line course to be conducted in Melbourne, Australia. At this stage, we believe that a lot of our teaching programs are not suitable for on-line delivery. They are simply too technically based for today’s software and data-delivery systems. But this will change in the future. The production intranets mentioned earlier will also be important for streaming video and sound files outside of educational institutions and into remote locations for distance education. In the experimental model for the Melbourne course, we expect to use a selected group of about 15 students as an experimental group, including at least one person from another country some distance away from Australia. We will equip the students with the necessary hardware and software to work in a home-office environment. Their assignment would be to create product for the internet, for on-line delivery, and we would expect that product to be “dramatic” in content.<br />To launch this course, we will need the assistance of industry, particularly the telecommunications companies. The home-office equipment would be connected by cable modem to a broadband network. In Australia, at the moment, we have experimental broadband networks run by telephone companies. We would build some limited face-to-face interaction into this process, but essentially the course would be interactive, from remote geographical locations. <br />If the experiment is successful, we will begin looking at more grandiose ways of breaking down the walls around our school and going on-line. One of the advantages would be the ability to pick up those talented potential students, those who are already working with sophisticated equipment in the home environment.
They could then participate in the School’s activities in a way that will add value to their current work. <br />The Teaching of “New Media” in a School of Film and Television<br />Robert Rosen, Chair, Department of Film and Television and Director, Film and Television Archive, UCLA<br />(What follows are notes used for my presentation to the CILECT Congress at Ebeltoft. In outline form the arguments are unavoidably elliptical, possibly unclear, and unintentionally dogmatic in tone. For that I apologize. My hope, however, is that even in this form they may help to encourage dialogue about the field. - R.R.)<br />Digital media technologies in a school of film and television can be developed in two different directions: (1) as an assist to traditional film and television production (non-linear editing, digital sound, pre-visualization, production planning, special effects, etc.); (2) as a new field for creative activity (works such as interactive film and television, CD-ROM and DVD productions, website productions, the creation of composite documents, digital publishing, etc.). The Department of Film and Television at UCLA is involved in developing both models, but this presentation will focus primarily on the second.<br />DIGITAL MEDIA ARE REVOLUTIONARY (NOT EVOLUTIONARY) TECHNOLOGIES<br />Digital media in the 1990s are where motion pictures were in the 1890s. You know when you see it that the world will never again be quite the same, but you have no clear idea precisely what form the creative product will take or what its long-term social influence will be.<br />There is a need to struggle against a historically documented tendency to force genuinely new media forms into the paradigms of past media: film as nothing more than a way to convey theater or vaudeville; television as radio with pictures or movies on a small screen; interactive media as basically linear, unidirectional storytelling with spurious possibilities for “interaction.”<br />Five distinguishing characteristics of digital media technologies open possibilities for basic paradigmatic shifts in the creative product.<br />Easily combinable: the convergence of media forms. Digital technology reduces a dizzying array of separate media forms to a single common denominator, in a form that permits them to be readily combined in new and innovative creative works.<br />Totally transformable. Seeing is no longer believing when images can be artificially created, transparently altered or irrevocably metamorphosed.<br />Perfectly and infinitely reproducible. A dream for preservationists and a nightmare for copyright proprietors. <br />Genuinely interactive. A revolutionary effacement of the time-honored distinction separating spectator and spectacle. <br />Readily and economically transmittable, as stand-alone product, by fiber, by satellite, on-line, etc.<br />My advice to Intel on how to increase the usefulness of computers was to invest major dollars in basic research on the aesthetics of new media, and to use film schools as sites for open-ended experimentation comparable to the garage bands
that provided the creative impulse for rock and roll. Consider new media to be a complement to more traditional media forms -- neither a replacement nor simply an adjunct.<br />A few of the many attributes of digital media lay the groundwork for an alternative aesthetic paradigm:<br />New spectator sensibilities that differentiate me from my students: a generational shift.<br />My cognitive mapping of experience is basically analog, whereby sound or time, for example, is represented through the metaphor of space (half way round the dial is just about the right volume). My students' basically digital cognitive mapping of experience is that sound or time is represented by the on/off capabilities of buttons. These sensibilities translate into a taste for alternative forms of narrative.<br />My comfort is with narratives that provide reasonably clear guidance on the logical, temporal and spatial trajectory of a story, a reassuring intuitive sense that the work somehow “hangs together” structurally, and that there are adequate syntactic cues that ensure stable spectator positioning and coherent transitions in time, space and point of view. My students’ comfort is with narratives marked by discontinuity in logic, time, space and point of view; their capacity to consume a work in its parts (a line of dialogue in Repo Man, for example, without any sense of what makes the film as a whole meaningful); and their sense of film as a gestalt, an unanalyzable experience, a complex pastiche, where the whole is experienced intuitively as somehow greater than the sum of its parts.<br />New aesthetic predilections in story-telling fostered by digital technologies:<br />From one hundred years of story-telling based on the assembling of elements (one reason that editing was so basic to the evolution of film language) to story-telling based on flow, metamorphosis and transformation.<br />From compositional strategies dependent on lighting, optical technologies and the organization of objects in space to composition as a new form of editing within and into the frame -- the textured layering and interaction of images, both real and invented.<br />From linear story-telling leading to closure to non-linear story-telling and the acceptance of the open text.<br />From the spectator as a passive consumer of narrative spectacle to the spectator as a participant in the spectacle -- as actor, co-creator, critic, etc.<br />From a real movie theater (or its equivalent in the home) to the virtual theater of cyberspace -- from passive individual viewing to interactive collective participation.<br />From access to sophisticated story-telling sound and image technologies restricted to corporate entities to broad-based, readily accessible image-creating technologies: the empowerment of ordinary people to tell stories using sounds and images as a routine aspect of daily life.<br />THE UCLA EXPERIENCE WITH TEACHING 'NEW' MEDIA<br />The good news: multiple currents reflecting a plurality of open-ended, broadly diverse directions for a newly emergent field (digital arts, digital publishing, digital library/archive, computer animation, digital instructional technologies.)
The bad news: conflicting claims of hegemony within the field of new media and between new media and the use of digital technologies for traditional media.<br />In the field of new media, UCLA's curricular activities fall primarily into five areas:<br />1. The Laboratory for New Media. A dozen courses housed in a Macintosh-based laboratory, organized according to principles appropriate to an emergent media/art form: (a) creative empowerment for beginners through the use of low-end, user-friendly technologies; (b) convergence of disciplines by creating courses and a work-space open to all areas of the program, including directing, producing, writing, animation and critical studies; (c) an open-ended, non-dogmatic and experimental approach to digital visual imagery and new narrative. All projects are produced and exhibited entirely within the digital domain.<br />2. The Curricular and Research Laboratory. Courses organized by the Critical Studies program in conjunction with the Film and Television Archive that focus on digital publishing (including an on-line scholarly journal), the uses of interactive technologies for critical analysis of media, and the creation of composite documents using archival materials.<br />3. The Animation Workshop. Macintosh- and Silicon Graphics-based laboratories housing specialized courses on interactive media and computer animation.<br />4. Archive Research and Production Laboratory. A primarily PC-based laboratory sponsoring diverse activities including: the design and production of original CD-ROM products for commercial distribution, using moving image materials held by the Archive; faculty research on the uses of archival materials for the creation of composite documents; digitizing of archival holdings and servicing of online access for educational purposes; and the maintenance of the Archive web site.<br />5. The Hyper-Media Studio.
The creation of an experimental studio and faculty research facility that focuses on the interaction of live performers with intelligent software agents.<br />CORE LESSONS LEARNED FROM OUR NEW MEDIA ACTIVITIES<br />1. Humility.<br />There is no magic software or hardware that will be “the answer” for the long term, given the dizzying pace of change. Don't over-invest in permanent facilities.<br />One needs a principled commitment, both in the development of curriculum and in the operation of new media laboratories, to experimentation, playful exploration, and an open-ended quest for new forms.<br />2. Process over product.<br />A film school's goal in promoting an emergent art form is to create creative people, not product.<br />Creative empowerment through the use of low-end technologies is more important pedagogically than state-of-the-art technologies.<br />Critical discourse and debate are of pivotal importance to the creative process when the aesthetic paradigm is itself in question.<br />3. Pluralism.<br />Student creativity and achievement in the area of new media production need not be limited to directors and animators; it is as likely to occur among students of writing, critical studies and producing. New media can serve as a point of convergence for a plurality of disciplines.<br />Faculty research and creativity can and should take multiple directions -- aesthetic experimentation, digital library and archival issues, electronic publishing, etc. Avoid capitulating to the hegemony of a single approach in a field where everything remains to be discovered.<br />[DEMONSTRATION OF A CASE STUDY IN CD-ROM PRODUCTION: EXECUTIVE ORDER 9066: THE INCARCERATION OF JAPANESE AMERICANS DURING WORLD WAR II]<br />Additional considerations:<br />The pros and cons of corporate partnerships.<br />Outreach from the University to the community.<br />Practical and principled problems in producing a professional CD-ROM in a university context.<br />The conceptual challenges of composite documents.<br />Mediating scholarship and popular accessibility.<br />Toward a new form of interactive documentary.<br />