Walter Wright and his Amazing Video Machine

Publication Type: Journal Article
Authors: Hagen, Charles; Laddy Kite
Source: Afterimage, Visual Studies Workshop, Volume 2, Issue 10, Rochester, NY (1975)
Keywords: people-text
Abstract

Workshop by Walter Wright at Portable Channel; description of the programs at the Experimental Television Center; discussion of video synthesizers

Full Text: 

Walter Wright is one of a growing number of video artists working with synthesized imagery--video which has been electronically transformed and manipulated to produce a wide variety of visual effects. Wright, who is currently artist-in-residence at the Experimental Television Center in Binghamton, N.Y., has for the last several months been traveling throughout New York state with a portable synthesizer, demonstrating the machine and giving workshops in its use.

Wright was in Rochester to conduct one such workshop, sponsored by Portable Channel, Rochester's community video center. During the course of the week-long session, Wright, assisted by Chuck Heuer of Portable Channel, explained the workings of the synthesizer and helped participants explore its capabilities for processing images. The workshop culminated with a live public performance by Wright using the machine. We interviewed Wright at the Portable Channel facilities during his stay in Rochester.

Q: How did you come to be here at Portable Channel?

WW: This is part of a New York State Council on the Arts program, which is also part of the Experimental Television Center's program. It's an individual project within the television center. My particular project is taking the synthesizer around to various locations and letting people play with it, just for exposure at the moment. It started off just as a workshop and a kind of "hands on" experience.

We've been going to public schools, community colleges--literally anyone who would take us. And now there are more people who want to take us than we can deal with, so we're going to restructure it next year, I think, and deal mainly with the SUNY (State University of New York) system. We run on a very small amount of money, which is all eaten up at this point. We got to Portable Channel through the kind of rural communications network that's been set up between various video centers.

When we conceived of this project and tried to find places to go, we just phoned up a bunch of people and said, "Can we land on you for a few days and do a workshop?" Most of them said, "Great! Sure!" and we scheduled a date. Some of the others came from meeting people at conferences and things like that. Like the high school people--they'd say, "Well gosh, we've never seen anything like this before. I think the kids would be interested. Why don't you bring it up for a day?" Some of the community college dates have come through some of the people who are teaching in the college, and working independently in video.

Q: What's been the response of people that you've been able to introduce to the synthesizer?

WW: Well, most of them respond very positively. A lot of people can't figure out what to do with it--"It's very pretty, but what does it do?" or "Why do you do it?" or "What does it mean?" or "What is it for?" --that's kind of the standard response. But most people respond very positively to it. I've had very few negative responses, but then that may just be due to the fact that I went there because people wanted me there anyway.

Q: You mentioned that this system is a portable one, and that you have one at Binghamton that is more or less studio oriented ...

WW: It becomes studio oriented as all the cables become patched into it. It's on wheels, and we have moved that system around--we took it to Albany with us. Both systems are essentially portable, but we always have to have at least one at Binghamton for the program that's going on there.

Q: What exactly is the program at Binghamton, and who takes part in that?

WW: We started off as a public access facility in a way much like Portable Channel, with the exception that we just dumped stuff on people. We provided individual instruction, but no kind of formalized workshop scheme. No money was paid into the program; we had no income. We just dumped portapacks out in the streets so people could tape record themselves playing billiards in the basement in the nude or whatever they wanted to do. We gave out tape and the tape disappeared, and the equipment got broken.

After the community-access phase, which was about three years ago when everybody was into social reform through video, the Council cut back the funds a little bit. We decided at the same time that the expenditure of energy was not going towards developing television as an art form and we thought we'd like to move more in that direction, since that's the title of the center and kind of the thing we work on ourselves. So right now we are an open studio, like the fourth "National Center for Television." We're an open studio for anyone in New York State. We have to restrict it that way, because we're funded by the State Council. We work on a reverse priority basis--in the other national centers if you're an unknown artist you come at the top of the list, and if you're a known artist you can go work at WNET-TV, in New York. So we kind of specialize in local people. Our State Council report usually consists of about half a dozen pages at the end of the program indicating the number of people who have come in and worked with our equipment. Sometimes it's up into the hundreds of people--a lot of local people and people from around the state, about half and half, I would say. We kind of restrict the use of the machine with our resident people to allow other people to come in.

Q: Who are some of the people working at the Center?

WW: There are three full-time staff members: Ralph Hocking, the director, who sometimes takes his salary, more often doesn't; Sherry Miller, who's the coordinator and does take a salary and looks after administration and all that stuff; and Davey Jones, the technician, who also takes a salary. And we've added a few people: I'm there as a relatively full-time person now, funded under this separate project as artist-in-residence, and we've taken on another technician, Don MacArthur, who's working on timebase correction, digital circuitry and something way out there in development.

Q: So if a small group or individual came to work there at the Center, they would first establish a date through Sherry. Are technicians and people there who could assist them?

WW: We don't really provide assistance, since Davey's busy most of the time repairing equipment. Most of the people, if they don't have any prior experience, are given a run-through on the machines. We try to work out an arrangement with them where they can come twice--maybe for a couple of days the first time, just to run through the equipment and get to know it. Again, this is another difference from WNET. We don't have engineers, like WNET, that are going to work for them. So our technician is not working for you when you're there; people are available to answer questions as they walk by, but you're on your own--which is upsetting to some people and not upsetting to others.

Q: Is your synthesizer basically homemade?

WW: Parts of it are homemade. The colorizer unit itself was designed by Shuya Abe. There are similar units at WGBH-TV in Boston and WNET-TV in New York. KQED-TV in San Francisco has one, and there are a couple of others floating around in California, and maybe a couple of others floating around on the East Coast, too. Shuya has never made it a commercial product or anything like that. He manufactures them on the side. So that's not really a homebuilt unit. The rest of it is like the "wobulator."

Q: How did you become involved with synthesizer imagery yourself?

WW: I left Toronto after becoming disillusioned with architecture. I was also into computer design at the time--like design methods using computers. I met some people who were doing computer graphics in film before the bottom fell out of that, and I thought I would like to do computer graphics in film. So I went to New York and I hunted around and I came across a firm called Computer Image, who made television commercials. I went to work for them as a computer animator. Now, they don't really have a computer--they worked directly in videotape--so that's how I landed up in videotape.

We were located a block and a half away from WNET lab and I started making tapes and showing them, and people found out about Dolphin (which is the name of the production facility there). Eventually Emshwiller landed up there and did his Thermo-genesis and Scape-mates things during weekends. I was his animator and that kind of got it all together. One day Russell Conner, from the Council, came to me and said, "Look, you're one of the few synthesizer operators around. Go to Binghamton." So I went to Binghamton.

Q: When was that?

WW: That would be three years ago. I ended up in Binghamton with some funding from the Council and an impermanent position--sat around and drank Scotch with Ralph for about six months and discussed television and art. I fought with the machine for about a year or two before I got it together to make tapes and do workshops and things like that. So from there I got out of making tapes and more into the live stuff and into the development of the systems along with our technicians. I'm tending toward using live stuff as an instrument.

Q: Right now I think time base correctors are up around the $10,000 range.

WW: Well, there are some that are cheaper--$1600.

Q: But a $400 time-base corrector would be a big improvement.

WW: Yeah. That's one of the aims of the Center next year: to try to develop the technology, as a means of providing access to the media. Trying in terms of that, and in terms of extending the image process in black and white and in color--and also in terms of the synthesizers. We're about where Moog was eight years ago. David just completed work on a unit that he and I have been collaborating on for about a year and a half now. It's the first voltage-control video synthesizer--which is exactly where Moog was eight years ago, when they took electronic music from the Princeton studio and started to produce voltage-control modules so that you could set up control functions, memories and things like that. That's where we're at right now.

Q: Do you foresee the day when artists working in video will be able to reach mass audiences, perhaps through cable systems or something like that? Or do you think that would even be desirable?

WW: I don't know. I suppose some artists would love to reach the larger audience, in terms of a power trip. I'm not too sure of the audience. It might go the same way as the difference between commercial film and experimental film. Maybe the video artist will work to a relatively limited audience. Still, I don't see the major commercial networks running half-hour abstract synthesizer tapes. Certainly commercial AM radio is not running explorations in electronic music right now. So it would probably be a limited audience. One of the problems is that cable would probably be a good place for people to work, but the cable systems seem to be in a continuous process of redefinition--a little like university curriculums. We used to be on cable every Thursday night for an hour in Binghamton, until the cable station ran out of budget and discontinued all local production. Now they get everything out of a can. But people like Woodstock Video Company are the cable, in a sense, a couple of nights a week.

Q: When you were doing your weekly cablecasts, what kind of feedback from the community did you get?

WW: None.

Q: Do you see that more as a response to your particular programming or a response to the cablecasting?

WW: It was kind of a community response to cable, I think. People only saw it when they flipped by it on their dial. Most of the time they only ran time, temperature, and advertisements for Phil's Chicken House.

Q: Top Value Television (TVTV) has been very successful in getting some network exposure for 1/2-in. programming. For example, the 1972 Democratic Convention, "Lord of the Universe" and, more recently, "Gerald Ford's America." Do you see that as an indication of things to come?

WW: Not really. Actually, it's an indication of things to come, but it's more a development of commercial broadcast than the experimental field. If you take Top Value's product, and look at it very objectively, it looks like documentary film. They've done very little to extend the definition of documentary. The only thing they've done is use subject matter which has an appeal in its immediacy. It's almost like trendy documentary, in a way. It's not the standard CBS documentary, but the only real difference is its subject matter. When you look at the visual side, I don't think TVTV has done anything. In fact, they may represent a step backwards in documentary, because I think there were film documentaries which were much stronger than theirs. But they were extending the subject range with a little sense of humor, which the normal news documentaries don't have.

Also they've been pushing the technology a little bit, with the color portapack. I think they are being accepted because they are working in it right now, but they are being accepted because the major networks are moving in that direction anyway. CBS and NBC both have announced internally that 16mm. film is out--they're going to the color portable equipment, whether it's 1/2-in or 1/4-in. The time lag is due to union problems--they are changing over the whole system. That's my assessment of TVTV.

Q: Could you give a brief description of what the synthesizer is and does?

WW: Well, basically, it was developed for one reason, and that was to get into color--to be able to do color inexpensively. That meant that we had to forget the idea of color cameras--they were too expensive, certainly, when this was developed five or six years ago. So it had to start with black and white images generated from inexpensive cameras like portapack cameras or 1/2-in. type studio cameras. Then color is added to the black and white signals. This is done electronically. You can look at the synthesizer as a black box--black and white goes in one side and out comes the color. The way this unit is designed, there are seven channels running simultaneously. It works a little like an audio mixer: there are seven black and white signals coming in and each of them goes into a little black box where it emerges with color.
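Wright's black-box description can be made concrete with a small modern sketch. This is an illustrative model only, not the Paik-Abe circuitry; all function names and values here are invented. Each monochrome channel tints its grayscale signal with a fixed hue, and the tinted channels are mixed additively, like faders on an audio mixer.

```python
# Illustrative sketch (not the actual Paik-Abe circuit): model the colorizer
# as a "black box" that takes several monochrome signals in and emits one
# color signal out. Each channel tints its grayscale input with a fixed hue;
# the tinted channels are then mixed additively, like an audio mixer.

def tint(gray, rgb_hue):
    """Scale a fixed hue by one channel's grayscale level (0.0-1.0)."""
    return tuple(gray * c for c in rgb_hue)

def colorize(channels, hues):
    """Mix tinted channels into one RGB value, clipping to the 0-1 range."""
    mixed = [0.0, 0.0, 0.0]
    for gray, hue in zip(channels, hues):
        for i, c in enumerate(tint(gray, hue)):
            mixed[i] = min(1.0, mixed[i] + c)
    return tuple(mixed)

# Seven monochrome inputs, each assigned one fixed color. As Wright notes,
# the hues on the old machine can only be shifted all at once, never per
# channel -- here that corresponds to replacing the whole hues list.
hues = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0),
        (0, 1, 1), (1, 0, 1), (1, 1, 1)]
levels = [0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.0]
print(colorize(levels, hues))   # → (0.75, 0.25, 0.0)
```

The point of the toy model is the interface, not the electronics: black and white goes in one side, color comes out the other.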

Q: Is it possible to achieve a fair degree of sensitivity in color choice with these?

WW: One of the problems of this machine, because it's old, is that it doesn't allow you the kind of control that you ultimately want over color. Those seven colors are fixed. You can shift them, but you shift them all at once. You do not have control over the color of an individual image. So they all mush together and all the colors shift together. You can't say, "I like that purple and I like that red, but I don't like that yellow. I'd like that yellow to be orange." That's the kind of control that you would ultimately want. The colorizer we're working on now is designed around that principle. Four separate channels; four separate colors--it's actually got four encoders. The problem was financial. To do that would have required seven encoders on this machine. That was according to Nam June Paik. That's not true, but according to Nam June it would have required seven encoders, and we only had one.

There are two major kinds of synthesizers, I guess, running at the moment. The Paik-Abe synthesizer operates basically like the one I just described. I guess you would call it a phase shift colorizer. The color is controlled by varying the phase which goes into the color encoder--the phase of the color subcarrier, if you know what that means. It combines that with Nam June's Wobulator, which is basically a machine which distorts the image on a TV set through the use of magnets and oscillators--the image can be "wobbled." With the multiple camera inputs, plus the color, it gives you a fair range of possibilities to work on. This uses real images and it uses the phase shift colorizer.

At the other end of the spectrum would be Stephen Beck's unit, at KQED, which is a direct video synthesizer and doesn't use any cameras at all. All of his images are generated electronically. Basically, they derive from being vertical and horizontal lines. These can be expanded through keyers and oscillators, and generate geometric shapes and grids of shapes: diamonds, circles, triangles. These can then be moved around on the screen and multiplied and then combined with external cameras and keyers.

You can mix real images into this, but that's kind of like an adjunct, in terms of production, to the synthesizer itself. It's what I would say would be a direct type of video synthesis. The signal is completely synthesized within the box. It doesn't use a camera or anything like that. Nothing goes in except control signals--oscillators and things like that--and out the end comes a picture. With ours, there's a picture coming in the front and a picture coming out the end. Those are the two variations.
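Beck's idea of deriving geometric shapes from nothing but vertical and horizontal signals can be suggested with a toy sketch. This is a modern analogy with invented names, not Beck's hardware: thresholding a signal built from horizontal and vertical distances yields a diamond, one of the shapes mentioned above.

```python
# Illustrative sketch of direct synthesis in Beck's spirit: no camera at all,
# only electronically generated signals. Combining a horizontal and a
# vertical position signal and thresholding the result (as a keyer would)
# produces a geometric shape -- here, a diamond.

def diamond_frame(size, radius):
    """Render one frame as text rows; '#' marks pixels inside the diamond."""
    rows = []
    for y in range(size):
        row = ""
        for x in range(size):
            # Summing horizontal and vertical distance from center gives a
            # diamond once thresholded against the radius.
            inside = abs(x - size // 2) + abs(y - size // 2) <= radius
            row += "#" if inside else "."
        rows.append(row)
    return rows

for row in diamond_frame(7, 2):
    print(row)
```

Swapping the distance function or the threshold would give the grids, circles, and triangles the interview describes; moving the center term each frame would animate the shape across the screen.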

The Rutt/Etra video synthesis system is like an expanded "wobulator," or you could look at it that way. It's really a television set on which the image can be distorted, with a great degree of control, by using various sound signals. You don't hear the sound, but what are basically function generators or oscillators control the distortion of the image and move it around on the screen. So essentially, it's a tool for video animation, and that represents another application of video synthesizers. The Rutt/Etra is sort of an inexpensive version of the Computer Image "Scan-o-Mate," and is essentially, again, a video animation unit really meant for moving copy around on the screen for commercial use.
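The scan-deflection principle behind the wobulator and the Rutt/Etra can be modeled in a few lines. This is a hypothetical sketch, not the actual circuit: an oscillator that is never heard as sound adds a sinusoidal offset to each scan line's vertical position, warping the raster with controllable amplitude and frequency.

```python
import math

# Illustrative sketch of scan deflection: a function generator (here a sine
# oscillator) displaces each scan line's vertical position, distorting the
# displayed image rather than producing audible sound.

def deflect_scanlines(num_lines, amplitude, frequency):
    """Return the displaced vertical position of each scan line."""
    positions = []
    for line in range(num_lines):
        t = line / num_lines                       # position down the frame
        offset = amplitude * math.sin(2 * math.pi * frequency * t)
        positions.append(line + offset)
    return positions

# A low-frequency sine gently ripples a 10-line raster.
for y in deflect_scanlines(10, amplitude=2.0, frequency=1.0):
    print(round(y, 2))
```

Changing `amplitude` and `frequency` over time is the "animation" the interview refers to: the image content stays fixed while its geometry moves.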

Q: A number of people have been trying to integrate computer systems with video systems. Could you characterize some of the attempts in that area, and what they may evolve into?

WW: I suppose it's the same thing as computer systems and electronic music. It seems a very obvious thing to do, because the kind of people who are involved with synthesizers are the kind of people who are involved with that kind of technology to begin with. There are a lot of programmers who are into music and synthesis. One of the most creative fields in the late '60s and '70s was computer programming. It was a real interesting thing to get into at the time. Lots of far out applications. You didn't really have to do missile programming. You could do things that were interesting. There were a lot of people who were related to both fields, so it seemed obvious to tie the two things together.

Also, electronically, it seemed obvious to tie the two together. The sequencer which is used in electronic music could just as easily be replaced by an electronic computer, because all it does is produce control voltages that are, in a sense, memorized. They used to be memorized in an analog way, but they could just as easily be memorized in a computer. In fact, you could develop whole sequences. Moog, for instance, had been working on that for a couple of years. In terms of video systems, I guess it seems like a good idea, too. One of the problems is that you've got to get a voltage-control video system.
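The sequencer point is easy to make concrete. In this illustrative sketch (the class and values are invented), a sequencer is nothing more than a memorized list of control voltages stepped through in order — exactly the job a computer memory can absorb.

```python
# Illustrative sketch: a sequencer reduced to its essence -- a memorized list
# of control voltages, emitted one step at a time. Replacing the analog
# memory with this digital one changes nothing about the interface.

class Sequencer:
    def __init__(self, voltages):
        self.voltages = list(voltages)   # the memorized control voltages
        self.step = 0

    def next(self):
        """Emit the next control voltage, wrapping around at the end."""
        v = self.voltages[self.step]
        self.step = (self.step + 1) % len(self.voltages)
        return v

seq = Sequencer([0.0, 1.5, 3.0, 1.5])
print([seq.next() for _ in range(6)])   # → [0.0, 1.5, 3.0, 1.5, 0.0, 1.5]
```

Feeding these stepped voltages to a voltage-control module — audio or, as Wright hopes, video — is what makes whole sequences repeatable.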

Also, the problem again is the same with electronic music. There's somebody sitting in his control studio and before him he virtually has an orchestra. But he only has ten fingers, so the only possible way he can get the orchestra effect is to have a 64-track tape recorder sitting beside him, and a couple of months to get it all together, because he really has to tape each part perfectly. The same holds true with video systems: you have all the functions of that box sitting in front of you, but you can't control all of it at once. The computer represents a way to go, at least in being able to accompany yourself. It has a memory and, therefore, it can play back. You can play something into it and have it play back in real time. You can play against that for another track, maybe memorize that and play that back, then play the third... It's like the GROOVE system at Bell Labs, where a computer, in fact, sits there like a monitor to the composer. He plays his little tune, then he sits there and listens to it back again. Then he can go in there and can edit.

That's another good reason for the computer: you can change an individual note in terms of any one of its characteristics--pitch, timbre, reverb, anything--and then have it play back again. The computer takes over this function. That's nice. That means you don't have to go through your 64-track tape and cut out the one little piece that's wrong and put in another little piece that's right. I guess people working in video see the same kind of possibilities.
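The editing advantage described above follows directly: once notes live in memory rather than on tape, changing one characteristic of one note is an in-place edit followed by a replay, with no splicing. A minimal sketch, with invented field names:

```python
# Illustrative sketch: notes held in memory instead of on a 64-track tape.
# Editing one characteristic of one note is a simple in-place change, and
# the whole sequence can then be played back again.

score = [
    {"pitch": 440.0, "duration": 0.5},
    {"pitch": 494.0, "duration": 0.5},
    {"pitch": 523.0, "duration": 1.0},
]

# Retune the second note without touching the rest of the sequence --
# no cutting tape, no re-recording the other tracks.
score[1]["pitch"] = 466.0

print([note["pitch"] for note in score])   # → [440.0, 466.0, 523.0]
```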

Q: When you're producing on the machine, does it become like a performance? Are you responding to your own intuitions?

WW: Well, when I started with the computer image machine, one of the big problems was control. There was a fixed system. It became very obvious to me that one of the ways to proceed was to develop a notation. Then I didn't need to sit there and mindlessly knob twirl for hours in order to develop a tape. I could sit there and mindlessly knob twirl in the sense of developing a pattern, and then be able to make a notation which would immediately enable me to recall that pattern.

Once I had a notation system, it again seemed relatively obvious to me that one could sit down and write something out, as long as you could picture it in your head--a score--then go to the machine and do it. I did this several times and at least it appeared to be possible. With this machine I haven't been doing that, because it's an open ended machine. For example, yesterday Laddy brought in all this equipment, piled it on the machine, and spent half his day working like an octopus to connect it all up. This machine is open ended. It's virtually impossible for me to develop a notation system for it--I've tried. If I work within a given structure, I can notate camera angles, levels of pots, color assignments, patching. From there it seemed I could get into compositions. I could get into actually performing in real time with it.
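Wright's notation idea — write down a pattern of settings so it can be recalled instead of rediscovered by knob-twirling — reduces, in modern terms, to a lookup table. The setting names below are invented for illustration:

```python
# Illustrative sketch of a notation system for machine patterns: once a
# pleasing pattern of settings is found, record it under a name so it can
# be recalled exactly, like a measure in a score.

notebook = {}

def notate(name, settings):
    """Record a pattern of machine settings under a name."""
    notebook[name] = dict(settings)

def recall(name):
    """Restore a previously notated pattern."""
    return notebook[name]

notate("spiral", {"camera_angle": 30, "pot_3": 0.7, "color": "magenta"})
print(recall("spiral")["pot_3"])   # → 0.7
```

A written score then becomes a sequence of such names, which is what makes real-time performance from notation possible.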

That's one of the differences, you know, between television and film--I guess the major difference. A lot of things are very similar--you're still working with a motion picture but video works in real time. You don't have to wait for the stuff to come back from the lab, you don't have to cut it up and so on. It's all done in real time in television. In fact, it works best if it's done in real time. So the obvious extension of that is to move it into a real time performance, and there have been very few of those. I mean light shows sort of faded out in a big kafuffle. I don't think they ever did much to advance the medium. There were several light shows that were very good and did show potential. Some of them are still going on. They have gotten out of the commercial market that light shows were into. You know, even slide productions are something that could work out, and certainly films can work out. They can't be live performances, but they can certainly work in an area which is abstract like this, and is meaningful in terms of a visual experience and represent the composition.

Q: It sounds, from the way you describe the functions of the different machines, as though each machine would tend to enforce a certain kind of imagery, and that if you wanted to develop an imagery that was different from what is available, you'd have to invent a new machine.

WW: That seems to be one of the things that's going on right now. Everybody has been working on derivations up to this point. Three of us, at least, are trying to get together, in terms of saying what we've got, what it is and what it does and, "Is there anything we left out?" If there is, we develop a machine for that and then we've got all these machines that can be hooked together so we can at least get some feeling for the whole range. We have a feeling now that we've covered at least a broad enough aspect of it to get it all together. For example, Woody Vasulka (of Media Study Center, Buffalo, N.Y.) is working in one particular area. Their big thing has been developing programmable keyers and switchers. We've been working in another area which has been strongly oriented towards color and abstract. Eventually we should be able to get this all together. Now that may mean it's not going to be one machine that actually takes part in this kind of thing. You may not actually do a performance on one instrument; the performance may include several inputs.

In fact, it might be much stronger if it could be done that way. One of the problems in video is that you're limited by your projection. There is no means of projection. Now if I want to sit there and present something, I'm very restricted in the kind of situation I could do it in. The best situation would be in a living room, with a couple of couches and television sets so people could watch and relax and try to relate to that little screen, which perceptually is a frustration anyway because it doesn't cover enough of your field of vision to actually keep your brain happy for a very long period of time. Your eyes tend to wander even if you don't want them to wander. We really don't have anything other than, say, stacking up 12 or 15 television sets...25-inchers with a great big matrix, something like that.

Q: It sounds as though the actual tape of a presentation is perhaps one of the least important aspects of the whole process.

WW: Maybe it's not the least important. Maybe it's the one that's going to require the most work... Because there are all sorts of possibilities for presentation and television: cable TV, commercial TV, tape distribution. I don't think it's anywhere near what it's going to be. The whole cassette thing is a dead issue already, in terms of technology. I think they're continuing the production of those units basically to maintain the market that they've got now, and are holding back, in a sense, on the next means of distribution. I guess it was Perry that was saying he's been doing some research into the video disc process. Right now they're very close to the same thing as the audio disc. It would cost you about seven dollars for the video disc; the player for the video disc would be produced in mass for about $50 to $400, same as a regular record player. The master machine, surprisingly enough, is only around $5,000. So the whole process is just on the verge of being introduced. When all of this hits, where do the cassettes go?

Q: Do you work toward a specific idea when you make a tape?

WW: A lot of my tapes aren't meant as finished pieces; most of them are meant as exploration pieces, where I take the equipment and I narrow it down to two or three things that I'm going to control. I guess a year ago we really got down to the idea of being able to record and get back to a pattern, so at least we could keep a pleasing pattern that we found amongst the thousands. So the tapes then really got into taking single patterns and developing them. Now I have four or five patterns which are slightly related. The problem is transition, which is the next big problem in any kind of compositional system--how do I make the transition from this pattern on this machine? The tapes really represent my own kind of documentation of trying to do that.