Some Personal Examples of Human-Computer Interaction: 1967-1973

John C. Thomas

It was spring, 1967, and although I had never actually used a computer, in a senior paper I wrote at Case Western Reserve I argued that trying to emulate human thinking by programming a computer was the method most likely to result in real insight. I argued this by analogy to someone trying to understand a television. Using other common psychological science techniques (for example, doing things to a television and then grinding it up and putting it in a centrifuge, or making careful observations of what happened on the screen when you twiddled various dials) would never, I argued, lead to any real understanding of how a television worked. A better approach would be to try to build a television.

Soon, I discovered that I was not (as I had supposed) the originator of this idea, and when I applied for and got accepted to graduate school in psychology at Michigan, I went to work as a research associate in the fall of 1967 for Walter Reitman, a former Herb Simon student. His class, Models of Thinking, included an extra session learning SNOBOL, a symbol-processing programming language. Our interaction with the computer was via a time-sharing (COMSHARE) terminal that sat in a little closet at the Mental Health Research Institute. This terminal communicated with the mainframe at 35 characters per second. I wrote my very first "simple" program. I do not recall the exact syntax, but basically, I set a variable, let's say X = 3, incremented it by 5 in a loop, and when it equaled 50, it was supposed to stop and print out "End." As I sat there, I began to think, "Gee, computers aren't so fast after all. I can add faster than that." Then, I had this weird little feeling: "Wait. Is it so literal that it won't stop when it goes higher than 50 just because I said to stop when it equaled 50?" Just then, the terminal motor went off and the light went out. My next thought was, "Oh, my God! I broke it!" I recalled then that we had an account and decided that it had only reached the dollar limit and terminated our session. Oops. Of course, now it may seem naïve and silly to think I could have actually broken the computer with buggy software, but I have never forgotten that moment of panic when I really did imagine that I had ruined my advisor's computer.
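Looking back, the logic of that first bug is easy to reconstruct. Here is a minimal sketch in C rather than the SNOBOL of the original (whose exact syntax is not recalled), with illustrative names and an added guard so that the sketch actually halts:

    #include <stdio.h>

    int main(void)
    {
        int x = 3;

        /* The loop was meant to stop when x reached 50, but starting at 3 and
           adding 5 gives 3, 8, 13, ..., 48, 53, ...: it never equals 50 exactly.
           The x < 1000 guard is added here only so this sketch terminates;
           the original program had no such escape. */
        while (x != 50 && x < 1000)
            x += 5;

        printf("x = %d: the equality test never fired; a test for x >= 50 would have\n", x);
        return 0;
    }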
Walter Reitman went back to CMU as a visiting professor the following year (hopefully, not because I broke his budget) and I worked with John Gyr, a former Piaget student. We were doing experiments and simulations involving perceptual development and perceptual adaptation. We replicated earlier experiments involving adaptation to prisms, for instance, and how it was impacted by active exploration as opposed to passive observation. We also wanted to test the "limits" of human perceptual adaptation by using a computer rather than optics, because on the computer we could define arbitrary mappings. In the limit, we even planned to have a completely random mapping. The folks in our small research group all predicted that no one would be able to adapt to that. We never got to test our hypothesis, however. Although by this time I could write simple programs that avoided infinite loops, John hired a professional programmer to build a system for us to easily set up various adaptation schemes, run subjects, collect data, and perform statistical analyses. Every week for two months, we met with this programmer, and every week he explained how he had been planning to show us a demo of the system, but it had occurred to him that with a small change he could make the system even more general purpose and more powerful, and that next week he would show us an amazing demo. The requirements grew without bounds, not necessarily from our input but more from his brilliant insights. One week, we all came to see the demo --- well, all of us, that is, except the programmer. He had apparently received and accepted an offer for a much better job in another state. So far as I know, he had never actually written a line of code for us. Experiences like this probably led Missourians to the motto: "Show me."

One of the projects I was working on at this time was writing a program for a PDP-9 that took as input two two-dimensional views of various simple objects and inferred the three-dimensional relationships. After working on my program all morning, I issued a command to save my work. Instead of a confirmation, I got some strange and unreadable error message, something like: IOPS24! I had no idea what IOPS24 meant, but the save command had always worked before. Perhaps I had hit the wrong key. I tried it again. That turned out to be a big mistake. John Seely Brown happened to be in the same machine room and explained that the error message meant that I had forgotten to push the "write enable" button for the mag tape. I was supposed to have pressed that button and then pressed Control X (or some such thing) and only then re-issued the save command. Any other sequence would (and did) result in losing the entire contents of memory. Great.

I tried not to take it personally when John Gyr went back to Switzerland the next year as a visiting professor. Anyway, I began to work with Jim Greeno on human problem solving and wrote programs in FORTRAN II for our IBM 1400. The programs ran subjects and collected their data. Another program analyzed the data. These programs were all written with pencil and paper first and then keypunched onto 80-column cards. Another one of Jim's grad students had written some special additions to FORTRAN to control the terminals (these were IBM 2260s: 20 lines of 80 characters). There were five terminals for subjects (numbered, sanely enough, 1-5) and one for the experimenter (numbered 6). By convention, if you used the parameter "7", this did the same thing for all five of the subjects' terminals. At the beginning of the experiment, I "initialized" the terminals and then the data items for all five subjects. So, for example, I "turned on" all five terminals by using the parameter "7" and wrote "0" for the clock count of all five subjects by using the parameter "7" for my previously defined five-element array, one element for each of the five subjects. Whenever I tried the program, it worked … for a while … and then aborted in some novel manner. I kept putting more and more debugging statements in my code, but the program just wouldn't work, although it always blew up in some new way. But wait. Aren't computers supposed to be deterministic? At one point, I awoke in the middle of the night with complete assurance that I had "found" the bug. The next morning I tried my idea. Nope. I was wrong, despite my subjective feeling of certainty that I had it solved. Eventually, I did figure it out. I think this is a nice example of how the user (me) can be led into following the same conventions across "invisible" boundaries (in this case, between FORTRAN itself and some assembly language extensions that someone had written). Every time I added more debugging statements to my code, of course, the extra zeroes got put into some new place in memory, thus causing a different kind of mischief.
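A rough modern rendering of that convention mismatch, in C with illustrative names (the original was FORTRAN II plus hand-coded assembly extensions, so the details below are only an approximation of the idea):

    #include <stdio.h>

    #define NSUBJECTS 5

    int main(void)
    {
        int clock_count[NSUBJECTS] = {0};  /* one element per subject, as in the story */
        int param = 7;                     /* the terminal extensions' "all terminals" code */

        /* The assembly-language terminal routines treated 7 as "broadcast to all
           five terminals", but ordinary array code has no such convention:
           "element 7" of a five-element array is simply memory that belongs to
           something else, and writing there corrupts whatever happens to sit
           next to the array -- a different victim each time the program changes. */
        if (param >= 1 && param <= NSUBJECTS) {
            clock_count[param - 1] = 0;    /* a legitimate per-subject write */
        } else {
            printf("index %d is outside the array; the original code wrote there anyway\n",
                   param);
        }
        return 0;
    }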
When I wrote my analysis programs, I used my own behavior as a model for the programs. Every time a new keystroke timestamp was found in an array, the program wrote that value to disk for permanent storage (the computer's memory was only 16K, later increased to 32K), so there was a lot of writing to the disk. A large lot. And one day, as I listened to all this quite audible writing to the disk, "Zih-da-zih-zih; zih-da-zih-zih; zih-da-zih-zih" (repeated over and over), it suddenly occurred to me that from the computer's point of view (if I may use that phrase), it made a lot more sense to write all the data to disk at the same time. A lot has changed since 1970, but it is still the case that amateur (or "end-user") programmers, even if they write algorithms that work, may or may not have any insight into the efficiency of what they are doing and how issues of efficiency might or might not scale when more users are added to the picture. And it is still the case that, however cheap computing has become, resources are still not infinite. This story might also indicate that even primitive feedback about performance (in this case, the sound of the disk) is better than none at all.
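A minimal sketch of the batching idea, in C with illustrative names and buffer size rather than anything taken from the original analysis programs:

    #include <stdio.h>

    #define BATCH 64   /* illustrative buffer size, not from the original program */

    static long buffer[BATCH];
    static int  nbuf = 0;

    /* Write whatever has accumulated as one block: one disk operation per
       BATCH timestamps instead of one operation per keystroke. */
    static void flush_timestamps(FILE *f)
    {
        if (nbuf > 0) {
            fwrite(buffer, sizeof buffer[0], (size_t)nbuf, f);
            nbuf = 0;
        }
    }

    static void record_timestamp(FILE *f, long t)
    {
        buffer[nbuf++] = t;
        if (nbuf == BATCH)
            flush_timestamps(f);
    }

    int main(void)
    {
        FILE *f = fopen("timestamps.dat", "wb");
        if (f == NULL)
            return 1;
        for (long t = 0; t < 1000; t++)   /* simulated keystroke times */
            record_timestamp(f, t);
        flush_timestamps(f);              /* don't forget the final partial batch */
        fclose(f);
        return 0;
    }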
After I finished my Ph.D. at Michigan, I went to work in 1971 at Harvard Medical School/Massachusetts General Hospital, working for Nancy Waugh and James Fozard on an NIMH grant to study the psychology of aging. In addition to actually designing and running experiments, I wrote programs for the PDP-8 to record the subjects' answers and times and then analyze them. I took three weeks of courses in Maynard, Massachusetts. The first week focused on PDP-8 hardware. We had oscilloscopes and probes and traced the clock cycles through the machine. We could actually see how different hardware instructions operated. I wrote programs in an interpreter called FOCAL, whose syntax was much like FORTRAN. But I also understood how the FOCAL interpreter worked. I understood how assembly instructions were turned into hardware instructions and how the hardware functioned. When something odd happens on my Thinkpad today, I have no hope of knowing what is responsible. The cycle time for this PDP-8 was 70 msec. and the main memory was 4K. Each bit in memory was represented by a tiny black circular magnet; they were tiny, but they were distinctly visible to the human eye! My programs were stored on paper tape. It took so long to print out a paper tape that when I made small errors, I would simply "tape over" some bits and punch a few new holes by hand.

Before I was hired onto the project, Nancy and Jim had paid a professional programmer to make things "easier" by "enhancing" FOCAL with a few assembly language routines that ran the experimental hardware. After writing these routines, but before getting a chance to debug or document them, he left for a two-month vacation in Afghanistan. In 1971, recall, there were very few cell phones. In particular, one of his programs was named "FSET" and was supposed to set a software clock. I quickly renamed it "FDESTROY" because every time I tried to run it, it seemed to destroy some part of my program. I would load my program, check on the console lights that the program was what I thought it was, and run my program. Fine. But at some point afterwards, the program would stop working. Worse, when I checked the contents of memory, part of my program had been changed! That FSET routine had a single bit wrong. But, as is typical and troubling to the amateur programmer, a very small change in the program caused a very large change in behavior. Our friend who was vacationing in Afghanistan had accidentally set the "indirect" bit. So, instead of putting the hardware clock value of the first keystroke into the designated storage space for the software clock for later comparison, it used the hardware clock time value as the address in which to store the value. Since, being human, each time I tried to test the program my first keystroke latency was slightly different, so too was the address. Hence, the resulting program behavior was always somewhat different.
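A hypothetical rendering of that one-bit difference, in C rather than the original PDP-8 assembly, with illustrative names and values (the modulo guard is added only to keep the sketch within bounds):

    #include <stdio.h>

    #define CORE_WORDS 4096           /* the PDP-8 in this story had 4K of core */

    static int core[CORE_WORDS];
    static int software_clock;        /* the word FSET was supposed to set */

    /* Intended behaviour: a direct store of the hardware clock reading. */
    static void fset_direct(int hardware_clock)
    {
        software_clock = hardware_clock;
    }

    /* With the stray "indirect" bit set, the clock reading is treated as an
       address: whichever word of core the first-keystroke latency happens to
       name gets overwritten, and that latency differs on every run. */
    static void fset_indirect(int hardware_clock)
    {
        core[hardware_clock % CORE_WORDS] = hardware_clock;
    }

    int main(void)
    {
        int latency = 1234;           /* first-keystroke latency; varies run to run */

        fset_direct(latency);
        fset_indirect(latency);
        printf("software_clock = %d; core word %d was clobbered as well\n",
               software_clock, latency % CORE_WORDS);
        return 0;
    }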
We also see here that when an amateur programmer uses someone else's "helpful" code, he or she quickly comes to a kind of "cliff" when that code does not work as intended or expected. Luckily, in this case at least, because of the three weeks' training, I did have the tools I needed to track down and fix this bug. I believe that today, by the time I learned to understand all the layers between the word-processing program I am using and the underlying hardware, all of them would be obsolete. I hypothesize that we (or at least I!) have reached some sort of historic technological "concept horizon" or "vanishing point." It is now impossible to learn everything about a complex system as fast as the system itself is changing. If that is so, what are the implications for human-computer interaction?