TRACING THE DYNABOOK: A STUDY OF TECHNOCULTURAL TRANSFORMATIONS

by

John W. Maxwell

MPub, Simon Fraser University, 1997
B.A. (Honours), University of British Columbia, 1988

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Curriculum and Instruction)

UNIVERSITY OF BRITISH COLUMBIA

November, 2006

© John W. Maxwell, 2006

Abstract

The origins of the personal computer are found in an educational vision. Desktop computing and multimedia were not first conceived as tools for office workers or media professionals—they were prototyped as "personal dynamic media" for children. Alan Kay, then at Xerox' Palo Alto Research Center, saw in the emerging digital world the possibility of a communications revolution and argued that this revolution should be in the hands of children. Focusing on the development of the "Dynabook," Kay's research group established a wide-ranging conception of personal and educational computing, based on the ideal of a new systems literacy, of which computing is an integral part.

Kay's research led to two dominant computing paradigms: the graphical user interface for personal computers, and object-oriented programming. By contrast, Kay's educational vision has been largely forgotten, overwhelmed by the sheer volume of discourse on e-learning and the Web. However, an historical analysis of Kay's educational project and its many contributions reveals a conception of educational computing that is in many ways more compelling than anything we have today, as it is based on a solid foundation of educational theory, one that substantially anticipates and addresses some of the biggest civil/political issues of our time, those of the openness and ownership of cultural expression. The Dynabook is a candidate for what 21st-century literacy might look like in a liberal, individualist, decentralized, and democratic key.

This dissertation is a historical treatment of the Dynabook vision and its implementations in changing contexts over 35 years. It is an attempt to trace the development of a technocultural artifact: the Dynabook, itself partly an idealized vision and partly a series of actual technologies. It is thus a work of cultural history. But it is more than simply a looking back; the effective-history of the Dynabook, its various incarnations, and its continuing re-emergence and re-articulation mean that the relevance of this story is an ongoing question which needs to be recognized and addressed by educators, technologists, and learners today. This dissertation represents an introduction to this case.

Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgements
Dedication
Chapter 1: Introduction
The Story So Far—A Conventional Mythology
A Critique of the Popular Mythology
Impoverished perspectives
The Division of Labour in Modern Technology
How Education is Complicit
Alan Kay and the Dynabook Vision
From ARPA to Xerox PARC
Toward the Dynabook
Elements of the Dynabook Vision
The fate of the Dynabook
What Follows
Chapter 2: Positions and Approaches
Introducing Myselves
Roots
My Encounter(s) with Objects
Why This Study? Why This Approach?
Reflecting on my history
"Computer criticism"
Multiple perspectives, blurred genres
Methodology and the Problematic of Distance
Introducing genre theory
History as politics
The Dynabook as/in history
How Do We Know a Good Idea When We See One?
Personal computing/educational technology as a site of struggle
Chapter 3: Framing Technology
Technology as Media
McCullough's Framing of Media
Latour's Mediation: Articulations and Translations
Technology as Translation
Standardization and the Tower of Babel
The Mechanics of Text
On 'Abstraction'
Digital translations
Software
The Semiotics of Standardization
Simulation as Interpretation
The Ethics of Translation
Back to the Tower of Babel
Our responsibility to technology
Chapter 4: Alan Kay's Educational Vision
Computers, Children, and Powerful Ideas
"Late Binding" and Systems Design
Smalltalk—"A New Medium for Communications"
Objects and messages
The Design of Smalltalk
Late binding in Smalltalk
The Smalltalk environment
"Doing With Images Makes Symbols"
Ways of Knowing: Narrative, Argumentation, Systems Thinking
What is Literacy?
Vision: Necessary but not Sufficient
Chapter 5: Translating Smalltalk
Origins: Smalltalk at PARC in the Early Years
Educational limitations
Technological limitations
Smalltalk's Initial Transformation at Xerox PARC
A Personal Computer for Children of All Ages becomes Smalltalk-80
From educational research platform to software development tool
From "designers" to "end-users"
The Microcomputer Revolution of the Late 1970s
From a software research tradition to a "gadget" focus
From a research focus to a market focus
The Dynabook after Xerox PARC
The Vivarium Project
Vivarium research
HyperCard and the Fate of End-User Programming
From media environment to "Multimedia Applications"
From epistemological tools to "Logo-as-Latin"
Chapter 6: Personal Computing in the Age of the Web
What is a "Powerful Idea," Anyway?
The 1990s: The Arrival of the Web
From stand-alone PCs to information appliances
From Closed to Open Systems
The Web as an Educational Medium
From learning experiences to an economy of learning objects
The Dynabook Today: How Far Have We Come?
Vendorcentrism
"New Media" vs. "Cyberculture" in the 21st century
Lessons from the Open-Source Movement
Of Unix and other IT cultures
Authoring in the Age of the Web
Web authoring and computer literacy
Why Not Smalltalk? Why Not the Dynabook?
The Dynabook: existence and essence
Chapter 7: Squeak's Small but Mighty Roar
Squeak: A Renaissance Smalltalk
"Back to the Future"
Squeak as an Educational Platform
Etoys: Doing with Images makes Symbols
Squeak vs. Squeakland
Squeak in School
The Squeak Community and its Trajectories
The Blue Plane and the Pink Plane
Squeak Communities Today
Squeak in print?
Where is that Dynabook, Anyway?
Squeak and Croquet at OOPSLA'04
Squeak: Mouse that Roared?
Chapter 8: Drawing Things Together
Where we've Been
Dynabook: Artifact or Idea?
Cultural history is not Biography
Back from the Future
Who Cares About the Dynabook?
Kay's relevance to education
Education and Powerful Ideas
The Politics of Software Revisited
Bibliography
Appendix A: UBC Research Ethics Board Certificate of Approval

List of Figures

Figure 4.1: Jimmy and Beth with their Dynabooks
Figure 4.2: Cardboard mockup circa 1971-1972
Figure 4.3: Cartoon by Ted Kaehler
Figure 5.1: Kids in front of Alto computer
Figure 5.2: Original overlapping-window interfaces
Figure 5.3: Adele Goldberg's Joe Box in action
Figure 5.4: Marion's painting system
Figure 5.5: A Smalltalk "browser"
Figure 5.6: Playground environment, circa 1990
Figure 7.1: From "Programming Your Own Computer"
Figure 7.2: Playground II "Scriptor"
Figure 7.3: Etoys "Viewer" in Squeak 3.8
Figure 7.4: Etoys tile representation and equivalent Smalltalk code
Figure 7.5: The hallmark "drive a car" Etoy
Figure 7.6: Kay's schema from the 2004 Turing Lecture

Acknowledgements

A good many people contributed in a variety of ways to my being able to take this project on and to complete it. My sincere and lasting thanks go to them.

To Ricki Goldman for making my path clear; Mary Bryson for her insistence on critical vision, and for her unshakable support; Gaalen Erickson for kindness, wisdom, and unflagging interest.

To Prescott Klassen for the sense of history and the encouragement to think big; David Porter for giving me creative opportunity and encouragement for my explorations; Rowly Lorimer for the space to think, to work, and to engage; Martin L'Heureux for reliable intellectual foiling, and for sharing much of the path; Avi Bryant for being an exemplar.

To Ward Cunningham for Wikis, especially c2, and for the incredible repository of software wisdom collected there; Simon Michael for ZWiki, which served as my writing environment; Wikipedia for more breadth than I have any right to.

To Pavel Curtis and Amy Bruckman for objects; Matthias Müller-Prove for blazing a path through the literature.

To Kim Rose for the direct line to the sources and for unfailing patience; Ted Kaehler and Ann Marion for generosity of time and help far beyond what I asked for; BJ Allen-Conn, Bobby Blatt, and Dan Ingalls for their help in putting the pieces together; John Dougan for his help with the images.

To Alan Kay for a world to grow up in, and my Dad for seeing it.

To my wife, Kelly, for constant intellectual stimulation, for historiography, for patience, for superb copyediting, and for a home for me and my ideas; Norm for seeing what I was on about before I even did. To my Mom for the solidest of foundations. To James and Ayla for the future.

Dedication

For James, Ayla, Anja, Cara, Maia, Adrienne, Julien, Ben, Joelle, Bram, and Natalia Joy.

Chapter 1: Introduction

This is a story about educational computing—that is, computers in education. What does that mean, exactly? How we come to an answer to that question is a good deal of what the next 200 pages are about.
That is to say, the question of what "computers in education" means isn't a simple one. It is not the kind of question we can answer in a sentence or two and then get on to the business of plugging in cords and training users. Rather, the meaning of computers in education is something that is contested, and has been contested for about forty years already. Over that time, the answer to the question has not become particularly clearer; on the contrary, I will take pains to argue here that it has looked substantially clearer at points in the relatively distant past than it does now.

The story I am about to tell is that of Alan C. Kay and his work on a possible vision of personal and educational computing; a research program which began in the early 1970s at Xerox' research labs in Palo Alto, California and which has gone through a variety of institutional contexts since then, even continuing today. Alan Kay's work is romantically described in a vision he articulated some thirty-five years ago under the rubric of the Dynabook, which continues today to act as a sort of touchstone and reference point for the ongoing development and evolution of a particular rendering of what personal and educational computing might mean.

Kay's story isn't well known, compared, for instance, with the story of Steve Jobs and Steve Wozniak inventing the Apple computer in their garage in the late 1970s, or of Bill Gates' founding of the Microsoft Corporation in that same decade. But despite its relative obscurity, I will argue that Alan Kay's story is one of the root texts in the construction of personal and educational computing. In delving into this history, and in evaluating our contemporary aporias in the light of it, I will argue that the cultural trajectory of personal and educational computing can be made better sense of—and that opportunities for personal agency, critical understanding, and political action appear—in the light of such a historical study.

A starting point for this research is the constructedness of personal and educational computing. Now, the constructedness of the world is a popular topic in recent social and cultural theory, but what is often missed is the element of ongoing political situatedness and the active and generative space opened up by critical engagement with these constructions. The world is not given, but is in large part created by participants in particular social, cultural, historic, and practical contexts. Moreover, the constructedness of the world does not have to be something over/against regular folks. A good part of my agenda here is to show that the construction of personal and educational computing is not something done to users, learners, teachers, or even critics. Personal computing is not given; it has been constructed through particular historical contingencies, and, more important, it is continually and ongoingly constructed. What is a computer? What good is it? What is it for? How does it fit into our lives? It is important to remember that these questions remain open. The work of defining this ground is still going on; the game is still up for grabs. I hope to show how Alan Kay's work—beginning in the 1970s—was the first major, sustained entrée into this realm: that is, the work of opening up the possibility of personal agency in the struggle for the construction of meaning and effective action in personal and educational computing.
The considerable influence in this area wielded by large market-driven corporations like Apple Computer and Microsoft Corporation is altogether more recent. Furthermore, despite the apparent dominance of corporate market logic in defining the meaning and significance of personal and educational computing, I intend to show how attention to the history of this field can reveal opportunity for individual 'users'—however circumscribed their agency may appear in the face of corporate domination or the threatening chaos of the Internet.

The apparent disempowerment of individual learners, teachers, and other front-line 'users' in the face of a rapidly growing and complexifying world of computing and digital media is the target of this work. In my studies of educational computing, I am repeatedly faced with the challenge of making sense of a field which basically does not make sense—that is, it is without a guiding rationale or set of common principles which might guide action or even critique. Educational computing seems a multi-headed and often self-contradictory beast, almost wilfully ignorant of its own history, and as a result often at the mercy of whatever fashions or—in the post-9/11 world—terrors may carry the day. The result is that whether the current obsession is upgrading our software, updating our blogs, or fending off network-borne viruses, the extent of most users' understanding and feelings of control over what they are doing is, to say the least, compromised.

THE STORY SO FAR—A CONVENTIONAL MYTHOLOGY

You may ask yourself, how did we get here? How do we find ourselves in a world dominated by an often overwhelming technological infrastructure, in which fear and insecurity have become such driving forces? In order to answer this question, we can begin by examining the conventional history of personal computing, that which serves as the origin myth and working model of this world.

In the beginning—so the popular story goes—there were mainframes; computers were enormous, air-conditioned beasts tended by teams of white-coated priests (or, alternatively, by teams of post-WAC gals carrying reels of tape and patch cables). In these early days, the story goes, computers were put to large, institutionalized purposes: taxes, billing, artificial intelligence, and world domination: somehow these increasingly large and powerful machines would surely break their chains and devour humanity, or at least enslave us.

But the promethean leap apparently came in the late 1970s, when a ragtag army of hobbyists in southern California—working in pairs in garages—invented the personal computer out of spare parts and baling wire. These early computers were tiny and inexpensive, tended by greasy adolescents in dirty t-shirts. It wasn't long, however, before the smell of money mingled with the odor of solder and the whiff of burning components. A cadre of early computer entrepreneurs—Steve Jobs, Bill Gates, et al.—set up shop to battle the established computer industry (IBM), which peered down from its mainframes and wondered what to do now that the secrets were out.
This 'new world' of computers is romantically captured in a new genre of computer magazines—like BYTE—that appeared in the late '70s and early '80s: half-inch-thick glossy publications stuffed full of opinionated editorials, recipes for homebrew computer projects, and full-page colour advertisements for the new software "titles" which were appearing to tap this new market. Software appeared in three major categories: office productivity software like spreadsheets and word processors—the personal computer had terrible, unresolved issues with legitimacy and desperately desired to be accepted by real businesspeople; computer games—probably the most lucrative market of the three; and educational software—which often looked quite a bit like the games, for marketing reasons at least. Educational software tended to be drill-and-practice exercises, tarted up with as much colour and sound as the makers (and the existing hardware) would allow. And there was also Logo, an educational programming language developed for children and undoubtedly good for developing young minds, though it seemed no one was quite sure how. In any case, regardless of the intellectual depth (or lack thereof) of these works, it was enough to establish educational software as a persistent genre in the minds of the computer-buying public: one of the things that these machines were "good for," a reason for buying—or at least for justifying the purchase.

According to the popular story, three key events in the early 1980s rescued personal computing from its greasy hobbyist image (with a vaguely countercultural air about it) and made it into an economic powerhouse. The first was IBM's hugely successful introduction of the "IBM PC," which set the paradigm for what a personal computer was, almost completely eliminating all others (other players in the PC market made "PC clones" after this). IBM brought the respectability of established business, and poured marketing money into making the "PC" an indispensable part of small business operations.

The second—much more in keeping with the promethean mythology of personal computing—was Apple Computer's 1984 introduction of the Macintosh, branded "the computer for the rest of us." Apple's early marketing of the Mac lives on in popular history. With it, they simultaneously defined a legitimate alternative market for personal computers—easily reduced to "creative" types—and cemented IBM's mainstream business market. In caricaturing IBM's "big brother" image, Apple undoubtedly helped reinforce IBM as the market leader for serious computing. The paradigm—or genre—that IBM established was shored up by Apple's circumscription of the margins. This division lives on in the popular imagination to this day.

The third event was not so much a single event as a clear trend: video games' growth into an enormous, lucrative market. Atari was the market leader here in the early 1980s, dabbling in sales of both personal computers and dedicated video-game machines, but more importantly with the design and distribution of the games themselves, to whatever platform. If there was a question—what are they for?—surrounding personal computers, there was no such worry about video games.
Regardless of the ambiguity of what personal computers might be good for, the market and industry surrounding them grew with phenomenal energy through the 1980s and into the 1990s; a new breed of computer millionaire emerged in Silicon Valley, around Boston's "Route 128," and in several other centers in North America (notably Seattle, in Microsoft's case). There was money to be made, and between the innovating potential of digital technology and the gradually growing demand for it in the marketplace, personal computing flourished. More and more businesses, schools, and individuals bought personal computers; the industry steamed ahead with new and innovative uses for them: productivity software, educational software, games, and now interactive multimedia, graphics, audio and video tools. The "Multimedia PC" of the early 1990s, centered around its CD-ROM drive, pushed the market ahead again, and the growth of a content-based CD publishing industry seemed certain.

The key innovation to emerge in the 1990s, of course, was the World-Wide Web, which first reached public consciousness in 1994 and 1995. Almost overnight, the personal computer's identity shifted from that of productivity tool to information appliance, tapping a world-wide ocean of information; pundits waxed rhapsodic. For educational and personal users (that is, apart from the established office productivity market), the "Web" became the single most important reason to own a computer, and the Web browsing software Netscape Navigator was proclaimed the new "killer app." Netscape Communications Co. raised 1.2 billion dollars in its now-famous Initial Public Offering (IPO) in 1995, sparking a flood of investment reminiscent of the California gold rush of 1849, or at least the Dutch tulip market of the 1630s. Through the late 1990s this new gold rush ran wild, with billions of dollars invested in driving innovation online. When the "tech bubble" finally began to subside (it would be an overstatement to say it burst) in 1999 it left in its wake a landscape littered with new technologies; some useful, many not, some significant, many soon forgotten.

What had become clear was that the paradigm of personal computing had been firmly established throughout western society: a 2005 report, for instance, states that 75% of Canadians have a computer at home; 72% are Internet users. More than 30% get their daily news online (Canadian Internet Project 2005). A 2004 Statistics Canada report states that 97% of Canadian schools were Internet connected, with an average of 5.5 students per connected computer (Statistics Canada 2004). One more important statistic is this one: the number of people online—that is, capable of communicating on the Internet—is one billion, as of late 2005, according to Mary Meeker of Morgan Stanley Research.¹

A billion people makes for a large-scale, complex society by any measure. And yet, our primary means for interacting with this complex environment is the personal computer, a bastard, haywired-together technology born a scant two-and-a-half decades ago to greasy youths in garages in California, sold mostly by consumer-electronics hucksters in the intervening years, and developed largely via gold-rush hysteria. What we've inherited is the PC as generalized interface to a big, scary world out there.

1. Interestingly, the study reports that 36% of those users are located in the Asia-Pacific region, while only 23% are in North America. See Meeker (2005).
But it is significantly underpowered in comparison to the task; I do not mean here that the processing power, the MHz, or the RAM is insufficient—what I mean is that what has become a significant communications medium—a major established genre or paradigm of human expression, communication, and commerce—is built on extremely shaky foundations, and patched up and reinforced over the years with little more than glossy magazine advertisements. A hundred years ago, the exigencies of the book publishing world led printers increasingly to use cheap pulp paper, despite the fact that pulp paper disintegrates into dust within about a century under most conditions. But this is vastly more robust than the state of the personal computer, which threatens to burst asunder for many "users" on almost a daily basis, in the face of quotidian bugs, virulent viruses, overwhelming spam, software piracy, invasion of privacy, pop-up pornography, chat-room pedophilia, and general information overload.

Now, fear and loathing have never been serious impediments to commerce or progress; indeed, they are often powerful drivers. The personal computing market is certainly driven by such forces, and educational computing is no different. Far from "personal" computer users—a collective which, at numbers like those quoted above, is roughly equivalent to "citizens"—being in any kind of control of the digital world, the real battle to control the discourse is fought by large and mighty corporations. Microsoft, for one (and they are certainly not alone in this), has established itself as an immense, indispensable part of the environment by offering to manage the interface between 'users' and the vast, ambiguous, frightening, and complex world of technology and the Internet. That they have been accused on many occasions of being more part of the problem than the solution matters little; Microsoft's marketing genius—a paradigm-defining one—is in understanding and managing just how much or how little consumers want to know, or understand, about what goes on beyond their monitor screens. It is not a stretch to say that all successful technology companies today succeed because they play this particular game well; consider Google's enormously successful management of online content (and the dearth of attendant critique). In education, WebCT, one of the most influential companies in educational technology today, succeeds precisely because of its successful control of the ambiguities and complexities of the environment in which its customers need to work. This is the dominant dynamic of the first decade of the 21st century.

A CRITIQUE OF THE POPULAR MYTHOLOGY

Such is the conventional story of personal computing. This is the mythology of this moment in time, the history which makes sense of the world we live in. It is, of course, only one story, and it is inadequate and indeed obfuscating on several levels. It is helpful to look at the story of personal computing as one emergent within a context of contemporary journalism, advertising, and marketing, for these are the main arenas in which the conventional story has played itself out so far.
To the extent that popular journalism and advertising constitute public discourse, this is in fact and practice our story. But it is not difficult to problematize this. A simple tactic is to look for what is absent. In the first place, there is practically nothing about "computer science" in the story; it plays out as though the formal, academic study of computing (half a century old) did not exist, or perhaps as if this realm were some dusty, antiquated pursuit that we were better to have left behind in the promethean moment of the late 1970s.

The second major absence is that of software. The conventional story, as reported and advertised in newspapers and magazines, and played out in catalogues and showrooms, is overwhelmingly concerned with computer hardware. Software, when it is considered at all, remains in its standard-sized, shrinkwrapped boxes. Personal computing has largely been about personal computers, as artifacts, commodities, toys, gadgets. There is very little about what actually goes on inside these computers, even in the face of the obvious and oft-repeated fact that the wealthiest man in the world, Bill Gates, headed a company that doesn't deal in hardware at all. Somehow, the fetish is entirely physical, and we have come to accept that software is a necessary evil that allows the hardware to work, and which somehow slouches toward its slow improvement. Presumably, it is easier to talk and write about hardware than software. The finer points of chip design are buried deep within the black box—or rather, the shiny exterior (or at least the beige plastic cases) of the machine; the details of software are actually in our faces more than we like to admit, but besides a few trite discourses (GUIs vs. command line; Mac OS vs. Windows), this fails to get the attention that hardware does. When CD-ROMs appeared in the early 1990s, and afterward the Internet, we began to talk about "content" with respect to computers, despite the fact that we rarely speak of digital content in ways that are any different from the content that appears in books or on television. But our conception of "content" is nowhere near sufficient to grasp the significance of software today.

The third conspicuous absence in the conventional story is history itself. The sheer volume of discarded computer hardware suggests an alarming tale which appears now and then amid reports of sending old PCs to Africa, like eyeglasses in the Second Sight project. But nothing is ever said of the volume of discarded effort spent designing, developing, learning, and using the software of years past. With the exception of a persistent genre of old-timers reminiscing about their beloved old version of Word (or WordPerfect, or StarWriter, or whatever—always writers talking about word processors) long past, we give close to zero thought to the decades of evolution of software. The mythology seems to prescribe that the newer is always a straightforward improvement on the older (usually along the lines of more, better, faster, cheaper), and wholesale innovations (the web browser, for instance) are accepted as being born fully formed from the foreheads of their developers. This obsession with the march of the new masks not only the person-years of toil and thought, but also the myriad missed steps and missteps along the way. It masks, fundamentally, the constructivist's cry, "It could have been otherwise."
Impoverished perspectives

The conventional story of personal computing is caught between the twin horns of two popular caricatures of technology: instrumentalism and determinism.

Instrumentalism is the simple and common belief that we create technologies to achieve particular ends, to solve particular problems. The assumption in instrumentalism is that these ends or problems are clearly defined in advance, such that technological solutions can straightforwardly be specified and developed. Instrumentalism further carries with it the assumption that technology is value-neutral, a mere tool in the hands of a purposeful designer or user.

Technological determinism is in some ways the mirror-image of instrumentalism; the determinist perspective holds that technology has a logic of its own: most fundamentally, that progress is inevitable, towards better and better ends (this is the Enlightenment's position) or toward more sinister and oppressive ends (the position of much critical theory and a good deal of latter-day science fiction).

It is easy to pose these two stances against one another, and view the world of technology as a struggle between the two or as a playing-out of a middle ground or compromise. I think it better to see instrumentalism and determinism as commonplace perceptual facets of technological systems, which appear 'naturally' to us in differing circumstances, but which fail in most cases to really focus our attention or provide a useful analytical framework: we look at advertisements for new cell phones that can record movies and download coupons and we muse, "what next?" in a happily determinist frame of mind. We purchase the next iteration of the cheap disposable inkjet printer in a spendthrift instrumentalist mode. And then we wade through mountains of spam in our e-mail in-boxes and curse that the Internet is out of control. What to do? And how could we know anyway, given that our thinking about technology is so circumscribed? We need to remember—despite the constant temptation not to—that how we confront problems and issues today is historically conditioned; we got to this point by way of a specific unfolding of circumstance. But historical awareness is limited; things haven't always been as they are, and they might have been otherwise, but it certainly does not follow that we can simply choose otherwise: to consciously adopt a different position.

Technology is political. It is not a neutral, external realm of human activity separate from political and ethical concerns. Neither is it an 'influence' on the ethical and political, nor are these facets of our lives mere 'influences' on technology. Rather, technology is politics and ethics—beginning right with our difficulty in remembering so. This is a stance which I will elaborate in some detail in the pages that follow. In particular, I want to spotlight this notion with particular attention to computer software, a subset of technology which is more and more shot through our private and public lives. Software has always been political, but today, in the early 21st century, the politics of software have become acute. And while there is an emerging discourse and literature addressing this (e.g., see Lessig 1999; 2002b; Moglen 2000; 2003; Stallman 2001; 2003), it has not reached widespread public attention.
I see this as a crisis facing Western societies (and by extension, everybody else, given the agendas of globalization). The reason for the lack of focus on the politics of software, despite the technological messes that accumulate around us, has to do with the basic ahistoricity in our thinking about technology. My method here is to lead with historicity, so that this moment in time can be framed, and so that the idea of software as politics has some concrete meaning.

THE DIVISION OF LABOUR IN MODERN TECHNOLOGY

Let us begin with a particular question about technology, computers, and software: whose problem is this, anyway? Alternatively, we can ask: who's responsible for this mess?

The common and superficial response, which often bills itself as the humanist perspective, is that the designers and marketers of computing technology are responsible for the technological systems surrounding us. This argument casts our technological dysfunction in either a technological determinist light (Menzies 1989; Bowers 2000) or an instrumentalist one with a determined overclass: the military-industrial complex (Edwards 1996). While these treatments both correctly identify a nastily asymmetrical power dynamic surrounding technology, they run into trouble when they attempt to isolate the problem as external to the lifeworld of ordinary people—that technology is a system put over against 'us.' The characterization of computer technology as having been imposed upon society by an engineer/capitalist elite neatly divides up the responsibility for our ills: someone (industry, salesmen, zealous technologists, etc.) is to blame, and the analysis ends there. The resulting responses tend to impotence: whether we should enact laws (limiting corporate power; protecting individual privacy; protecting consumers' rights; regulating the Internet; etc.), or 'resist' technology (don't carry a cellphone; chop up your credit card; refuse to upgrade your word processor; computers out of the classroom), or write critiques and stern warnings about the fate of the world. These are all commonplace ideas; we all engage in many of these tactics—I certainly do.

There is an underlying and foundational trope lurking herein, though, and it hamstrings everything we might like to do about our technological predicament. The assumption is, broadly framed, that technology is an external force on our lives, driven by someone else's agenda. More specifically put, the assumption is of a division in society: a division of labour between experts and end-users (or producers and consumers). We willingly and unproblematically learn this division, choose it, take it on, and reproduce it. We reify it in our buying habits, in our curriculum plans, in our legislation, in our discourses. I would not claim that these power imbalances aren't very real, but we are doomed to live by their terms when we take on the roles assigned to us. But, of course, we're also stuck with them, and changing the world is not just a matter of changing one's shirt.

Now, it is not my intent to go into a lengthy discussion of hegemony or domination here. My purpose is rather to do the history of how we got to this particular place. In the hermeneutics of the historical process are—I optimistically believe—the generative possibilities. What can we know about the division of labour in information technology, between experts and end-users?
C. P. Snow's famous "two cultures" of the sciences and the humanities only begins to frame the division as it presents itself here; the computer age brings with it an economic and political apparatus that institutionalizes the producer/consumer divide on top of the expert/end-user division.

The tension between expert knowledge and public dialogue is age-old. Latour identifies the origins of it with Socrates in Plato's Gorgias, in which politics is (mis)represented as a contest of right vs. might (Latour 1999, p. 219ff). Latour uses this as an analogy for our popular conception of the relationship of science to politics. Instead of calling for a science free of political influences, Latour wants a "politics freed from science"—that is, freed from the kind of political shortcutting it is often called upon to do: "a substitute for public discussion" (p. 258). Long has "Science" (Latour uses the capital "S" in this rhetorical characterization) been called upon to end the messiness of actual political discussion: the introduction of the "impersonal laws" of nature as an antidote to the irrationalism and ambiguity of human judgement, and thus opposed to Politics as such. Latour presents an alternative "science" (without the capital) which involves the proliferation and extension of complex collectives of reason, argumentation, and agency which are political discourse. Latour's capital-S Science (or Reason) is thus a conventional tool for silencing one's opponents, but he reminds us that this version of science is not the whole story, and that there is no analytically convenient "inside" and "outside" of science (1987, p. 145ff). Latour is concerned too with the division of labour.

Complicating this account, however, is the work of technology theorist Arnold Pacey, who wrote on the "culture of expertise": Pacey offers an argument for a specially situated kind of technological determinism or "technological imperative" at work within groups of engineers and technologists, such that an airplane like the French/British Concorde would never have emerged apart from a drive for engineering excellence in itself. Pacey cites Freeman Dyson on nuclear weapons: that their existence is in part due to the telos of the culture of expertise, that they are "technically sweet" projects that appeal to physicists, as opposed to the hard, hacked-out engineering of conventional weapons (Pacey 1983, p. 43). What is amiss in this kind of a world, Pacey suggests, is the compartmentalization of our values within various spheres of activity (public, private, men's, women's, educational, professional, etc.), and that a solution might be a broad-based effort to break down these compartmentalized traditions and virtues.

What this means to me is that the divide between inside and outside, or between expert and everyman, is not one that can merely be undone or unbelieved in; rather, it is a cultural phenomenon that we are dealt. Latour's two characterizations of S/science are, historically speaking, both actual, and actively in tension. Pacey's observations point to the fact that we continue to reify the poles of the division. The more we believe in them, the more real they become. The longer we believe in end-users, the more distant we become from the expert pole. The result is a disastrous commonplace sensibility about technology's place in society.
Within education, computational-literacy advocate Andrea diSessa described what he calls the "culture gap," characterized by an "anti-learning bias" on the part of technologists, an insistence on superficial transparency of computing artifacts, and a deep-seated expectation that only some individuals can assume (professional) positions of knowledge and authority—a notion which brings with it distrust of broad-based competence (diSessa 2000, p. 225ff, 237). This is but one particular articulation.

Similarly, much of the literature on computing that falls roughly within the Science, Technology, and Society (STS) rubric (e.g., Turkle 1995 and others in Shields' 1995 volume) is unfortunately inscribed in the stereotypical humanist vs. engineer division. The result is analysis that says 'the engineers only thought of things from the engineering perspective, and have imposed solutions on us that fail to take into consideration what we humanists need.' While there is undoubtedly a grain of truth expressed in this, it is but vanity to construct this as a story of oppression from above. To make it into such a moral position does considerable violence to the discourses contextualizing the engineers' work—as if they were working in isolation. Turkle's 1995 analysis is a case in point: she reports on MIT's Project Athena, an enormous effort to computerize an entire campus in the 1980s. Turkle cites Project Athena's ban on the programming language BASIC, followed by a reversal under considerable pressure, but with the condition that BASIC would remain officially unsupported. Her account points this out as an example of the arrogance of the systems and administrative people in the face of 'real world' needs of users. The critique, however, is predicated on the assumption of an insider/outsider split: engineers vs. humanists; developers vs. end-users; experts vs. regular folks.

But such divisions, no matter how commonplace or self-evident they may appear (reification works thusly), are caricatures; they fold into non-existence the untold hours of labour that go into the design and maintenance of systems, the extensive and complex networks of discourse and practice that must be created and sustained in order for such systems to ever exist, and the deep points of connection that actually bind together the people and machines and systems on both sides of the apparent divide.

There are, luckily, alternative conceptions. Two of the strongest, from the science studies literature, and which serve as touchstones for me, are the writings of Bruno Latour and Donna Haraway. Both thinkers are bloodhounds on the trail of taken-for-granted boundaries. Latour's boundaries are those that separate science from politics, society from nature, human from nonhuman; these are mythological artifacts of the 'modern' age, Latour argues (1987; 1993). Latour counters that there is no inside or outside of science, only a proliferation of hybrids. Donna Haraway's target boundaries similarly are those which purportedly guarantee purity of a privileged conception of humanity. Her powerful contention (1991) is that the cyborg is us: we are always already compromised and impure, hybrids political and natural, material and semiotic, technical and moral.
The situated stance taken by Haraway is significant: we are not in a position to re-invent the world wholesale, but rather to fight for a fairer distribution of power, one not so overwhelmingly dominated by entrenched institutional power bases. This is not an all-or-nothing struggle, but rather a tactical strategy to spread the fruits of technoculture around more evenly. Technology, especially in its digital form, need not only be the instrument of established power to maintain and extend itself. That's what I mean by technology—and especially software—being political: it is an active politics that works bidirectionally; it is generative, as the Foucauldians have pointed out. There is actually a strong tradition of this sort of work and thinking in computing, in the academy, in education. And indeed, it is my contention that Alan Kay's body of work speaks very clearly to this issue.

HOW EDUCATION IS COMPLICIT

The conceptual rift between 'experts' and 'end-users' is thriving in our educational institutions. The whole field of educational technology is based on a confused discourse about ends and means; it reifies experts and end-users, technological means and pedagogical ends, as if these were pre-existing categories. And in a sense they are, as the academic world is similarly predicated on this division of labour: researcher vs. researched, subject vs. object. The technological aspect then is symptomatic of a larger unquestioned division between experts and non-experts, making it a structural or systemic issue. The sheer volume of history—of tradition and culture—underlying this division of labour issue is immense: it goes right to the core of modernism and capitalism and science and our very way of being in the world. It has everything to do with how we inscribe the boundaries of technoscience—the structure of the economy, our construction of gender and class, our expectations about freedom and choice, our acquiescence and resistance to globalization and corporatization, our expectations about public vs. private vs. common.

Within educational technology, the division of labour manifests itself along a number of different axes. In the first and most obvious case, educational institutions' uncritical acceptance of industry-originated 'solutions' and large-scale buy-in to marketing campaigns contribute substantially to the establishment of the subject positions which disempower pretty much everybody involved: students, teachers, and the schools themselves. I will not go into this at length here, as the general phenomenon of the corporatization of schools has been dealt with elsewhere (e.g., Bromley & Apple's 1998 volume, Education/Technology/Power). The superficial appeal of industry-based solutions is easy enough to see: the difficult initial design and implementation work are taken on by an industry 'partner,' thereby freeing the school or college to concentrate on its core business: education. Of course, what's missing from this particular division of labour is any developed sense that the one may have an impact on the other: the 'problem' to which the 'solution' is an answer is one pre-defined by the vendor. A recent example is Apple Computer's offering sets of wireless laptops to educational institutions; it is not at all clear what problem this solution actually addresses.
The superficial answer was that learners would be freed from computer labs, but Apple's wireless-laptop scheme looked remarkably like computer labs on wheels: access to machines still had to be booked, hardware locked down to prevent theft, and, most importantly, the machines were still (ironically) 'time-shared,' as computer labs have been for thirty or forty years.

A second manifestation of the expert/end-user divide is perhaps best articulated with reference to the "miracle-worker" discourse:

This apparent and longstanding lack of success in reaching implementation goals with respect to uses of digital tools in schools has created a specific niche for the working of miracles—the provision of digitally mediated environments within which to re-mediate the production of knowledge in educational contexts... Within such a context, the miracle worker's effectiveness is measured by their capacity to spin narratives of success against all odds by providing tools, but more often discourses, that appear to transform students' engagements with information. (de Castell, Bryson, & Jenson 2002)

The "miracle worker" discourse reinforces the machinery of desire that is central to the marketing efforts of high-tech vendors. Seen on this level, that the individual 'miracle worker' is predictably non-duplicatable—or at least 'unscalable'—is unfortunately almost the point. While we love to love those who take the initiative to make a real difference in their schools and who personally drive innovation, the too-common reality is that when these few individuals burn out, retire, or take advantage of their technical expertise and get a higher-paying job, what is left is a reminder of how wide the gap really is, setting the stage for the next round of marketing campaigns.

In a third manifestation, the trend toward online distance education, "distributed learning," "learning objects," and so forth establishes an even more cynical (or at least 'closed') position, quite comparable to the textbook publisher, in which all knowledge and authority is vested with the publisher/information source and the model is a simple instructionist one of transferring this information to the user. As with the solution-provider discourses, the information-provider discourse makes plenty of sense in terms of business models, but not so much for learning. The "distance-ed" variety of this discourse is the centralized version, while the "learning objects" version is a distributed market economy; either way, the educational process is one-way, and reliant on an 'impoverished' recipient.

A fourth manifestation of the expert/end-user divide within the educational environment may be more damaging than any of the above: in this case, the critical faculties of the educational establishment, which we might at least hope to have some agency in the face of large-scale corporate movement, tend to actually disengage from the critical questions (e.g., what are we trying to do here?) and retreat to a reactionary 'humanist' stance in which a shallow Luddism becomes a point of pride. Enter the twin bogeymen of instrumentalism and technological determinism: the instrumentalist critique runs along the lines of "the technology must be in the service of the educational objectives and not the other way around."
The determinist critique, in turn, says, 'the use of computers encourages a mechanistic way of thinking that is a danger to natural/human/traditional ways of life' (for variations, see Davy 1985; Sloan 1985; Oppenheimer 1997; Bowers 2000).

Missing from either version of this critique is any idea that digital information technology might present something worth actually engaging with. De Castell, Bryson & Jenson write:

Like an endlessly rehearsed mantra, we hear that what is essential for the implementation and integration of technology in the classroom is that teachers should become "comfortable" using it. [...] We have a master code capable of utilizing in one platform what have for the entire history of our species thus far been irreducibly different kinds of things—writing and speech, images and sound—every conceivable form of information can now be combined with every other kind to create a different form of communication, and what we seek is comfort and familiarity? (2002)

Surely the power of education is transformation. And yet, given a potentially transformative situation, we seek to constrain the process, managerially, structurally, pedagogically, and philosophically, so that no transformation is possible. To be sure, this makes marketing so much easier. And so we preserve the divide between 'expert' and 'end-user'; for the 'end-user' is profoundly she who is unchanged, uninitiated, unempowered.

The result is well documented: scores of studies show how educational technology has no measurable effect on student performance. The best articulation of this is surely Larry Cuban's (1986) narration of the repeated flirtation with educational technology; the best one-liner, the title of his article, "Computers Meet Classroom: Classroom Wins" (1993). What is left is best described as aporia. Our efforts to describe an instrumental approach to educational technology leave us with nothing of substance. A seemingly endless literature describes study after study, project after project, trying to identify what really 'works' or what the critical intercepts are or what the necessary combination of ingredients might be (support, training, mentoring, instructional design, and so on); what remains is at least as strong a body of literature which suggests that this is all a waste of time.

But what is really at issue is not implementation or training or support or any of the myriad factors arising in discussions of why computers in schools don't amount to much. What is really wrong with computers in education is that for the most part, we lack any clear sense of what to do with them, or what they might be good for. This may seem like an extreme claim, given the amount of energy and time expended, but the record to date seems to support it. If all we had were empirical studies that report on success rates and student performance, we would all be compelled to throw the computers out the window and get on with other things. But clearly, it would be inane to try to claim that computing technology—one of the most influential defining forces in Western culture of our day, and which shows no signs of slowing down—has no place in education. We are left with a dilemma that I am sure every intellectually honest researcher in the field has had to consider: we know this stuff is important, but we don't really understand how. And so what shall we do, right now?
It is not that there haven't been (numerous) answers to this question. But we have tended to leave them behind with each surge of forward momentum, each innovative push, each new educational technology "paradigm," as Timothy Koschmann put it.²

I hereby suggest that the solution—not to the larger question of what should we do, right now, but at least to the narrower issue of how we can stop being so blinded by the shiny exterior of educational technology that we lose all critical sensibilities—is to address the questions of history and historicism. Information technology, in education as elsewhere, has a 'problematic' relationship with its own history; in short, we actively seek to deny its past, putting the emphasis always on the now and the new and the future. The new is what is important; what happened yesterday is to be forgotten, downplayed, ignored. This active destruction of history and tradition—a symptom of the "culture of no culture" (Traweek 1988, p. 162) that pervades much of technoscience—makes it difficult, if not impossible, to make sense of the role of technology in education, in society, and in politics.³ We are faced with a tangle of hobbles—instrumentalism, ahistoricism, fear of transformation, Snow's "two cultures," and a consumerist subjectivity.

Seymour Papert, in the midst of the backlash against Logo in schools in the mid-1980s, wrote an impassioned essay that called for a "computer criticism," in the same sense and spirit as "literary criticism." In that article, Papert wrote of

...a tendency to think of "computers" and "Logo" as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations—people and cultures—to a secondary, facilitating role. The context for human development is always a culture, never an isolated technology. (Papert 1987, p. 23)

An examination of the history of educational technology—and educational computing in particular—reveals riches that have been quite forgotten. There is, for instance, far more richness and depth in Papert's philosophy and his more than two decades of practical work on Logo than is commonly remembered. And Papert is not the only one. Alan Kay's story, roughly contemporaneous and in many respects paralleling Papert's, is what follows. Since this story is not widely known, let me begin with a brief and admittedly rough sketch of the origins, general direction, and some of the outcomes of Kay's work.

2. Koschmann's (1996) article, "Paradigm Shifts and Instructional Technology," suggested that there had in fact been a series of incommensurable paradigms (in Kuhn's sense) governing the field; Koschmann was setting up "computer-supported collaborative learning" as the new paradigm.

3. We as a society are ignorant of these issues because, in a sense, they can not be made sense of. MacIntyre (1984) makes the much larger case that morality and ethics cannot be made sense of in the modern world, because our post-enlightenment inheritance is but the fragments of a tradition within which these could be rationalized.

ALAN KAY AND THE DYNABOOK VISION

Alan Curtis Kay is a man whose story is almost entirely dominated by a single vision. The vision is that of personal computing, a concept Kay began to devise in the late 1960s while a graduate student at the University of Utah.
It is not an overstatement to say that Kay's vision has almost single-handedly defined personal computing as we know it today. Neither is it an overstatement to say that what he had in mind and what we've ended up with are very different. The story of that vision—how it has managed to manifest itself on all our desks (and laps) and also how far this manifestation remains from its original power and scope—is the story I mean to tell here. It is a story that deviates from the popular or conventional story of computing in a number of interesting ways. And, while this story is well known, it is rarely told outside of the computer science community, where Kay's contributions are foundational. What is less remembered is that Kay's contributions to computer science were driven largely by an educational vision for young children.

Alan Kay was born in the early 1940s in New England, and grew up as something of a child prodigy; he proudly reports being a precocious—difficult, even—child in school, arguing with his elementary school teachers. He studied biology and mathematics in university, but dropped out and played jazz guitar in Colorado for a few years in the early 1960s; then, on the strength of an aptitude test, he joined the US Air Force and became a junior programmer. Having thus discovered computers, he decided to finish his undergraduate degree and go to grad school in 1966. He chose the University of Utah, where computer graphics pioneer Dave C. Evans had set up one of America's first computer science programs. At Utah, Kay's career took off like a rocket, propelled by the timely meeting of a wildly creative mind with the fledgling American computing research program—Kay was only the seventh graduate student in computing at Utah (Hiltzik 1999, p. 86ff).

To appreciate the difference between computing as most people encounter it today—personal laptop computers with graphical user interfaces, connected wirelessly to a global Internet, using the computer as an access and production environment to media—and what computing was in the mid 1960s—expensive and delicate mainframe computers staffed by scientists, with little that we would recognize as "user interface" (even time-sharing systems were a radical innovation at that time)—is to roughly frame Kay's contribution to the field. Of course, he did not accomplish this alone, but his vision—dating back to his MSc and PhD theses at Utah (see Kay 1968) and strongly driving the research of the 1970s—is so central, and so consistent, that it is arguable that without Kay, the face of our everyday involvement with digital technology would be immeasurably different today.

Kay is in one sense an easy study, in that he has remained consistently on point for thirty-five years, over which time he has contributed a large collection of reports, articles, chapters, and postings to online fora, as well as a large number of lectures and presentations, many of which have been recorded and made widely available. In particular, Kay's writings and talks in recent years provide valuable reflection on his work and writings from the 1960s and 1970s; in all, a rich archive for the historian. What I find most important about Kay's oeuvre is, I believe, summarizable in a few brief (though rather expansive) points.
These set the stage for the story I will attempt to tell here:

• Kay's vision (circa 1968) that in the near future, computers would be the commonplace devices of millions of non-professional users;

• Kay's realization that this kind of mass technological/cultural shift would require a new literacy, on the scale of the print revolution of the 16th and 17th centuries;

• his belief that children would be the key actors in this cultural revolution;

• his fundamental approach to the design challenge presented by this shift being one of humility, and thus that the cardinal virtues would be simplicity and malleability, such that these "millions of users" could be empowered to shape their own technological tools in accordance with the needs that they encountered;

• Kay's insistence on a set of architectural principles inspired by the cell microbiology and complex systems theory of the post-war period: how the complexity of life arises from the relatively simple and common physics of the cell.

There are many ways in which Alan Kay's vision of personal computing has indeed come to pass. In reading his manifesto from 1972 ("A Personal Computer for Children of All Ages"), there is little that sounds either dated or far-fetched. Most of the implementation details alluded to in his writings have in fact become commonplace—Kay was unable to predict the dynamics of the marketplace for personal computing, and so his timelines and price points are both underestimated. It is indeed clear that his vision of a new "literacy" far exceeds the reality on the ground today. My contention is that this is the piece of his vision which is the most critical; the need for a digitally mediated literacy is greater now than ever, and for reasons which Kay could hardly have foreseen in the early 1970s.

From ARPA to Xerox PARC

Alan Kay's story begins with the ARPA project—the US Department of Defense's Advanced Research Projects Agency, a Pentagon funding programme in part inspired by the Cold War and the perceived threat to American technological superiority that was raised with the launch of the Soviet satellite, Sputnik, in 1957. In ARPA is the root of the popular conception that computers have sprung from the military; the vast majority of computing research in the formative decade of the 1960s was funded by ARPA's Information Processing Techniques Office (IPTO). It is easy to take the significance of this funding formula too far, however, and conclude that computers were devised as weapons and that digital technology is born of institutionalized violence and domination. The story is quite a bit more subtle than that: the administrators of ARPA-IPTO research funds were not military men, but civilians; not generals but professors (NRC 1999; Waldrop 2001). It is perhaps better to think of ARPA as a Cold War instrument of American techno-cultural superiority, rather than a military programme. The funds flowed through the Pentagon, but the research was astonishingly open-ended, with the majority of the funding flowing to universities rather than defense contractors, often in the absence of formal peer-review processes (NRC 1999, pp. 101-102).
In fact, to look deeply at ARPA and its projects is to see an ironic case—rare, but certainly not unique (AT&T and, as we shall see, Xerox Corporation, played host to similar development communities)—of large-scale public works being committed in the name of capitalist, individualistic, American ideology. The public funding that went into ARPA projects in the 1960s no doubt vastly outstripped their Soviet counterparts; who, then, had the greater public infrastructure?

The men who directed the ARPA-IPTO have come to be known by their reputation as great thinkers with expansive ideals for the common good, and for their open-ended funding policies that focused on people rather than specific goals. The first and most celebrated director, JCR Licklider, oriented the IPTO to the pursuit of interactive computing and inter-networking, concepts which were nearly science fiction in 1960, but which today are foundational to our dealings with digital media.4 After Licklider came Ivan Sutherland, known as the "father of computer graphics;" Robert Taylor, who would go on to help run Xerox' research lab in 1970; and Lawrence Roberts, who in the late 1960s oversaw the implementation of the ARPAnet, the prototype and direct ancestor of today's Internet. The advent in 1970 of the Mansfield Amendment, which required Pentagon-funded research to be more responsible to military ends, is seen by many (Kay 1996a, p. 525; Waldrop 2001, p. 325) as the end of an era—an era in which the basic shape of today's digital technological landscape was being laid out. Of the spirit of the ARPA project in the 1960s, Kay reflected:

It is no exaggeration to say that [ARPA] had "visions rather than goals" and "funded people, not projects." The vision was "interactive computing as a complementary intellectual partner for people pervasively networked worldwide." By not trying to derive specific goals from this at the funding side, [ARPA] was able to fund rather different and sometimes opposing points of view. (Kay 2004a)

4. Licklider wrote an early research manifesto called "Man-Computer Symbiosis" which laid out a blue-sky vision of what computing could become, one in marked contrast to the then-dominant trend to artificial intelligence research. See Licklider (1960); Wardrip-Fruin & Montfort (2004).

The legacy left by the 1960s' ARPA project is rich, and includes the Internet, time-sharing systems, computer graphics (both 2D and 3D), hypertext and hypermedia, and networked collaboration. More important to the story at hand is the establishment of a community of computing researchers in the United States, from universities like Utah, UCLA, Stanford, MIT, and Carnegie-Mellon. At these universities, fledgling computing departments and programs had received early and substantial research funding, and the ARPA-IPTO directors made substantial efforts to bring these researchers together at conferences and retreats. The result was, by the late 1960s, a tightly knit community of American computer science research.

Alan Kay, who had his first encounter with computer programming while on a stint in the US Air Force's Air Training Command in 1961, went to the University of Utah to pursue a Masters degree.
There he met and studied with Dave Evans and Ivan Sutherland, who were pioneering research in computer graphics. Kay spent the years 1966-1969 at Utah, working on and around ARPA-funded projects. It was here, in his MSc and PhD work, that he began to formulate a vision for a personal computer. Kay has referred to Sutherland's work on computer graphics as "the first personal computer" because Sutherland's project—Sketchpad—was the first interactive graphics program as we would recognize it today: a user sat in front of a display and manipulated the images on a screen by means of a pointing device (in this instance, a pen) and keystrokes (Sutherland 1963). This required that a single user monopolize the entire computer—in the 1960s an enormously extravagant thing to do.

The inspirational impact of work like this should not be understated, especially where Alan Kay is concerned. Kay's account of the ARPA years is of one mind-blowing innovation after another—from Sutherland's elegant drafting program to Doug Engelbart's famous 1968 demo at the Fall Joint Computer Conference in San Francisco, which showed the world a working model of hypertext, video conferencing, workgroup collaboration, and graphical user interfaces, literally decades before these concepts became embedded in the public imagination (Engelbart & English 1968/2004; Waldrop 2001, pp. 297-294). Kay's research at Utah focused on the design of a computing system called the FLEX Machine, which combined the interactive graphics ideas of Sutherland's Sketchpad with leading programming language concepts of the day and put them in a package that could sit on a desk. But Kay's work at Utah was very much coloured by interaction and collaboration with the community of ARPA researchers.

One of the greatest works of art from that fruitful period of ARPA/PARC research in the '60s and '70s was the almost invisible context and community that catalysed so many researchers to be incredibly better dreamers and thinkers. That it was a great work of art is confirmed by the world-changing results that appeared so swiftly, and almost easily. That it was almost invisible, in spite of its tremendous success, is revealed by the disheartening fact today that, as far as I'm aware, no governments and no companies do edge-of-the-art research using these principles. (Kay 2004a)

When ARPA's funding priorities shifted to military applications in 1970, this community saw a crisis of sorts; where could they continue their work in the manner to which they had become accustomed?5 As the story goes, by historical accident, Xerox Corporation, as part of a shift in upper management, wanted to establish a research lab to ensure its continued domination (Hiltzik 1999; Waldrop 2001, p. 333ff). As it turned out, former ARPA-IPTO director Robert Taylor was hired on at Xerox to establish the lab and hire its researchers. Taylor knew who he wanted, by virtue of the community of researchers he had known from his ARPA work. And, due to the circumstances of the funding landscape of the day, he had his pick of the leading researchers of the 1960s. The result, Xerox' Palo Alto Research Center (PARC), was staffed by a "dream team" of talent.
Former PARC researcher Bruce Horn reflected, "PARC was the Mecca of computer science; we often said (only half-jokingly) that 80 of the 100 best computer scientists in the world were in residence at PARC" (Horn, n.d.). Alan Kay was one of the researchers that Taylor courted to be part of the Xerox PARC team, and in line with the open-ended ARPA policy of the 1960s, Kay's agenda at Xerox was also open-ended. He took the opportunity to use his new position to advance the work he had begun at Utah on the development of a personal computer.

5. Young computer scientists in the 1960s were as attentive as any to the cultural movements of the day; see John Markoff's (2005) What the Dormouse Said: How the 60s Counterculture Shaped the Personal Computer Industry, for this treatment.

It was in fact impossible to produce something like Kay's desktop-oriented FLEX Machine given the hardware technology of the late 1960s, and as such Kay's early work was realized in various forms on the rather larger computers of the day. But to reduce the FLEX Machine concept to simply that of a graphics-capable system that could sit on a desk (or even, ultimately, a lap) is to miss much of Kay's point. More fundamental to Kay's vision was a novel and far-reaching conception of computing architecture, and the FLEX Machine research is better positioned as an early attempt to articulate this. To explain this, let me delve into Sutherland's Sketchpad, a system which in Kay's view has not been equalled in the nearly four decades since. The overt concept—an interactive computing system for drawing and manipulating images—has of course been built upon, and today designers, illustrators, draftspeople, and indeed anyone who creates images with a computer uses a system which borrows from the general tradition established by Sketchpad. But integral to Sutherland's original system was an architecture in which "master" drawings could be used to create "instance" drawings, with the parent-child relationship between such entities preserved, so that changes made to the master (or prototype) would be reflected in any instances made from it. It is difficult to express the importance of this in so many words,6 but this concept is representative of a way of thinking about the relationship between the part and the whole which underlies all of Kay's work and contributions.

6. There is, luckily, video available of Sutherland using the Sketchpad system. See Wardrip-Fruin & Montfort (2004).
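The mechanics of the master/instance idea can be suggested in a few lines of present-day Python (a loose sketch for illustration only; Sketchpad long predates any such language, and none of these names are Sutherland's). An instance holds no copy of its master's properties, but defers to the master for anything it has not overridden, so edits to the master show through in every instance:

    class Master:
        """A prototype drawing; its named properties are shared by all instances."""
        def __init__(self, **props):
            self.props = props

        def make_instance(self):
            return Instance(self)

    class Instance:
        """An instance defers to its master for any property not overridden locally."""
        def __init__(self, master):
            self.master = master
            self.local = {}

        def get(self, name):
            return self.local.get(name, self.master.props.get(name))

    arrow = Master(shape="arrowhead", colour="black")
    a1, a2 = arrow.make_instance(), arrow.make_instance()
    arrow.props["colour"] = "red"              # change the master drawing...
    print(a1.get("colour"), a2.get("colour"))  # ...and both instances follow: red red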
At the same time that Kay was introduced to Sutherland's work, he was also introduced to a programming language called Simula, the work of a pair of Norwegian researchers. Kay recognized that the "master" and "instance" relationship in Sketchpad was very similar to the way the Simula language was arranged.

This was the big hit, and I have not been the same since. I think the reason the hit had such impact was that I had seen the idea enough times in enough different forms that the final recognition was in such general terms to have the quality of an epiphany. My math major had centered on abstract algebras with their few operations applying to many structures. My biology major had focused on both cell metabolism and larger scale morphogenesis with its notions of simple mechanisms controlling complex processes and one kind of building block being able to differentiate into all needed building blocks. The 220 file system, the B5000,7 Sketchpad, and finally Simula, all used the same idea for different purposes. Bob Barton, the main designer of the B5000 and a professor at Utah, had said in one of his talks a few days earlier, "The basic principle of recursive design is to make the parts have the same power as the whole." (Kay 1996a, p. 516)

7. The Burroughs B220 and B5000 were early computers Kay had encountered while working as a programmer in the US Air Force in the early 1960s.

This is the first of the "big ideas" that comprise Alan Kay's work; we shall encounter several more.

Toward the Dynabook

Kay reports that his personal trajectory was significantly altered by a visit to see Seymour Papert's research group at MIT in 1968. At that time, Papert, Wally Feurzeig, and Cynthia Solomon were conducting the initial research on exposing schoolchildren to computers and programming with the Logo language, which Feurzeig had designed. Papert's research involved the now-famous "turtle geometry" approach, which suggested that children could more effectively bridge the divide between concrete and formal cognitive stages (from Jean Piaget's developmental schema) via a computational medium (Logo) which allowed them to manipulate mathematical and geometric constructs concretely (Papert 1980a; 1980b). What impressed Kay was not so much this insight about cognitive styles, but that children using Logo could reach farther with mathematics than they could otherwise. Kay wrote:

One of the ways Papert used Piaget's ideas was to realize that young children are not well equipped to do "standard" symbolic mathematics until the age of 11 or 12, but that even very young children can do other kinds of math, even advanced math such as topology and differential geometry, when it is presented in a form that is well matched to their current thinking processes. The Logo turtle with its local coordinate system (like the child, it is always at the center of its universe) became a highly successful "microworld" for exploring ideas in differential geometry. (Kay 1990, p. 194)

In what would prove the beginning of a still-ongoing collegial relationship with Papert, Papert's insights about children and computers, in combination with Kay's own insight that computers would likely be much more numerous and commonplace by the 1980s, led to the crystallization of his thinking:

This encounter finally hit me with what the destiny of personal computing really was going to be. Not a personal dynamic vehicle, as in Engelbart's metaphor opposed to the IBM "railroads," but something much more profound: a personal dynamic medium. With a vehicle one could wait until high school and give "drivers ed," but if it was a medium, it had to extend to the world of childhood. (1996a, p. 523)

Kay was immediately seized by this idea, and on the plane back from Boston he drew up the basis for the vision of personal computing he would pursue thereafter. Kay called it the Dynabook, and the name suggests what it would be: a dynamic book. That is, a medium like a book, but one which was interactive and controlled by the reader.
It would provide cognitive scaffolding in the same way books and print media have done in recent centuries, but, as Papert's work with children and Logo had begun to show, it would take advantage of the new medium of computation and provide the means for new kinds of exploration and expression.

Kay, now at Xerox PARC, began to sketch out what the Dynabook would look and act like. Early models (in cardboard) suggest devices not unlike the desktop and laptop computers we know today. Kay noted that he was directly inspired by (microchip manufacturer Intel's founder Gordon) Moore's Law, which states that, due to predictable advances in the miniaturization of chip manufacturing, available computing power doubles roughly every 18 months. Given this kind of development timeframe—a decade of such doubling implies roughly a hundredfold increase—Kay foresaw that by the 1980s, the sorts of things he was able to accomplish in his work on the FLEX Machine would indeed be possible on small, even portable devices.

Again, however, it is important to sidestep the temptation to reduce Kay's vision to a particular conception of a hardware device or set of features. The deep levels of his research were aimed at coming up with ways in which people—not computer scientists, but schoolchildren, after Papert's example—could interact meaningfully with digital technology. In the 1960s, computers were still monolithic, vastly expensive machines; leading research of the day was aimed at the development of "time-sharing" systems which would allow multiple users to simultaneously use a large computer by connecting via a terminal—this was profoundly not "personal" computing. Despite the economic and logistical obstacles, Kay and his newly established Learning Research Group at Xerox PARC wrestled to come up with a new model of how people could interact with computing technology. Kay's reflections on the challenge give some sense of the scope of the task they set themselves:

For example, one would compute with a handheld "Dynabook" in a way that would not be possible on a shared main-frame; millions of potential users meant that the user interface would have to become a learning environment along the lines of Montessori and Bruner; and needs for large scope, reduction in complexity, and end-user literacy would require that data and control structures be done away with in favor of a more biological scheme of protected universal cells interacting only through messages that could mimic any desired behaviour. (Kay 1996a, p. 511)

This was research without precedent in the early 1970s—there were no existing models of computing or user interface or media that Kay and his group could follow. In a sense, it is interesting to think of the work done in the 1960s and 1970s as being groundbreaking precisely because of the lack of existing models. Kay recalls the number of innovative conceptual leaps that Ivan Sutherland's Sketchpad project made; asked later how this had been possible in 1963, Sutherland reflected that he "didn't know it was hard." What Kay and his colleagues seem to have been aware of is the sense in which they were in fact constructing whole new ways of working:

...we were actually trying for a qualitative shift in belief structures—a new Kuhnian paradigm in the same spirit as the invention of the printing press—and thus took highly extreme positions that almost forced these new styles to be invented. (1996a, p. 511)
The analogy of the printing press is one that bears more examination, if for no other reason than that Kay himself has extended it. If the invention of the digital computer can be compared with the invention of the printing press, then it follows that there is an analogous period following its initial invention in which its role, function, and nature have not yet been worked out. In the history of printing, this period was the late 15th century, commonly called the incunabula, when early printers experimented with ways of conducting their craft and their trade. The earliest printers, like Johannes Gutenberg, created printed works that closely mimicked the work of medieval scribes in design, content, and audience. As Marshall McLuhan (1965) noted, the content of the new media is the old media. But in the 1490s, the Venetian printer Aldus Manutius set up a printing business which, in its exploration of the possibilities of finding a sustainable business model, pioneered and established much of the form of the book as we know it today, in terms of layout, typography, size, and form (Aldus is generally credited with the popularization of the octavo format, which would fit conveniently in a pocket or saddlebag), and, in doing so, defined a new audience and market for printed books (Lowry 1979). Aldus' innovations established the nature of the printed book as we know it today, and innovations in book printing since then have been refinements of Aldus' model, rather than deviations from it. Alan Kay alludes to the example of Aldus in several places in his writings, and it seems clear that even if Kay doesn't necessarily consider his work to be parallel, then at least this is its goal. That we are in an incunabula period in the early evolution of digital computing—as evidenced by the general confusion the topic evinces—is an idea I am completely comfortable with; that Alan Kay's vision of personal computing is analogous to Aldus' pocket-sized books is at least worth consideration. Whether we can say one way or another, at this moment in time, is in part the subject of this dissertation.

Elements of the Dynabook Vision

In a paper presented in 1972, "A Personal Computer for Children of All Ages," Alan Kay spoke of the general qualities of a personal computer:

What then is a personal computer? One would hope that it would be both a medium for containing and expressing arbitrary symbolic notations, and also a collection of useful tools for manipulating these structures, with ways to add new tools to the repertoire. (Kay 1972, p. 3)

Papert's influence is very clear here, especially his famous admonition that children should program the computer rather than the computer programming the children. But we should also pay attention here to Kay's emphasis on the multiple levels of media: that they should represent not just the content, but the tools to act upon the content, and even the means for creating new tools. This sheds some light on the Dynabook metaphor, for books represent not only content which can be extracted (as a shallow definition of literacy might suggest), but also the means to participating richly in a literate culture. "One of the primary effects of learning to read is enabling students to read to learn" (Miller 2004, p. 32). Literacy is indeed what Kay and his team were after.
"I felt that because the content of personal computing was interactive tools, the content of this new authoring literacy should be the creation of interactive tools by the children" (Kay 1996a, p. 544, italics mine). Kay's 1972 paper included a scenario i n w h i c h two nine-year olds, Jimmy and Beth, are playing a video game,8 "lying on the grass of a park near their home." Y o u n g Beth, bored of repeatedly trouncing her classmate, muses about adding gravitational forces to the game i n order to make it more challenging. The rest of the story has the two children seeking out their teacher to help them develop their model of how the gravitational pull of the sun should be integrated w i t h the spaceship controls i n the game. Together, "earnestly trying to discover the notion of a coordinate system," they use something m u c h like the Internet to look up some specifics, and then Beth makes the changes to the physical model coded i n the game. Beth later uses her Dynabook to work o n a poem she is composing, and her father, on 8. The game is the prototypical Spacewar!, which has a.special place in the history of computing. Chapter 1: Introduction 32 an airplane on a business trip, uses his own Dynabook to make voice annotations to a file, and even to download (and neglect to pay for) an e-book he sees advertised i n the airport. Kay was writing science fiction; it was 1972. But the vision is clear enough that we can easily recognize almost all of these elements i n our quotidian computing environment. It is now w i t h i n the reach of current technology to give all the Beths and their dads a "Dynabook" to use anytime, anywhere as they may wish. A l t h o u g h it can be used to communicate w i t h others through the "knowledge utilities" of the future such as a school "library" (or business information system), we think that a large fraction of its use w i l l involve reflexive communication of the owner w i t h himself through this personal medium, m u c h as paper and note- books are currently used. (1972, p. 3) M o s t importantly, Kay was close enough to the cutting edge of computer research to be able to judge just how "within reach" this vision really was. Kay's oft-quoted catchphrase," the best way to predict the future is to invent it," meant that his science-fiction writing was nigh on to a plan for implementation. H i s work at X e r o x through the 1970s was nothing short of the realization of as m u c h of the Dynabook plan as possible at the time. Kay foresaw that, given Moore's Law, that the hardware part of his vision should be feasible w i t h i n a decade or so: the 1980s. The unknown part was the software. So, while m u c h of the famous research at X e r o x P A R C was in producing the first "personal computers" (these were hardly laptops; they were however small enough to squeeze under a desk); Kay's core focus was o n the soft- ware vision; how w o u l d Beth and Jimmy actually interact w i t h their Dynabooks? H o w would millions of users make effective use of digital information technology? W h a t emerged after a few design iterations i n 1971 and 1972 was a programming language called Smalltalk, as i n "programming should be a matter of..." and "children should program in..." The name was also a reaction against the "IndoEuropean god theory" where systems were named Zeus, O d i n , and Thor, and hardly did anything. I figured that "Smalltalk" was so innocuous a label that if it ever did anything nice people would be pleasantly surprised. (1996a, p. 
Smalltalk was (and is) a programming language; the original version—implemented the following year and therefore designated Smalltalk-72—owed much to Papert's Logo in terms of syntax and aesthetics. But its aspirations were considerably greater—in many ways, it was a generalization of the sort of thing Papert was after. Kay went so far as to eschew the "programming language" description, instead calling Smalltalk "a new medium for communication" (Kay & Goldberg 1976).

Kay's research—and Smalltalk itself—got a boost in 1973 when researchers in PARC's Computer Science Lab developed the first iterations of the Alto workstation, which is commonly hailed as the first personal computer.9 Kay and his team called the Alto an "interim dynabook"—not much like Kay's Dynabook vision at all, really; these were about the size of a bar-fridge—but the Alto is the direct precursor of the kinds of personal computers we have today (as opposed, that is, to the personal computers of the late 1970s and early 1980s): it had a bitmapped, graphical display, a pointing device, and, with Smalltalk running on it, the kind of "desktop" environment we now take for granted. In 1973-74, these "interim dynabooks" were capable of Logo-like turtle graphics, but also featured a mouse-and-overlapping-windows interface, animated graphics, and music—in short, "multimedia."

9. Reportedly, over 1500 of these machines were constructed and used by individuals at Xerox in the 1970s (Hiltzik 1999).

At the same time that the Alto was being created, and that Kay's team was working on Smalltalk, other researchers at Xerox PARC were developing ethernet local-area networking, colour graphics software, word processing and desktop publishing, and the laser printer—all commonplace components of "personal computing" today, all fitting neatly into Kay's Dynabook vision, and together comprising what Kay would much later call the "PARC genre" of computing (2004a). The researchers at Xerox PARC created hundreds of Altos and wired them all up with ethernet, installed precursors to office software, and had, by the mid 1970s, an internal prototype of the kind of computer-based office environment that is so commonplace today. None of this would be commercialized or marketed until the following decade, but it was all running at PARC, and there it was allowed to mature into an established pattern of information technology. And, of course, many of the researchers at PARC in the 1970s subsequently left to start or join the companies that now dominate the computing world (3Com, Adobe, Microsoft, Apple, etc.).

To end the story here is to uncritically accept Alan Kay's popular designation as "father of the personal computer." But what is missing from contemporary tellings of this heroic tale is the seed that started it: the vision of children and a new literacy. Notably, the three best available histories of this period (Smith & Alexander 1988; Hiltzik 1999; Waldrop 2001) significantly downplay or pass over the educational vision which provided the focus for Kay's work. It is easy enough to see ourselves as adult, even professional users of desktop computing systems like those pioneered at Xerox PARC, but where are Beth and Jimmy today, "earnestly trying to discover the notion of a coordinate system?"
The fate of the Dynabook

The personal computer did indeed come to be, very much as Kay anticipated. Indeed, I have written the present work on a notebook computer strikingly similar to the one Kay described in 1972. His vision of millions of users of computers is very much the reality today. But, I argue, the Dynabook vision has not been realized; the distinction I am making here is between the idea of portable, networked personal computing devices on the one hand, and the vision of a new literacy and its attendant educational imperative on the other. Between the surface features of a personal computer and Kay's deeper insights about what personal computing should entail lies a vast gulf. The difference is significantly not one of technological innovation; all the individual components of Kay's vision are extant, even mature, technologies today, from the lightweight, wirelessly connected notebook computers equipped with multimedia authoring tools to the kind of simple, modular software models he pioneered (indeed, this part of the vision has been picked up by computer programmers and turned into a whole paradigm of software development). Rather, the difference is a cultural one, wherein what personal and educational computing means to us is vastly different from the vision Kay and his colleagues began to elaborate in the early 1970s. We have inherited all the components, but the cultural framework which ties them together relies on older ideas, and the new computer-mediated literacy that Kay articulated continues to elude us.

The ramifications of this cultural difference are, I argue, vast, and they specifically underlie the problematic relation with digital technology I outlined in the early pages of this chapter. The case I will make in the pages which follow is that our contemporary condition of fear and loathing of digital technology, our narrow and ahistorical perspective on computing, our unquestioned acceptance and reification of the roles of 'expert' and 'end-user', and, most importantly, the compounded manifestation of all of these features in the confused world of educational technology can all be critically addressed—and in some cases remedied—by attention to this particular yet foundational thread in the history of computing. Most of us are, unfortunately, starting from a place of unfamiliarity with this tradition (or, for that matter, any such substantial historical perspective); it is my intent with the present study to at least shed some light on a specific cultural tradition which, I believe, has much to say to our current predicaments.

WHAT FOLLOWS...

The chapters and pages to follow comprise a historical treatment of Alan Kay's vision and research, of the Dynabook vision and its various (and partial) implementations in changing contexts over three or more decades. I intend here to draw attention to the features of the original vision which have changed through diverse times and contexts, those features which have remained constant, and those which have grown even more critical. Despite Alan Kay's centrality to this story, and his predominant role in its articulation, this is not a biographical treatment; rather, it is an attempt to trace the threads of a technocultural system over time and place. It is thus a work of cultural history.
In the chapter that follows this one, I frame my positionality and approach to the interpretation and historiography of this subject. I begin with my own story, and how by degrees I have come to appreciate the importance and centrality of Alan Kay's contributions, and I elaborate the many forces which have led to my adoption of a particular attitude to and stance toward digital technology and its place in education. This treatment of my 'methodology' is more a disclosure of my personal biases and theoretical inclinations than an analysis of social scientific method, for reasons I will elaborate in due course.

In the third chapter, I provide a broad-strokes theoretical approach to the study of technology which serves to ground the kind of analytical moves which come later in this account. Here, I address the main motifs of technological mediation, the sociology of translation (after Callon and Latour), a semiotics of digital 'machines,' and finally an introduction to the notion of simulation, which I take as the paradigmatic modality of digital media. These treatments set the stage for the history that follows.

My account of the history of Kay's project and the Dynabook vision begins in Chapter 4, in which I conduct a high-level review of the conceptual content of Alan Kay's numerous writings and talks. This review breaks down into six primary themes which I believe reasonably represent the core of Kay's vision of personal and educational computing.

In Chapter 5, I cover in narrative form the key moments of the Dynabook's development through the 1970s and 1980s. This begins with an exploration of the ways in which the Smalltalk language and environment was translated from an educational platform to a profound and influential innovation in professional computer programming and software engineering. Second, I discuss the emergence of a personal computer industry and market in the late 1970s and 1980s and the intersection of this trend with the foundational research work done at Xerox PARC. Third, I trace Alan Kay's research beyond Xerox PARC to its home at Apple Computer in the 1980s, where very different economic, technical, and cultural forces were at play.

Chapter 6 considers what personal computing—and, by extension, educational computing—came to mean in the 1990s, with the popular advent of the Internet and World-Wide Web. In many ways, this period represents the mainstreaming of a particular version of personal computing and a much more substantial cultural tradition against which the Dynabook vision must now be considered. The history of computing in the 1990s is one of global-scale market developments (i.e., the computing industry as a multi-billion dollar phenomenon), as well as the emergence of unprecedented forms of cultural expression and digitally mediated social organization (especially the rise of the Free and Open Source Software movement, arising out of marginal computing cultures from the 1970s); these two large-scale trends are in considerable tension with one another.

In Chapter 7, I trace the actual re-emergence of a substantial chunk of Alan Kay's work and indeed the Dynabook vision against the backdrop of late-1990s computing culture and the trends introduced in Chapter 6.
The Squeak environment, emerging from Kay's team at Apple Computer in 1996, lays technical and mythological claim to the Dynabook tradition of the 1970s. Here, I examine the development of Squeak, its development and user communities, and I attempt to evaluate its contemporary trajectories. Because Squeak provides an artifact so unambiguously connected to the idealistic work emerging from Xerox PARC in the 1970s, it is possible to interrogate the relative health and coherence of the cultural traditions which it simultaneously draws upon and, arguably, creates anew.

In the final chapter, I attempt to draw things together, bringing the focus back to the macro level and examining the Dynabook ideal in the large: is it better considered as a technocultural artifact or as a touchstone idea connecting otherwise disparate embodiments? This question leads to a broader methodological question, regarding how 'ideas' are to be treated alongside more concrete material objects like artifacts and texts. Finally, I return to the higher-level political and social questions of the ultimate relevance of this story to education and to the popular culture of technology.

Chapter 2: Positions and Approaches

INTRODUCING MYSELVES

In a number of respects I am a child of Alan Kay's vision. I've grown up in a world in which Kay's vision has always to some extent existed, though it was not until recently that I had any sense of this. My own lifetime is almost synchronous with Kay's project; I was born at just about the time that Kay was working on his initial design for a "personal" computer, the FLEX machine, while at the University of Utah. When Kay went to Xerox in 1970 and began to work on a model of personal computing for children, I was just coming to the age at which my education, and my relationship to media, was beginning. Speaking strictly temporally, my generation was the one that Kay was looking at as the target for his notion of personal computing, though it has taken me thirty years to recognize it.

That I have grown up in a world in part defined by Kay's work, and that I have been at least somewhat aware of this fact, is key to the present study. The significance of Kay's project, in its educational and political aspects, is something apparent to me perhaps because of the particularities of my own history. In attempting to present my treatment and interpretation of this project and its importance to the world, I am in a position of needing to examine and establish just what it is about my own perspective that makes these issues meaningful for me. The perspective(s) I present here is not that of a schooled computer scientist or technologist; neither is it that of a 'humanist' approaching the history of computing from without. Rather, I claim partial roots on both sides of this seeming divide, and as a result the story I will tell is not likely to be generic in either mode.1

1. The evocation of distinct genres of technology historiography and critique is deliberate. Among writers who have attempted to interpret digital technologies to a wide audience, the 'distanced humanist' stance is well exemplified in Edwards (1996); Cuban (2001); Bowers (2000); Menzies (1996); Rose (2003), while the schooled technologist perspective is found in Papert (1980a); Winograd & Flores (1986); Harvey (1991); Stallman (1998); diSessa (2000).
Roots

As a child, I had some early exposure to computers and computing. My father had long been an electronics hobbyist, one of a generation of early radar technicians in the Second World War,2 and a "ham" radio operator since the late '40s or early '50s. I grew up watching him wield a soldering iron, delighting in home-brewing his radio gear. In the 1970s, when microprocessors and integrated circuits ("chips") became widely available, Dad began experimenting with building computers. I remember him working for years on a teletype project—it must have gone through several versions—to be able to type text on a little TV screen; certainly not impressive to our 21st-century eyes, but he was building it almost from scratch, out of parts ordered from the back pages of hobbyist magazines, and he reveled in the pure challenge of figuring out how to make the thing work. Had he really wanted a teletype for his radio gear, he could have bought or wired up a kit in short order, but instead, he worked for some number of years on designing and creating his own. I remember him bringing assembly code written out in pencil on envelopes and notepads to the table with his morning coffee, and also his experimentation with several different modes of creating circuitry: wires on pegboards, and later "etching" his own printed circuit boards.

2. Dad joined up with the Canadian services in 1941 and was shipped to the UK, where he worked with the British RAF radar corps until the end of the war.

Despite any or all of this, I was not an electronics "whiz" as a child; I was not at all interested in radios or computers, and while I casually shared in my Dad's intellectual journeys on occasion, I myself was barely competent with a soldering iron, and never learned the workings of the computers he created or how to program them. When I was 12, Dad encouraged me to study up on basic electronics and the Morse code and to take the tests to get my own ham radio license. I accepted the challenge in good spirit, but when the time came I failed the tests—it simply was not compelling to me. I wasn't particularly disappointed, and I remember Dad praising me for being a good sport about it; he never pushed me in that direction again. Comfortable with this arrangement, I continued to watch over his shoulder and be his sounding board as he talked out his design ideas and railed against the numerous conceptual obstacles he encountered. I recall him explaining the basics of assembly code programming—learning to work with hexadecimal numbers, add this byte to the register, jump to this location, and so on—but it retained for me the character of an introductory lecture: heard and even appreciated at the time, but quickly filed away and largely forgotten. Still, the "spirit of the quest" made an impression on me, and the exposure to the conceptual underpinnings—if not the details—of computing has surely stayed with me.

In junior high school in the early 1980s, when the new Apple II computers came out, my friends and I played at programming simple things in BASIC, and I talked Dad into getting a "real" computer at home (a tiny Sinclair ZX81).
I got reasonably good at programming in BASIC, and wrote endless variations on an obstacle course game, littering my programs with clever details, elaborate introductory sequences, and the like—this kind of adornment was what was missing for me from Dad's earlier projects, I suppose. I even became something of a "whiz" among my friends at school, owing to the extra time I had to explore at home. When, in grade 11, I was finally able to take a "computer science" course, I was well ahead of my classmates. Or at least, most of my classmates, for there were a couple of boys, whom I didn't know (I honestly wondered where they'd come from), who were clearly years ahead of me in understanding and proficiency. While I could write programs in BASIC, these kids were far beyond that, programming the assembly code my Dad had worked in, devouring acres of information from books and magazines. And I was intrigued, and got to know these kids a little, and learned from them. But shortly something interesting happened: whether it had to do with high school and my nascent awareness of social status, or whether it was a subconscious reaction to not being the head of the class anymore, I decided quite clearly and purposefully that I didn't want to be part of that group; the obsession that those couple of boys displayed did not strike me as healthy, in some sense. Around the end of my Grade 11 year, I quite sharply turned away from computers and everything to do with them. The next winter I started playing the bass guitar and I effectively forgot that computers existed. Dad continued with his projects, and my relationship with him had more to do with fishing trips and playing music.

I had nothing to do with computers until my 4th year at university, studying cultural anthropology. I was writing an undergraduate thesis, and I had by now seen enough of my friends writing with a word processor to know that this was clearly the way to go about it. I talked my parents into subsidizing the purchase of an Atari ST machine—billed as a poor-man's Macintosh, as it had a mouse and a graphical interface, and cost about half what a Mac did. The Atari did its job with my undergrad thesis (I am still loathe to throw out the floppy disks containing it), but it did other things as well: the one that made the biggest impression on me was the inclusion—in the bundle of software and toys that came with the Atari—of a version of Logo, the language that Seymour Papert had designed for kids in the late 1960s. I didn't know what Logo was, particularly, but there was a one-page printed reference guide to the language primitives, so I was able to poke away at it and draw pictures on the screen with 'turtle' commands. I remember very clearly my series of discoveries with Logo; I quickly learned that you could draw stars with it, if you had the turtle move forward a standard distance, then turn some divisor of 720, and repeat this the right number of times. I quickly learned that you could generalize it: draw a line, then turn 720/n degrees, and do it n times. Gee, I discovered, 360/5 would draw a pentagon, while 720/5 would draw a star—how about that! I have to admit I was amazed (and I still am); here I had learned something 'real' about geometry that 12 years of school hadn't really made me understand. Obviously, after 12 years of math class I could have told you that the angles of a polygon have some relationship to 360, but the fact didn't really mean anything to me until I started playing with Logo.
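In today's terms, the experiment looks something like the following (a sketch using Python's turtle module, which borrows Logo's commands; the Atari Logo original would have been expressed with its REPEAT primitive rather than a Python loop):

    import turtle

    def figure(turns, sides, length=120):
        """Draw a closed figure: after each side, turn (360 * turns) / sides degrees.
        With turns = 1 the turtle turns 360 in total and draws a polygon; with
        turns = 2 it turns 720 in total and (for an odd number of sides) a star."""
        for _ in range(sides):
            turtle.forward(length)
            turtle.right(360 * turns / sides)

    figure(1, 5)   # 360/5 = 72 degrees per corner: a pentagon
    figure(2, 5)   # 720/5 = 144 degrees per corner: a five-pointed star
    turtle.done()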
Much, much later I discovered that this was exactly the sort of experience that Papert was shooting for—only he had hoped that elementary-school kids would be making this discovery, not undergraduates in their final year. Well, better late than never. I have to give credit to Papert and his team—and this foreshadows an important theme of Alan Kay's which I will return to at length—for they had managed to embed the potential for a very particular kind of experience in their software (twenty-five years earlier, for heaven's sake) so that I could pick it up, sight unseen, almost completely by accident, and immediately have that very experience.

I finished the thesis, graduated, and went off to be a young pseudo-intellectual, spending a lot of idle time talking with my friends about ideas. One of the topics we talked about at length was computers and the idea that people would soon inhabit—perhaps already were inhabiting—a computer-mediated environment. We all read William Gibson's prophetic novels about cyberspace and were enthralled by this notion, almost entirely fictional at the time. In 1992, after a few years of not particularly making a living as a musician, I started looking around for a graduate program that would allow me to explore some of the ideas we had been talking about: cyberspace, hypertext, and so on. I didn't find a graduate program per se, but I stumbled upon a one-year diploma program in "Applied Information Technology" at a local community college. I quit my band, bought a Macintosh, and enrolled. What I had little sense of just then was how many of my peers were doing the same thing—quitting their indie rock bands and getting into new media. A few years later I recognized this as a major shift for my generation.

My Encounter(s) with Objects

My return to school in 1992 was the beginning of a series of encounters with Alan Kay's intellectual legacy, encounters which have fundamentally shaped my way of looking at technology and the world I live in. My decision to enroll in the program at Capilano College was mostly serendipitous; I had no defined goals, but I saw a direction to go in. My classmates and I learned about—and produced—educational multimedia game/environments. The dominant software paradigm for us was Apple Computer's HyperCard, which, despite its superficial limitations (unmodified, it was only capable of black-and-white presentation), seemed almost infinitely flexible, at least to my novice understanding. HyperCard was indeed an extraordinarily powerful media production tool, far more so than the bulk of what we have seen on the Web in the past decade.
t h e n - t i n y W o r l d - W i d e W e b . T h e C a p i l a n o C o l l e g e p r o g r a m r e q u i r e d a l l s t u d e n t s t o p a r t i c i p a t e i n a n e a r l y c o m p u t e r c o n f e r e n c i n g s y s t e m , a n d t h i s h a d a m i n i m a l I n t e r n e t c o n n e c t i o n . W e w e r e p o s i - t i o n e d p e r f e c t l y t o b e a b l e t o w a t c h t h e I n t e r n e t ' s t r a n s f o r m a t i o n f r o m s o m e t h i n g ' b e h i n d t h e c u r t a i n ' t o a m a s s i v e s o c i a l f o r c e o v e r t h e s p a c e o f a f e w y e a r s . I n 1 9 9 3 , w h e n I g r a d u a t e d f r o m t h e I n f o t e c h p r o g r a m , I p r i n t e d u p a s e t o f b u s i n e s s c a r d s t h a t s a i d , " J o h n M a x w e l l - H y p e r m e d i a A r c h i t e c t . " P a y i n g w o r k d i d n ' t r e a l l y a r r i v e f o r s o m e t i m e , a n d I s p e n t a v e r y l e a n f e w y e a r s i n t h e m i d 1 9 9 0 s , b u o y e d u p b y m y r a w e n t h u s i - a s m , w i t h l o t s o f t i m e o n m y h a n d s t o i m m e r s e m y s e l f i n t h e d e v e l o p i n g I n t e r n e t c u l t u r e , l e a r n n e w t h i n g s , a n d t a l k a t l e n g t h w i t h m y s m a l l c i r c l e o f f r i e n d s . W e w e r e h o p e l e s s l y i d e a l i s t i c , a n d f o r m e d a c o - o p e r a t i v e t o s h a r e w o r k a n d f o r w a r d t h e i d e a l s o f a n o n - c o m m e r c i a l , a r t s - o r i e n t e d d i g i t a l w o r l d , r e j o i c i n g i n a n e a r l y v e r s i o n o f o n l i n e c u l t u r e i n w h i c h b u s i n e s s m o t i v e s w e r e s c o r n e d i n f a v o u r o f a k i n d o f t e c h n o - r o m a n t i c i s m . I n t h o s e y e a r s , w h i l e t h e I n t e r n e t a n d t h e W o r l d - W i d e W e b w e r e s t i l l l a r g e l y f l u i d , u n d e f i n e d s p a c e s ( r a t h e r l i k e m y c a r e e r ) , I s p e n t a g o o d d e a l o f t i m e m u c k i n g a b o u t w i t h s o m e w h a t m a r g i n a l t e c h n o c u l t u r a l o d d i t i e s c a l l e d MUDs—multi-user dungeons. A M U D i s a r e a l - t i m e , n e t w o r k e d , m u l t i p l e - p a r t i c i p a n t , t e x t - b a s e d v i r t u a l r e a l i t y ( C u r t i s 1 9 9 2 ) . T h e b e g i n n i n g s o f t h e M U D p h e n o m e n o n w e r e i n t h e e a r l y D u n g e o n s a n d D r a g o n s - i n s p i r e d " a d v e n t u r e " c o m p u t e r g a m e s . A M U D i s , a t i t s s i m p l e s t , a t e x t - a d v e n t u r e g a m e w h i c h c a n a c c o m o d a t e m o r e t h a n o n e p l a y e r ; t w o o r m o r e p e o p l e c a n t h u s g a n g u p o n t h e d r a g o n , o r w h a t h a v e y o u . M y i n t e r e s t i n M U D s w a s n ' t t h e g a m e a s p e c t — I h a v e n e v e r b e e n a c o m p u t e r - g a m e p l a y e r — r a t h e r , I h a d b e e n i n t r o d u c e d t o a p a r t i c u l a r I n t e r n e t M U D c a l l e d LambdaMOO b y a f r i e n d w h o w a s v e r y i m m e r s e d i n I n t e r n e t c u l t u r e . L a m b d a M O O w a s a n i m m e n s e , f r e e - f l o w i n g s o c i a l e n v i r o n m e n t , l i k e a s p r a w l i n g t e x t - b a s e d h o u s e p a r t y 3 . T h e c r i t i c a l i n n o v a - Chapter 2: Positions and Approaches 44 tion of LambdaMOO was that the virtual environment was entirely constructed from within, by its players, rather than by a specially empowered designer/programmer. 
LambdaMOO when I first encountered it was only a year or two old, but its 'topography' was already immense and complex, simply because some thousands of users from all over the Internet had been constructing and programming it. LambdaMOO was, I learned, the pet project of a computer scientist at Xerox PARC (which I had never heard of); its language and internal architecture were "object-oriented"—a term I had heard but which had little meaning for me—hence "MOO," for MUD, Object-Oriented. In practical terms, what this meant was that you could create new things and define their behaviour in the virtual world by basing them on already existing virtual objects and then specializing them by writing simple scripts. This meant that individual players could very easily create complex, interactive objects within this virtual world. One could easily recognize a kind of aesthetic of creation in MOO worlds. The artfulness of it was a particular kind of illusionism: for instance, considering the best way to create a green grassy space in the virtual environment brought into sharp relief heady issues of Platonism, simulacra, and phenomenology. Though I have never done any theatre, its connection and relevance to MOOing was obvious.4 I thought MOO was fabulous! At the time, I honestly felt that this was—in spite of its completely text-based interface—a vastly more important and promising technology than the World-Wide Web: here were people, presenting themselves and their environment virtually, in time. How much more interesting than static web pages!

Fed up with my impoverished experience as a reluctant Internet entrepreneur, I went back to school again to take a new graduate program in publishing. Part of the program was a 4-month applied internship, and I set up a project working with a distance education program at BC's Open Learning Agency (OLA). The project was to create a MOO-based environment for high school distance learners. Here was an opportunity to do what I really wanted: to immerse myself in a project, technically and culturally, and to do some high-level reflection. The result was my Masters project (Maxwell 1996), one of the most rewarding things I have ever done.

The Open Learning Agency was a good fit for me, and I stayed on after my internship. Ironically, I was soon involved in a project which would have been a much better fit for a publishing internship than my MOO project had been: the OLA's schools program was interested in offering their high-school distance education courses online as well as in print. The challenge of how to do both without having to do it twice was paramount, but a friend of mine there, Prescott Klassen, had an answer that set the direction for the next three years of my life.

3. LambdaMOO is still running—at lambda.moo.mud.org:8888—more than fifteen years after it opened, which must make it one of the longest-running persistent object stores in existence. The original virtual space has been added on to tens of thousands of times by tens of thousands of 'players,' but the living room into which you emerge from the darkened 'coat closet' remains the same, as does the inanity of the conversation one can find there, any time of the day or night, any time in the last decade and a half.

4. I later learned that Juli Burk (1998) explored this theme in some detail.
The answer to the problem of publishing in two formats from a single editorial process was a document management technology dating from the 1970s and 1980s called Standard Generalized Markup Language (SGML).5 Klassen and I embarked on an ambitious project to design and implement an SGML system and workflow to completely overhaul the OLA's courseware production.

5. SGML is an ISO standard for document management (see Goldfarb & Rubinsky 1991). A somewhat bastardized application of SGML is found in the Web's HTML technology.

The SGML project was, in retrospect, a descent into the abyss, but many good things came out of it. The project was technically a success, but organizationally doomed, and I gained a wealth of insight into the cultural dynamics of technology integration. I also learned a lot in those three years—easily more than in any other period of my life—about computing, document management, and publishing technology. Not surprisingly, Klassen and I, having been given more or less free rein to do what we wanted, were able to move much faster than the organization (a working group of 25 or so) we were attempting to change (Klassen, Maxwell, & Norman 1999). At the end of three years, our proof-of-concept complete, we both left the OLA burnt out (along with a handful of other people close to the project, including our director). Prescott went to work on similar publishing projects at Microsoft Press, and I went back to school.

The lasting pieces of the SGML project at OLA that I took with me are these: my gradual appreciation of the history of computing; that in many, many cases, the solutions to today's challenges are to be found in technology developed in the 1960s or 1970s; and that these solutions are very often the work of individuals or small teams of thinkers—and that, as a result, they are wholly graspable by an individual, given a commitment to explore not just the details, but the historical contexts of their development. The larger theme lurking in this issue is that there exists a tradition of "powerful ideas" in computing, ideas which are unfortunately often ignored or bypassed in favour of the superficial, partially understood implementations which constitute another, larger tradition of discourse and practice. To see through the partial, ahistorical implementations to the clearer thinking and powerful ideas lurking behind them gives one, first, a more solid place from which to withstand the constant churn and instability of the market-driven world of IT, and second, an appreciation that many of the big ideas in the history of computing are about people and cultures first, and the details of technical application second. I began to believe that most of this "isn't rocket science" after all. The possibility of demystification was enormously empowering.

A particular case of this comes to mind: at a document-technologies conference in 1998, I witnessed two of the leading minds in the field of SGML and XML—Tim Bray and Eliot Kimber—engage in a debate about the role of abstraction in information representation. Kimber, a proponent of an abstract tree-based information architecture called "groves," argued for independence from, and priority over, particular representational strategies (like SGML or XML—see Kimber 1998).
His admonition: "transcend syntax." Bray, on the other hand, countered by appealing to the Unix tradition and way of doing things, claiming that by agreement on a simple representational format (e.g., simple structured text files), a great arsenal of software tools could be combined in ways not foreseeable by the original architect. This debate, I realized, was one of cultural difference rather than of technical merit. That realization led me to understand that at the core of technological systems lay people and practices, and that it was an understanding of these that was important. This does not—emphatically does not—mean that the technical components of a system are irrelevant, or interchangeable, or governed by social factors; following Latour, it means that the technical facets can only be understood well by seeing their embeddedness and participation in historical/cultural traditions of thought and practice. I was, however, sufficiently well steeped to appreciate both Kimber's and Bray's arguments in this light, rather than getting lost in the 'technical' details.

The aforementioned "groves" concept on its own made a huge impression on me as well, though I am convinced now that I only scratched the surface of it. In simple practical terms, it means the representation of a document or any document-like information as a tree structure, which can then be topologically treated in software. That such a structure can be simultaneously and reciprocally abstracted and concretized was another encounter with object orientation: that by abstraction, we gain an alternate realization. This is difficult to describe in so many words, but my seeing this concept6 meant that I would never see a 'document' the same way again. For instance, it immediately led to a further realization that my object-oriented grassy field in LambdaMOO and the semantic structure of a term paper were made of the same kinds of things, and were, in some fascinating ways, interoperable. Note that I am not talking here about the level of bits—of the ones and zeros that make up computation at the lowest level—rather, I mean this in terms of high-level structure, at the highest semantic levels rather than the lowest (a rough sketch in code appears below).

This was my intellectual journey, at least. At the same time I was discouraged and fed up with 'distance learning' and its institutional evolution7—which was core to our work at OLA and which was just beginning to become a major area of interest in the late 1990s. In 1999 I went back to school (for good this time) and began a PhD program in education. But I wanted to avoid the "educational technology" I had been working on at OLA. Instead, I preferred to spend time reading and thinking about curriculum theory, continental philosophy, and decentered ethnography (while studying with Ricki Goldman).

6. One must bear in mind while reading my pained attempts to render my own experience of this into words that I am not a mathematician, that I have always had a very strained and compromised relationship with mathematics. I dearly wish that my mathematical education had been better, for I have travelled this far in my life almost despite it.

7. David Noble's 1999 "Digital Diploma Mills" is singularly instructive on this point.
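To make the tree-structure intuition concrete, here is the promised sketch—again in hypothetical Python, standing in for neither Kimber's grove formalism nor our actual OLA workflow: a document captured once as a tree of generic nodes, from which more than one realization can be derived. The same single-source logic drove the SGML project described above; and the same kind of structure could, with different node kinds, describe a room in a MOO.

class Node:
    # A generic node in a document tree.
    def __init__(self, kind, text="", children=()):
        self.kind = kind            # e.g. 'title' or 'para'
        self.text = text
        self.children = list(children)

paper = Node("document", children=[
    Node("title", "A Term Paper"),
    Node("para", "A document is a tree of parts."),
    Node("para", "So, in its way, is a grassy field in a MOO."),
])

def to_html(node):
    # One realization of the tree: markup for the Web.
    if node.kind == "title":
        return f"<h1>{node.text}</h1>"
    if node.kind == "para":
        return f"<p>{node.text}</p>"
    return "\n".join(to_html(child) for child in node.children)

def to_print(node):
    # Another realization of the same tree: plain text for a print workflow.
    if node.kind == "title":
        return node.text.upper()
    if node.kind == "para":
        return node.text
    return "\n\n".join(to_print(child) for child in node.children)

print(to_html(paper))
print(to_print(paper))

Once the structure is captured independently of any one syntax, alternate realizations come almost for free—which is roughly what "by abstraction, we gain an alternate realization" meant in practice.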
My techie side, all the while, was paying the bills by means of a few sizable technical contracts for the OLA, creating a couple of different iterations of a learning environment which would provide personalized access to "learning objects" (since our SGML project had left the OLA with a large store of curriculum content easily broken up into such modular components). I had, at a deeper level, ceased caring seriously about "learning objects,"8 but since OLA (and every other educational institution in the networked world) was interested, these were the raw material for our efforts. Of more practical and intellectual importance to me was adopting an object-oriented platform for our development work.9 More importantly, however, this work brought me incrementally closer to figuring out what object-oriented programming and systems were all about, and where this tradition had come from. By degrees, I began to realize that there was a common intellectual thread underlying almost every significant idea I had become acquainted with in the past decade: from HyperCard's elegant authoring environment to MOO's build-from-within model, document "groves," the contextualization and re-contextualization of learning objects, and the abstractions we were able to make designing online information spaces.

The catalyst to my conceptualizing all of this as a common thread was my discovery of a paper written by Alan Kay for the History of Programming Languages II conference in 1993 (Kay 1996a) on "The Early History of Smalltalk." This paper, positioned amid a collection of highly technical reports on computer science history, stood out at once for me. In the first place, Kay was talking about an educational project for children, rather than the design of systems for professional programmers. Second, the sheer conceptual scope of the article—ranging from historical treatments of printing and print literacy to the evolution of computer graphics and interface design; from the ARPA community in the 1960s to the design of modern notebook computers—impressed me immensely. Kay's 1996 history outlines the practical emergence of the "object paradigm" amid his research project to develop personal computing for generations of children to come. It is, in a sense, a post-hoc manifesto for an educational and technological vision. I was coming, by degrees, to appreciate this vision and its many connecting points to pieces of my own history.

8. For the most part, "learning objects" are related to object-oriented design in name only. The very idea of "learning objects" as portable, recombinable curriculum components is, I think, highly suspect. Friesen (2001) is as good a 'last word' on this topic as I have seen.

9. That platform was Zope, an open-source object-oriented web publishing application, and I was much pleased by how easily it allowed us to think abstractly about information relationships. Zope provides a simple object model for web publishing; in effect, it transforms web publishing from a process of retrieving files or producing database queries to one of publishing objects—a shift to a considerably more sophisticated level. See http://www.zope.org
On the strength of this encounter, I then began to recognize the various themes that had made an impression on me—HyperCard, MOO, groves, learning objects, Zope—as variations on and deviations from this core tradition; that here was the original source of this tradition of thinking about computing, media, and information design. What impressed me most was that this tradition began with an educational vision: the original beneficiaries of this thinking were children, and it was only considerably later that personal computing and object-oriented programming became wrapped up with business needs and office productivity. The more I began to look at the history of computing over the past three decades, the more I realized that the tradition represented here—centering on Alan Kay's multi-decade project—had enormous importance to the study of computing in education, far more importance than is commonly appreciated. I had found my thesis topic.

WHY THIS STUDY? WHY THIS APPROACH?

Reflecting on my history

When I returned to school in 1999 with an agenda to study technology from an educational perspective, it was with a particular set of constraints that I began to set the scope of my work.

• I saw a real need for any work I was to undertake—be it development or analysis—to have some sense of historical embeddedness. I had learned this during my work on the SGML project at the Open Learning Agency, and I felt that for my work at the doctoral level, it was essential that awareness of, if not respect for, what had gone before had to be a foundational piece.

• I had developed a jaded attitude toward ambitious projects and bold, 'new' ideas; having personally been through the belly of a few of these, I had the sense that a good proportion of the energy that drives innovative projects comes from the promise of proving something (or oneself) to naysayers. Every successful endeavour is a rally against cynicism, but where this itself becomes the driver, healthy enthusiasm gives way to hubris.

• I had come to recognize the "microclimate" surrounding most educational technology projects, within which all things are possible and kids do "wonderful" things. Beyond the fragile boundaries of the sphere of energy provided by a few inspired individuals (often teachers), duplication or scaling-up of such projects is impossible. I later ran into this concept better articulated as the "miracle-worker discourse" (de Castell, Bryson, & Jenson 2002).

• I had the sense that the division of labour between experts and end-users was pathologically implicated in most educational technology projects, and that somehow this reification needed to be resisted.

• I was wary of—if not positively hostile to—baldly technocentric thinking; there was no way I was about to uncritically engage in any actual development projects; I returned to school determined not to be anyone's "webmaster," nor to work on any "e-learning" projects, despite numerous opportunities to do so.

Part of my intent in returning to school was to claw my way back from my position on the 'techie' side of the division of labour. For, lest we ascribe too much power to the (vaguely defined) 'technocrat,' we should remember that technical work is still work—that it is labour, carried out according to the same kinds of logic that govern labour in more traditional contexts.
Technical work carried out in a naive technocentric mode is in the worst cases straightforwardly exploitative, and in the best cases still narcissistic.

"Computer criticism"

In a 1987 article in Educational Researcher, Seymour Papert proposed a genre of writing he called "computer criticism." I felt that my work should take this to heart. But what would "criticism" mean, exactly, in the context of educational technology? Contrary to the wealth of disengaged condemnation that bills itself as critique, criticism in the larger sense properly demands a certain kind of engagement to be meaningful. As Papert pointed out:

The name does not imply that such writing would condemn computers any more than literary criticism condemns literature or social criticism condemns society. The purpose of computer criticism is not to condemn but to understand, to explicate, to place in perspective. Of course, understanding does not exclude harsh (perhaps even captious) judgement. The result of understanding may well be to debunk. But critical judgment may also open our eyes to previously unnoticed virtue. (Papert 1987, p. 22)

I have seen very little writing on educational technology that lives up to what Papert proposed here.10 Criticism in the sense described here requires not just a familiarity but a fluidity and fluency with the issues, the discourses, and the practices of computing. It requires a sense and appreciation of where these discourses and practices come from, historically and topologically—that is, beyond the disciplinary boundaries in which they may immediately be seen. In a very important sense, Papert's computer criticism requires the breaking down of, or at least resistance to, the division of labour which sets technology and human aims apart. Rather, in order to make sense, we have to be able to see how these positions relate to and give rise to one another.

Multiple perspectives, blurred genres

This, then, has been my starting place: my highest-order goal in this process is to attack or subvert the taken-for-granted division of labour that inscribes the boundaries of technology and which makes Papert's "computer criticism" next to impossible. Whatever potential for empowerment and democracy may come with information technology is repeatedly undermined by the reification of these positions: experts and engineers vs. end-users and consumers. That alternatives are possible—and indeed extant—is the case which I seek to make here. To this task I bring my own background: as a student of education and culture, and also as a designer, creator, and teacher on the technical side. In what follows, I am deliberately trying to blur the lines between computing, social science, cultural criticism, and education. I am, to use Latour's (1993) vocabulary, interested in actively proliferating the hybrids. Of course, this isn't a matter of so much conscious choice on my part. My personal exploration and growth in understanding of all sides of these issues is itself varied and begins from multiple perspectives. Because of this I cannot choose sides or loyalties in the division of labour; I stand with feet on both sides.

10. Papert's own students are, perhaps predictably, exemplary: e.g., Bruckman 1997; Goldman-Segall 1998; diSessa 2000.
And because of this, because of my positionalities, I am faced with real methodological constraints: I cannot honestly bracket out the issue of expertise, hide in the inscription of an "outsider" analyst, nor pretend to naivete, as some have attempted in the sociology of science. I have come to a place where I can inhabit neither the "miracle worker" role nor that of the "humanist" critic—for these are caricatures which are products of the very division I seek to break down. And so I have undertaken to do this research, and to write this work, from multiple active perspectives: from the standpoint of educational research and cultural criticism and also, simultaneously, from a standpoint of some technical sophistication; to walk the in-between space between the romantic and the ironic; to be more, and to write more, than caricatures.

My challenge, then, is twofold: in the first place, my task here is not one of building—nor singing the praises of (that is, instructing)—another tool or toolset. I must aspire to a 'higher' analytic frame, as the literary critic does compared with that of the author. In the second place, however, intellectual honesty demands that I not restrict my stance to that of observer; in order to meaningfully reflect on what I am studying, I must bring to bear my full faculties, including my experience and positionality—my own history—as a designer and builder of systems. At a theoretical level, this double role is of course the case in literary criticism; a critic is of course also an author—of criticism if not of 'literature' per se; how could it be otherwise? In a practical sense, though, this doubling is problematic in the case of technology—of "computer criticism"—because most of us do find ourselves on one side of the divide or the other: designer or user? Do these two exclusive roles represent our full realm of possibility?

In order to do effective criticism, in Papert's sense, I have to transcend that divide—to strike a balance between criticism and advocacy. I have to find a line (or invent one, or at least spend some time mapping one out) between philosophizing about educational IT "from 30,000 ft"—that is, from a disengaged, 'outsider' perspective—and uncritically evangelizing a particular technology or technological vision from the inside.

METHODOLOGY AND THE PROBLEM OF DISTANCE

The study presented here is a historical one. My intent is to present the story of the Dynabook project and to trace its historical trajectories. My challenge, thus, is to write history effectively, and to write effective history; as this is a cultural-historical account, my methodological concerns must also bear upon traditions of making sense of culture.

As a former student of anthropology, my unsurprising point of departure for the "interpretation of cultures" is the work of Clifford Geertz, who had ethnography in mind when he wrote his groundbreaking essays on the turn to an interpretive—hermeneutic—approach to the study of culture. Geertz' admonition that the analysis of culture is "not an experimental science in search of law but an interpretive one in search of meaning" (1973, p. 5) is as applicable to the business of historiography as it is to ethnography, as a body of work emphasizing cultural history shows.
It is worthwhile tracing the "interpretive turn" backwards from Geertz to French philosopher Paul Ricoeur, whose article "The Model of the Text" (1971/1991a) set the stage for the adoption of interpretive 'method' in the social sciences—particularly in the English-speaking world—in the 1970s. Tracing back further from Ricoeur takes us to the major source of modern philosophical hermeneutics in the work of German philosopher Hans-Georg Gadamer.

Gadamer's argument in his magnum opus, Truth and Method (1975/1999), is that it is only through participatory knowledge and interpretation that we are able to come to any kind of an appreciation or critique of what we study. This is significantly at odds with the more traditional methodological quest for objectivity, and the tension here has been a central issue in the social sciences for the past half-century; it puts in question the very idea of social "science" and is the crux of Gadamer's questioning of "method." The task of researching human life and meaning cannot possibly proceed according to the objectivist ideals of natural science, since the question of meaning can only be approached by engaging with the subject at hand: intersubjectively, participatively, performatively. Gadamer wrote, "The concept of the life-world is the antithesis of all objectivism. It is an essentially historical concept, which does not refer to a universe of being, to an 'existent world'" (1975/1999, p. 247). Jürgen Habermas is perhaps most succinct in his explication of this concept:

Meanings—whether embodied in actions, institutions, products of labor, words, networks of cooperation, or documents—can be made accessible only from the inside. Symbolically prestructured reality forms a universe that is hermetically sealed to the view of observers incapable of communicating; that is, it would have to remain incomprehensible to them. The lifeworld is open only to subjects who make use of their competence to speak and act. (Habermas 1984, p. 112)

That the lifeworld is essentially historical makes a particular demand on the inquirer. The challenge is not to make sense of social or cultural phenomena qua phenomena—that is, awash in a vast and complex present tense, in some vain effort to discern the structures there. Rather, it is to make sense—make meaning—by engaging with the historicity of the things and ideas and patterns we seek to understand. It makes the writing of history—no less than the writing of ethnography—necessarily and inescapably reflexive. Gadamer's powerful vision of this process is centered around the concept of tradition (1975/1999, p. 284ff) which, following Heidegger's philosophy, makes the temporal unfolding of experience paramount. Richard Bernstein summarizes:

As Gadamer sees it, we belong to a tradition before it belongs to us: tradition, through its sedimentations, has a power which is constantly determining what we are in the process of becoming. We are always already "thrown" into a tradition. We can see how far Gadamer is from any naive form of relativism that fails to appreciate how we are shaped by effective history (Wirkungsgeschichte). It is not just that works of art, text, and traditions have effects and leave traces. Rather, what we are, whether we are explicitly aware of it or not, is always being influenced by tradition, even when we think we are most free of it.
Again, it is important to reiterate that a tradition is not something "naturelike," something "given" that stands over against us. It is always "part of us" and works through its effective-history. (Bernstein 1983, p. 142)

The notion of effective history—and, as Gadamer extends it in Truth and Method, "historically effected consciousness"—implies the kind of engagedness that binds me as a researcher, and in which I eschew methodological distancing. In fact, the opposite seems to be the rule: "Understanding is to be thought of less as a subjective act than as participating in an event of tradition, a process of transmission in which past and present are constantly mediated" (Gadamer 1975/1999, p. 290). Engagement with and participation in historically embedded situations—that is, within traditions—implies making judgments about what is good, what is valid, what is just. This constructive engagement is used in Goldman-Segall (1998) as a methodological tenet in her strategically decentered ethnography. It is the basis of Rudolf Makkreel's (2004) notion of "ethically responsive" history. Philosopher Alasdair MacIntyre (1984) takes this thread from a history of philosophy into a theory of ethics. Jürgen Habermas (1984) takes it to a theory of communication. In all of these variations, there is no appeal to external authority or source of rationality but the tradition itself, in which the researcher is necessarily—partially at least—embedded. Habermas writes:

The interpreter would not have understood what a "reason" is if he did not reconstruct it with its claim to provide grounds; that is, if he did not give it a rational interpretation in Max Weber's sense. The description of reasons demands eo ipso an evaluation, even when the one providing the description feels that he is not at the moment in a position to judge their soundness. One can understand reasons only to the extent that one understands why they are or are not sound, or why in a given case a decision as to whether reasons are good or bad is not (yet) possible. An interpreter cannot, therefore, interpret expressions connected through criticizable validity claims with a potential of reasons (and thus represent knowledge) without taking a position on them. And he cannot take a position without applying his own standards of judgment, at any rate standards that he has made his own. (Habermas 1984, p. 116)

I see the basic question, or problematic, in historiography to be the distance between the storying of the historical subject and the storying of the historian; the distinction between primary and secondary sources is necessarily blurred. I do not take this to be a problem of validity—nor of endless layers of relativism—rather, in line with the hermeneutical approach, I see this straightforwardly as the space of interpretation. On the question of the relationship between lived experience and narrated story, I do not see a "discontinuity" between these, as Hayden White (1973) has famously claimed; rather, I believe, with Gadamer and his followers—notably Alasdair MacIntyre (1984), David Carr (1998), and Andrew Norman (1998)—that lives are lived and made sense of, in the first person, narratively.
The conception of the unfolding of experience, and the hermeneutic circle of its ongoing interpretation in time, is grounded in the Heideggerian phenomenological tradition, and I take as a basic methodological principle the treatment of experience, and action, as text (Ricoeur 1971/1991a). The historiographical challenge, then, is not one of gaining privileged access to or an unadulterated representation of what has gone before, but rather one of entering into a meaningful engagement with the myriad layers of effective history which themselves produce the possibility of such an engagement. This is why the details of my personal history with respect to computing and computing cultures are important to this account, as is my process of assembly of and immersion in the sources I bring to bear here. It is worth noting in passing that nothing in this framing would be out of place in a contemporary exposition on ethnographic method, but the present study is not ethnography. Rather, I present here an analysis of a body of historical documents—to discern and document the traditions, genres, and apparent structures of discourse and practice. I will attempt to outline the details of this approach presently.

Introducing genre theory

Drawing on the historiography of Mark Salber Phillips, and contrary to Hayden White's somewhat monolithic and ahistorical notion of emplotment, it makes sense to speak of historically contingent genres of lived narrative. Phillips argues that history is best seen as a family of historically contingent and often overlapping genres. This approach—building on White's opening of historiography to literary theory and concepts—points to a rich interdependence of text, context, readers, and writers (Phillips 2000, p. 10). "Genre, of course, is not a self-contained system. It is a way of ordering and mediating experience, literary and extraliterary" (p. 11).

Genre theory—as it appears in historiography (Phillips 2000), in literary theory (Jauss 1982; Cohen 1986; Miller 1994a; 1994b), and in linguistics and cognitive science (Swales 1990; Scollon 1992; Bazerman 1998; Lemke 2001)—opens up the possibility of considering the material in question directly in the context of the communities (scholars, audiences, markets) of people for whom the material is relevant. Swales is perhaps the most direct: "genres are the properties of discourse communities" (Swales 1990, p. 9). This works bi-directionally in a sense; genres are defined by the communicative acts of discourse communities, who are, in turn, constituted by communicative acts understandable within certain genres. For this to work, we must ensure that we do not reify genres as sets of analytic characteristics or properties (Cohen 1986, p. 210). Bazerman puts it most eloquently:

By genre I do not just mean the formal characteristics that one must observe to be formally recognized as correctly following the visible rules and expectations. Genre more fundamentally is a kind of activity to be carried out in a recognizable textual space. [...] Thus genre presents an opportunity space for realising certain kinds of activities, meanings, and relations. Genre exists only in the recognition and deployment of typicality by readers and writers—it is the recognizable shape by which participation is enacted and understood. (Bazerman 1998, p. 24)
In putting the emphasis on genres as structures of activity and participation—rather than just of literary forms or bloodless "discourse communities"—Bazerman returns us to the continuity of narrative as lived and narrative as written: the modes of interpretation of one's life and experience "typically" fall into various historically and culturally contingent patterns or categories (cf. a very similar framing in MacIntyre 1984, p. 212). Thus there is no essential difference between the treatment and interpretation of one's lived narrative and that of the subsequent "historical" narrative; both are literary/interpretive acts, and both operate according to conventional, historically conditioned genres and modes. Genres, then, are "perspectivity technologies" (Goldman & Maxwell 2003) that mediate between individual meaning and social practice. What this means for the present study is that genres are among the fundamental units of analysis, and, following Phillips (2000), my intent is to focus on the rise and fall of particular genres in historical context. In this sense, the subject of this study is the emergence, translation, and relative survival of a set of genres of computing and technocultural discourse.

History as politics

The historian's task is "not merely a reproductive but always a productive activity as well" (Gadamer 1975/1999, p. 296). What makes the writing (and reading) of history interesting and useful is the dynamic between these interpretive locations. The writing of history is thus not an attempt to nail down what happened in the past, nor to explain the present by the past, but rather an attempt to "shape and re-shape our collective understanding" (Norman 1998, p. 162) of the past, the present, and, more importantly, the future. The writing of history is therefore a generative, engaged, political process that is itself historically embedded. This very embeddedness, and not any methodological measure, is what keeps us from hopeless relativism. I am not—nor is any writer—in the business of telling a story in the sense of one story among many; I can only write the story according to me, from my particular perspective. What I must strive to do—and here is the role of methodology, discipline, and rigour—is increase where possible the number and variety of perspectives considered, and repeatedly test the story against existing frameworks; that is to say, to maximize the connectedness of my account. Lather's (1991) discussion of triangulation and of construct-, face-, and catalytic-validity guidelines is directly instructive here, and serves the same overall end of connectedness.

It is only possible for me, as an embedded subject, to tell one story—the story I am given, shaped by my historical and hermeneutical horizons—the story which emerges for me in the process of my investigation. I can speak of "a story" among many possible stories, but I can only honestly mean this plurality as a reference to the possibility of or invitation to other stories, which can only be evoked or juxtaposed, and never directly represented—to do so would require that I actually have some means of getting outside the stories, to live outside the lifeworld, outside of narrative. The story, then, is necessarily incomplete; its only hope for greater completeness is to be further embedded.
The Dynabook as/in history

Studying a technological project isn't any harder than doing literary criticism.
- Bruno Latour, Aramis

To frame my process and thus the scope of the story I will tell, a brief overview of sources and my engagement with them is in order. My starting point for this research is a rich reflective document by Alan Kay dating from the early 1990s, "The Early History of Smalltalk," prepared as a conference presentation in 1993 for the ACM's History of Programming Languages II and subsequently published (88 pages worth) by the ACM11 in a large volume compiling formal articles on the sessions with appendices, presentation transcripts, and discussants' remarks (Kay 1996a). Kay's piece serves as a bird's-eye view of the first decade or so of his work, as well as a hub document for subsequent research. "The Early History of Smalltalk" is still referenced and recommended within the development community as probably the single best articulation of Kay's vision and work, especially in the 1970s.

Moving out from this point, I located two main bodies of literature which, though grouped chronologically, I distinguish contextually rather than as primary vs. secondary sources. The first, dating mostly from the 1970s, are conference papers and research reports from Xerox PARC, outlining the Dynabook vision, the details of the early Smalltalk implementations, and a smaller amount of material on educational research. In this literature, there are a small number of documents from the early 1970s, and then nothing until 1976, apparently due to a Xerox crackdown on publicity in the wake of a revealing article by Stewart Brand in Rolling Stone magazine in 1972—an article which portrayed the PARC researchers as long-haired "hot-rodders" (Brand 1972) and about which Xerox executives were somewhat embarrassed (Kay 1996a, p. 536). After 1976 there is a wealth of material documenting research and development; this literature culminates in the early 1980s with the end of the original Xerox PARC teams and the release of the Smalltalk technology to the world beyond Xerox.

A second body of literature I have identified follows on Kay's 1993/1996 publication, and comprises a large number of journalistic articles and interviews with Kay as well as a substantial number of lectures and presentations by Kay (many of which, thanks to the Internet, are available online as digital video, audio, or text transcripts). This second body of literature roughly coincides with the public release of the Squeak technology by Kay's team at Apple Computer and, later, Disney Corporation in the 1990s.

11. The ACM—Association for Computing Machinery—is computing's pre-eminent professional association. It operates a press and an enormous library (digital and otherwise) of published material, proceedings, and literally dozens of periodical publications running the gamut from newsletters to scholarly journals. In the world of scholarly communication, the ACM is an exemplar, and is certainly a terrific resource to the historian.
The Squeak technology was positioned as the re-realization of portions of the original Dynabook vision, and not surprisingly, the literature appearing in the late 1990s and early 2000s is concerned largely with the re-articulation of that vision and its key principles, often in much more detail than the works from the 1970s, but certainly in line with them conceptually.

Given these two groupings—material from the Xerox PARC period (1971-1983) and then from the mid 1990s onward—there is clearly about a decade in which very little published material appeared. During most of this time, Alan Kay was at Apple Computer and working on educational research with a very low public profile; I have had to do some digging to turn up a number of unpublished reports from this period12—a phase I think is critical to an appreciation of the project's re-emergence (with Squeak) in the late 1990s, ostensibly a continuation of the original 1970s vision, but now appearing in a very different world.

A third "body of literature" I encountered is the archived Internet communications of the project teams and worldwide community surrounding the Squeak project since the 1990s; this comprises several mailing lists, a dozen or so websites, and the archive of the various versions of the software released on the Internet since then. A number of instructional and/or reference books on Squeak have been published since the late 1990s as well, and I include these in this latter category.

Having oriented myself to these three major document groups, it then became possible to interpret a large body of surrounding literature through the lenses of the Dynabook vision and its various research projects; this surrounding literature includes the documentation for the Smalltalk language (aimed at software developers rather than children or educators) from the 1980s; a wide variety of educational research and development work spanning three decades or more from MIT's Media Lab, with which Kay has been loosely associated over the years and which offers something of a parallel research agenda, particularly in the work of Seymour Papert and his successors; and, more widely, a large body of literature on educational computing, computer science, and Internet culture over the past two or three decades.

My trajectory took me through much of this documentary material, and having absorbed and made sense of a good deal of it, at least on a superficial level, I made personal contact with several of the key individuals in this story. In the spring of 2004 I travelled to Glendale, California, to spend two days at the Viewpoints Research Institute with Alan Kay and Kim Rose, during which time serendipity allowed me to meet and talk to both Seymour Papert and Smalltalk developer Dan Ingalls. In the fall of that year, the OOPSLA '04 conference

12. Alan Kay and Kim Rose at the Viewpoints Research Institute are to thank here for opening their substantial archives to me; Ann Marion, project manager with Kay while at Apple Computer, similarly deserves thanks for sharing her own archives. But while my research of this period serves to capture the main currents of research and thinking, I make no claim to have exhaustively covered this period; there remain mountains of primary documents from the 1980s that I did not cover, including hundreds or thousands of hours of video.
happened to be in Vancouver, which brought Kay and a good number of the Squeak community to my own hometown, and which provided an excellent opportunity to talk with many of these people—in particular, I had the opportunity to interview Ted Kaehler, who had been part of Kay's team since the early 1970s, and whose exceedingly detailed and organized memory of the past thirty-five years proved invaluable in answering myriad detail questions in my construction of this narrative. In a similar vein, conversations with Kim Rose and Ann Marion—both of whose relationships with the project date to the mid 1980s—filled in many gaps in my understanding.

I want to point out, though, that I do not consider this to have been an interview-driven research project; my primary thread has been the consideration and interpretation of written texts, and I see these many valuable conversations as supporting that primary documentary work. I spent the better part of two days talking with Alan Kay while I was in Glendale, but there was very little of that wide-ranging and sublimely tangential conversation that would be recognizable as 'interview'—a judgment Kay notably approved of.

In retrospect at least, I feel there is something of the spirit of Jerome Bruner's "spiral curriculum" in my research method—a 'developmental' re-framing of the hermeneutic circle. I have gone around and around the literature, reading, re-reading, and adding to the collection as I have gone along; at each re-consideration and re-interpretation of my sources, and certainly through the process of writing and re-writing this account, I have achieved what I hope are both deeper and more broadly connected interpretations and framings of the various facets of the story. I consider this to be an example of how hermeneutically informed historical inquiry ought to move: an iterative process of the "fusing of horizons," of achieving a particular instantiation of the "unity" (as Gadamer would have it) of self and other. In the depth of time and interest I have engaged with it, it is hopefully rich and at least defensible in terms of its "validity." It is also entirely incomplete, inexhaustive, and open to debate. Whether this study succeeds, I believe, should be judged in terms of the value and appeal of such debates.

HOW DO WE KNOW A GOOD IDEA WHEN WE SEE ONE?

One of Donna Haraway's great themes is that none of us is innocent in the realm of technoscience. It is only by virtue of our involvement and investment with these issues that we are able to make any sense, or make any difference. And so we can only attempt to own, and own up to, our position(s). Haraway writes:

The point is to make a difference in the world, to cast our lot for some ways of life and not others. To do that, one must be in the action, be finite and dirty, not transcendent and clean. Knowledge-making technologies, including crafting subject positions and ways of inhabiting such positions, must be made relentlessly visible and open to critical intervention. (1997, p. 36)

I bring this point to the foreground in order to add one more layer of context to the present study: to forswear the objective of critical distance.
To write an "agnostic" study of a case like the one I am writing here—the historical trajectory of Alan Kay's Dynabook vision—would be to limit the significance of the study to the 'conclusion' that a particular technological project failed, or succeeded, or enjoyed market success, or withered away. It would not, on the contrary, be able to speak to the ethical dimension of such a trajectory and to suggest why we might care. This is to say, given the ambitions of this study as I have framed it for myself, it would not be possible to do the present inquiry while holding to an ideal of objectivity or agnosticism. For the overarching aim of this study—and I take "aims" to be foundational in deciding methodological issues—is to break down the division of labour that we take for granted, to subvert the reified discourses of 'experts,' 'engineers,' 'designers,' 'end-users,' 'miracle workers,' 'plain folks' and so on—an aim I think is possible at least by making the different threads and voices herein aware of one another: by building bridges, or at least by drawing attention to the always-already constructed boundaries inscribing our various relations to technoculture. I want the readers of my work to be able to read these discourses, and evaluate these practices, with a richer, more critical eye than the usual rhetoric surrounding technology (especially in education) affords. This is precisely the goal of criticism in the large; it provides us with improved means for making practical judgements, for identifying the right thing, or the good thing—or at least the "better" thing. In order to answer the question, "how do we recognize a good idea when we see one?" we have to first recognize a good idea. This cannot be done while maintaining a stance of agnosticism or methodological distance. It can only be accomplished by engagement and then elaboration of the layers and spheres of meaning which can be generated within that space of engagement—it demands, as Haraway says, being "finite and dirty."

Personal computing/educational technology as a site of struggle

Thus what essentialism conceives as an ontological split between technology and meaning, I conceive as a terrain of struggle between different actors differently engaged with technology and meaning.
- Andrew Feenberg, Questioning Technology

Philosopher of technology Andrew Feenberg's conception is a response to the burden of essentialism, inherited from centuries of European philosophy. Feenberg's stance—technology as site of struggle—is a starting point for me. I conceive of technology, information technology, educational technology, not as a thing or a discourse or a set of practices to be analyzed or judged, but as a contested ground, over which will be fought the battles concerning democracy, education, and capitalism in this century. Personal computing is "socially constructed"—it is facile to point this out. In its construction is found the site of a struggle for meaning and significance. The substructure of this struggle is in the competing genres of understanding and articulation of what personal computing—and educational computing—are about. My starting assertion is then that the Dynabook is a volley thrown into the midst of that struggle. It is not purely an 'historical' artifact whose day was in the 1970s; rather, the thrust of the Dynabook project is at least as relevant today in the 21st century as it was three decades ago.
This is a battle for meaning, a battle to define what computing is and shall be, and it is far from decided. The stakes of this battle are, I believe, higher than we tend to admit, as our default instrumentalist stance leads us to downplay the significance of this struggle. How computing is defined in these early years (in its incunabula, Kay would tell us) will have enormous consequence for how we live and work and learn and conduct our communities in the future. I mean this in the most concrete sense, but I am not interested here in evaluating any particular educational technology in terms of instructional efficacy. Rather, my interest is at a broader level of citizenship and democracy, in the sense that John Dewey established:

A democracy is more than a form of government; it is primarily a mode of associated living, of conjoint communicated experience, the extension in space of the number of individuals who participate in an interest so that each has to refer his own action to that of others, and to consider the action of others to give point and direction to his own... (Dewey 1916, p. 86)

This may sound like an unabashedly Romantic framing of the issue—Kay has acknowledged as much on numerous occasions, and so should I. I am not neutral; I am not here to play the disengaged observer. I am interested in casting my lot for some genres and not others: genres which define and structure practice, interpretation, and understanding as political and ethical patterns. Historian Mark Salber Phillips studied the rise and fall of genres of historical writing/understanding in 18th-century England. I see the present project as doing something related for the technocultural genres of the past three decades.

My task, then, is to demonstrate how and why technology is political—more specifically, software as politics by other means. There is a superficial interpretation of this identification which concerns the way in which software is designed, released, and licensed in ways which to a greater or lesser extent constrain and direct a "user's" actions and potential (Lessig 1999; Rose 2003); this is an arena of ongoing battles in copyright law, emerging trends like free and open-source software, the reach of open standards, and the market dynamics of dominant corporations such as Microsoft, Adobe, Apple, and others.

But a second and somewhat deeper interpretation of technology and software as politics has to do with a gradual (but by no means consistent nor even particularly widespread) democratization of computing over the past four decades. In this interpretation, it becomes possible to identify and map out the interplay of power and resistance within a technical sphere like the Internet or even your desktop. This is the arena in which much of Alan Kay's project can be considered. This is the level at which the educational implications of computing are most acute, and, indeed, most accessible, if we take the time to look.

A third, more systemic interpretation requires a foray into philosophical theories of technology, locating the political aspects of technology at their lowest and most general level. It is to this topic that I turn next.

Chapter 3: Framing Technology

In order to do justice to a treatment of technological development, I want to first establish a philosophical position with respect to technology.
In doing so, my aim is to set up a theoretical scaffolding upon which I can hang particular reflections in my examination of Alan Kay's vision of personal computing. My intent here is not to provide a broad-strokes "theory of technology"—neither by means of reviewing a hundred years' worth of philosophizing on the topic nor by attempting to construct a water-tight model. What I will try to present here is a provisional framing that draws attention to a number of particularly interesting characteristics of technology.

The framing I have in mind rests on a foundational concept of technology as media and as mediation. Building on this, I will introduce the "sociology of translation" as advanced in the work of Bruno Latour and Michel Callon, and I will elaborate the implications of translation as a principal metaphor for the dynamics of technocultural systems. This semiotic aspect of technology leads to a discussion of what I have called the "mechanics of text," and here I wish to present a latter-day framing of the kind of media ecology Marshall McLuhan outlined in The Gutenberg Galaxy, but with digital software given the central role, rather than print. The emphasis on software itself leads to a consideration of simulation as the paradigmatic practice of dynamic digital media, and I will argue that this is an essentially hermeneutic process, something which should focus our critical attention on its situatedness. Finally, I will discuss some methodological implications raised by this framing of technology, and put the spotlight on the ethical and political considerations of a cultural treatment of technology.

TECHNOLOGY AS MEDIA

My starting point is to treat technology as mediation—or as media, this word bringing with it a particular set of connotations and references (largely set in their current form by McLuhan in the 1960s). To treat technology as media is to establish a perspective probably distinct to the "information age." When the artifacts provoking discussion are the Internet and computing devices, a decidedly different tone is set than with industrial technology such as steel mills and power stations. This distinction immediately brings forth the hermeneutic aspect of technology (Feenberg 1999; 2004), by which I mean the complex of forces and processes which govern the significance of particular technologies to particular people in particular times and places. And it is the hermeneutic question that I mean to foreground here, rather than any suggestion of an essence of technology.

In sum, differences in the way social groups interpret and use technical objects are not merely extrinsic but also make a difference in the nature of the objects themselves. What the object is for the groups that ultimately decide its fate determines what it becomes as it is redesigned and improved over time. If this is true, then we can only understand technological development by studying the sociopolitical situation of the various groups involved in it. (Feenberg 2004, p. 216)

A treatment of technology as media or mediation lends itself also to the exploration of a media ecology. Now, by ecology I do not mean anything green (at least not directly).
Rather, what I mean by ecology is, after Postman (1970), a dynamic, evolving system in which actor and environment are inseparable and mutually constitutive, in which both people and cultural artifacts are considered, and in which responsibilities and ethics are emergent and situated, in which the content-context distinction itself is problematic and should probably be avoided, for everything is the context for everything else. But, lest I risk characterising the whole issue as a sea of aporia, let me appeal back to mediation and hermeneutics, which together provide a vocabulary for dealing precisely with this sort of contextualism.

From a hermeneutic perspective, we are our mediations; mediation is primary, and not something that happens to already-existing entities. What this implies is that, as in McLuhan's famous aphorism, the medium is the message (and that, practically speaking, it is an error to attempt to distill one from the other), we human beings are effectively inseparable from our technology/material culture. Is this so radical a stance? It would not be terribly controversial to claim that human beings are inseparable from our language(s). I mean to claim for technological mediation this same primacy, to position technology as language, or at least to suggest that it be treated similarly. I make this claim not as part of a characterization of modernity, but as a fundamental part of what being human is about.

Putting mediation first dissolves the debate between the idea that language expresses pre-existing thoughts and the notion that we are trapped within the limits of our language; or, similarly, whether culture is something internal or external to individuals. Michael Cole's influential book, Cultural Psychology: A Once and Future Discipline (1996), draws upon the Russian cultural-historical school (after Vygotsky) to elaborate a theory of mediated, contextualized action which "asserts the primal unity of the material and the symbolic in human cognition" (p. 118). In Cole's version of mediated action, artifacts are simultaneously ideal (conceptual) and material:

They are ideal in that their material form has been shaped by their participation in the interactions of which they were previously a part and which they mediate in the present.... Defined in this manner, the properties of artifacts apply with equal force whether one is considering language or the more usually noted forms of artifacts such as tables and knives which constitute material culture. (p. 117)

If we are our mediations, then certainly we cannot posit that ideas precede expression or mediation; nor can we accept that we can only think what our language of expression makes possible, for we can invent languages (and indeed do). So with our technologies: we do not exist in some essential way prior to technological mediation, nor are we subsumed within a technological trajectory. We invent tools, as we do languages, and subsequently our experience and agency are shaped by them. But to say this is merely to ape McLuhan; it is the historicity of these 'inventions' that is the particularly interesting story. Precisely in the dynamics of this relationship and how 'we' (I will put that in qualifying quotes this time) see it—in terms of historicity, class, gender, politics, and so on—are found the interesting and important stories; that which is most worthy of study and the investment that leads to deeper understanding.
Far from seeing technology as something that augments or diminishes—or indeed qualifies—the human, my starting place is that humanity is itself technologically defined, and in myriad ways.¹ Donna Haraway's cyborg trope is a particularly eloquent and evocative address to this notion:

There are several consequences to taking the imagery of cyborgs as other than our enemies. [...] The machine is not an it to be animated, worshipped, and dominated. The machine is us, our processes, an aspect of our embodiment. We can be responsible for machines; they do not dominate or threaten us. (Haraway 1991, p. 179)

1. Bruno Latour's Pandora's Hope (1999, pp. 202-213) traces a possible history of technocultural mediation which begins even before the primordial "tool kit" of stones and sticks: with the very idea of social organization—as technology.

There is a decidedly historical character to such framings and perspectives. How this relation presents itself to us today in the age of the "cyborg" is not what it would have been in the "machine age" of steam and steel; nor would it have the same character in the 13th century, with its 'machinery' of iconography, horsecraft, and emerging bureaucracy. But what is the same in all these cases is the central role of technological mediation.

What is mediation, then, or media? I want to spend some time teasing out the implications of these concepts by going down two different—but complementary—routes. The first route is the work of architect Malcolm McCullough; the second is by way of the sociology of technoscience of Bruno Latour and Michel Callon.

McCullough's Framing of Media

To turn to the more specific formulation, then: what is a medium? To my mind, nobody answers this better—and in a definitively active, constructive, and contextualist mode—than Harvard architecture professor Malcolm McCullough, in his 1998 book Abstracting Craft. McCullough writes:

Tools are means for working a medium. A particular tool may indeed be the only way to work a particular medium, and it may only be for working that medium. Thus a medium is likely to distinguish a particular class of tools. [...] Sometimes a medium implies such a unique set of tools that the whole is referred to without differentiation. Painting is a medium, but it is also the use of specific tools and the resulting artifact: a painting. The artifact, more than the medium in which or tools by which it is produced, becomes the object of our work. [...] Artifact, tool, and medium are just different ways of focusing our attention on the process of giving form... In many refined practices, the perception of a medium surpasses any perception of tools. If a medium is a realm of possibilities for a set of tools, then any immediate awareness of the tools may become subsidiary to a more abstract awareness of the medium. (McCullough 1998, pp. 62-63)

McCullough positions media in the midst of the "process of giving form"—that is, practice. His framing integrates material and conceptual resources equally, suggesting that these become distinct as we focus our attention differently. McCullough clearly echoes Heidegger's famous ontology of the hammer as ready-to-hand. He also, as we will see, parallels Latour's vocabulary of articulation and network; the hand and tool and medium become a network, aligned in the articulation of the work.
McCullough's exploration of the subtleties of craft—and particularly handcraft—moves from tool to medium to artifact seamlessly, drawing his attention to the spaces of tension and grain within each, letting his attention fall away where it is not needed, according to the dynamics of actual, situated practice.

Note the emphasis in McCullough's account on work, and in particular form-giving work. This is a participatory stance, not a spectatorial one; it bases the ontology of media in the actor, and not in the spectator or consumer. Compare McCullough's framing with that of Peter Lyman, writing on the computerization of academia:

[M]ost fundamentally, most people only want to 'use' tools and not to think about them; to nonexperts, thinking about tools is a distraction from the problem presented by the content of the work [...] Whereas a machine has a purpose built into its mechanism, and a tool requires the novice to acquire skill to realize this purpose, a computer is a field of play only to an expert. (Lyman 1995, p. 27)

Lyman's analysis makes 'tools' into rather ahistorical black boxes while appealing to higher-order goals—which may prove to be something of a false economy. How a particular tool comes to be neatly integrated in a particular practice—to the point where it becomes subsumed into the practice—is a profoundly historical and political process (Franklin 1999). This notion is often troublesome, but there exists a case in which nearly everyone recognizes the historicity of 'practice': the relationship between a musician and her instrument. Both McCullough and Lyman touch on musicianship, but treat it rather differently. Lyman goes so far as to give special status to this case, claiming that the musician/instrument connection transcends tool use: "In performance, the musical instrument and player interact in a manner that cannot accurately be described as a human-tool relation" (Lyman 1995, p. 29). McCullough's treatment is much more involved: "acute knowledge of a medium's structure comes not by theory but through involvement" (McCullough 1998, p. 196). This awareness or knowledge has two faces: one is the familiar falling away of intermediaries to allow consciousness of the medium or practice itself. In the other are the traces of culture, knowledge, and history that become wrapped up in our tools, media, and artifacts. The guitar may disappear from the consciousness of the musician, such that she becomes aware only of the music, but over time, the instrument will bear the marks of her playing, will show wear from her hands, be stained by the moisture from her skin; conversely, her hands and her musicianship in general will bear the complementary wear patterns. Our tools, media, and artifacts are no less situated than we are, and they share the temporality of our existence. They have history, or historicity, or horizons that we merge with our own. The social structure of this shared historicity is something akin to literacy, a topic I will return to at length.

Latour's Mediation: Articulations and Translations

Bruno Latour entered the consciousness of English-language social science with his 1979 book with Steve Woolgar, Laboratory Life, which has some claim to being the first real ethnography of a scientific laboratory.
Latour and Woolgar were interested in explicating the process of scientific investigation, and the tack they took was one which would largely define Latour's career: science as inscription—that is, the turning of things into signs, and, therein, the application of semiotics (in the mode of A. J. Greimas) to science and technology theory. Latour is now more famous for his association with colleague Michel Callon and the so-called "Paris school" of science and technology studies, and with a school of thought about science and technology called "actor-network theory" (later reified simply as ANT). Actor-network theory has been notoriously controversial (see, for example, Collins & Yearley 1992; Bloor 1999) and, I think, broadly misunderstood—in part simply because the wrong elements are emphasized in the moniker "actor-network theory." Latour's repeated call has been for symmetry in how we treat human and nonhuman agency, the social and technical, and his attempts at making this clear have required constant re-iteration and clarification over the past two decades.

James' Choo-choos (December, 2003)

At 19 months, my son James is enthralled with trains. He has a small wooden train set, and he is fascinated by it: by the way the cars go together, the way they go around the track, the way the track goes together. He looks for trains out in the world and in books. He sees things, like fences, and says "choo choo," presumably seeing them as if they're tracks.

Trains are educational media for James. Here's how: we cannot assume that a train or trainset is for him what it is for us. A train cannot be said to simply "be"; a train is the product of heavily layered interpretation, tradition, enculturation. But for James, encountering the world for the first time, a train is something new; he has no idea what a 'real' train is or what it is about—how could he? What a train is, is so under-determined in his case that his understanding of the significance of what a train is must be completely different from, say, mine. We can talk to him about it, because there is a common referent, but its significance in his world is—must be—so very different from mine.

James at 19 months is just beginning to develop an overall sense of the world, as opposed to knowing fragmentary things here and there, the episodes of immediate experience. Now he is beginning to systematize; the trainset—and his railroad trope more generally—is one of the first systems of things that he's really engaged with. The train and its microworld is a whole system: it has a grammar, a set of rules, constraints, and possibilities. James uses the train set as a model, or a frame, to look at the rest of the world. The trainset is a language, a symbolic system, and in the way he uses it as lenses with which to see the rest of the world, it is almost entirely metaphoric. Of course, trains are metaphors for adults too, but in a much different, and perhaps less dynamic, way.

As he grows up, other things will come to take the place of the trainset as his lens. He learns systems of classifications (animals, colours, weather, etc.) and media like pictures and text.
Books and print are already in line to become powerful symbolic tools for his understanding the world, but not yet; a book is still merely a container of stories for him, rather than a "personal dynamic medium" like the choo-choo train.

His first theoretical book in English, Science in Action (1987), made the first broad strokes of a general picture of his ideas. The book was influential but its bold constructivist claims were seen by many as dangerously out on a limb (e.g., Bricmont & Sokal 2001), such that Latour has published substantial reworkings and reframings, notably the excellent essay "Where are the Missing Masses? The Sociology of Some Mundane Artifacts" (1992), We Have Never Been Modern (1993), and Pandora's Hope: Essays on the Reality of Science Studies (1999). There is a general trend in these works from the initial semiotic approach Latour and Woolgar took towards a much more all-encompassing ontological stance, one that bears some resemblance to phenomenological theory. Since the early 1990s, Latour's works have been characterized by his use of a special vocabulary aimed at getting around semantic ambiguities raised by his theorizing, and influenced by A. N. Whitehead's event-oriented process philosophy.²

2. A. N. Whitehead's 1929 book, Process and Reality, is the touchstone here; one seemingly picked up by Alan Kay in his early writings as well (see Kay 1972).

In Pandora's Hope (1999) Latour provides a comprehensive treatment of technical mediation, presenting the vocabulary of Paris-school science studies—associations, delegation, detours, goal translation, interference, intermediaries, programs of action, shifting in and shifting out—all of which elaborate Latour's theme of the alliance and alignment of resources, both human and nonhuman, and the ongoing construction of technosocial systems—or networks—thereby.

Technical artifacts are as far from the status of efficiency as scientific facts are from the noble pedestal of objectivity. Real artifacts are always parts of institutions, trembling in their mixed status as mediators, mobilizing faraway lands and people, ready to become people or things, not knowing if they are composed of one or of many, of a black box counting for one or of a labyrinth concealing multitudes. Boeing 747s do not fly, airlines fly. (Latour 1999, p. 193)

A key motif in Latour's recent writing is that of crossing the boundary between signs and things. Two entire chapters of Pandora's Hope are devoted to working through this process
a shift i n the definition of goals and functions, but also a change in the very matter of expression. (p. 186) But, Latour notes, i n anticipation of the 'humanist' critique, W e have not abandoned meaningful human relations and abruptly entered a world of brute material relations—although this might be the impression of drivers, used to dealing w i t h negotiable signs but now confronted by nonnego- tiable speed bumps. The shift is not from discourse to matter because, for the engineers, the speed bump is one meaningful articulation w i t h i n a gamut of propositions [which have unique historicity]. Thus we remain in meaning but no longer in discourse; yet we do not reside among mere objects. Where are we? (p. 187) Latour's foregrounding of articulation and translation as key movements i n the relation- ships between us and our material realities makes mediation foundational, and, as w i t h M c C u l l o u g h ' s practice-oriented framing, it reminds us that mediation is something ongo- ing, rather than a discrete step or a qualifier of otherwise stable entities. Stability i n Latour's writings is an effect, not a starting point; i n Pandora's Hope he is careful to distinguish between the idea of "intermediaries," w h i c h look like objects, and "mediations," w h i c h produce them. By working out a vocabulary capable of making such distinctions, Latour Chapter 3: Framing Technology 76 goes m u c h farther than most i n giving us a comprehensive philosophical framework for understanding mediation. T E C H N O L O G Y A S T R A N S L A T I O N It is in the detours that we recognize a technological act; this has been true since the dawn of time.... And it is in the number of detours that we recognize a project's complexity. - Bruno Latour, Aram is M i c h e l Callon's article "Techno-economic Networks and Irreversibility" (1991) is probably the most lucid single articulation of the position that has come to be called "actor-network theory," w h i c h problematizes the individual actor by embedding it i n a network and a temporal flow. The network comes to be articulated by means of what C a l l o n calls "displacements"—that is, the re-arrangement and re-definition of various actors' goals, plans, and sub-plans such that these various elements come to be "aligned," forming a network or chain of articulations, and thereby allowing interactions of greater extent, power, and durability. The dynamics of this process are what C a l l o n and Latour spend most of their time explicating; it is a time-consuming business, because each such displacement and alignment depends upon a packaging of complexity into an apparent "black box" of relative stability and dependability. The Paris school locates the methodology of science studies i n the unpacking of these boxes, and hence, i n the analysis of these networks. A n d yet, to focus solely on the nouns here—actors, resources, intermediaries, networks—is to miss m u c h of the point. Callon's writings also provide a better slogan, one w h i c h better focuses o n the process of articulation and alignment: the sociology of translation (Callon 1981; 1986; C a l l o n & Latour 1981). In taking a complex articulation (it could be the reading of data, the negotiation of fund- ing, the assembly of particular material apparatus or group of people) and rendering it as a resource to be used i n a larger assembly (with larger/different goals), that particular articula- tion is translated into something more o r less different. 
It has been re-framed and thus re-contextualized, embedded in a different practical or discursive context; in doing so, its meaning or significance and the work it does changes. This is what is meant by translation.³ Latour explains:

In addition to its linguistic meaning (relating versions in one language to versions in another one) [translation] has also a geometric meaning (moving from one place to another). Translating interests means at once offering new interpretations of these interests and channelling people in different directions. 'Take your revenge' is made to mean 'write a letter'; 'build a new car' is made to really mean 'study one pore of an electrode'. The results of such rendering are a slow movement from one place to another. The main advantage of such a slow mobilization is that particular issues (like that of the science budget or of the one-pore model) are now solidly tied to much larger ones (the survival of the country, the future of cars), so well tied indeed that threatening the former is tantamount to threatening the latter. (Latour 1987, p. 117)

3. Latour and Callon credit French philosopher Michel Serres with this usage of "translation."

The extent to which these ties are "solid," or, for that matter, "irreversible" is the matter of some discussion within the literature. Callon's 1991 article suggests that successful network alignments (and thus translations) are indeed irreversible, and some of Latour's early writings seem to support this (his use of the term chreod—Greek for 'necessary path'—taken from biologist Waddington, confirms this reading; see Latour 1992, p. 240). Later, however, Latour seems to turn from this view, arguing that irreversibility is only the product of continual energy and organization (see Latour 1996 for a full treatment of this theme), that systems or networks may undergo crises at any point which tend to misalign these same resources, turning tidy black boxes (the building blocks of more complex assemblies) back into complexities themselves. Translation is thus more a process than a product; it is "the mechanism by which the social and natural worlds progressively take form" (Callon 1986, p. 224).

The framing of technology as mediation I offered earlier means, to use Latour and Callon's rich keyword, technology as translation: the (re-)articulation of the world in new forms and contexts, thereby effecting transformations of its 'Being'. Now, if technology is translation, isn't something always "lost in translation"? Of course it is. My appropriation of the term in the service of this exposition is intended to do a particular kind of work; it foregrounds some aspects of a theory of technology and deprecates others. A quote from Callon and Latour shows this in all its political and metaphorical richness:

By translation we understand all the negotiations, intrigues, calculations, acts of persuasion and violence, thanks to which an actor or force takes, or causes to be conferred on itself, authority to speak or act on behalf of another actor or force. (Callon & Latour 1981, p. 279)

Let me leave this particular line suspended for a moment, however, while we return to a mundane example in order to work through some of the ways in which technology as translation can be articulated. Let us take the old shopworn example of the hammer.⁴
A translation-oriented way of looking at the hammer is that it is the technology that translates a nail into a fastener. This lands us mise-en-scène in a network of other articulations: a pointy stick of steel is translated into a nail; a nail becomes a fastener of wood; it translates two pieces of wood into a construction. Now that we have those pieces, the hammer can translate the arm into a nail-driver; the hammer-wielder is translated into a carpenter; a stack of lumber is translated into a house-frame; and so on. A whole concert of temporal displacements and ontological shifts occurs in the rendering of a hammer and a nail into 'functional' pieces. Some of these translations are more permanent than others: the house frame hopefully has some stability; the carpenter's professionalism and income are likely dependent on that translation having some extent in time. The hammer variously translates us into builders, the things it hits into fasteners, and the things we hit nails into into artifacts. Heidegger's point about a different ontology being revealed when we hit our thumbs is true, but what the hammer does when we hit nails is much more important. As Latour takes pains to point out, what is at stake are longer and more durable chains of associations.

4. Heidegger's famous rendering makes the hammer "ready-to-hand" in the practice of hammering. McLuhan's framing makes a similar move, making the hammer an extension of the arm (or fist, depending which political nail one is trying to strike).

Now, as we have all heard, when all one has is a hammer, everything looks like a nail, and this is nowhere so true as when musing about technologies. Lest we treat all technology as a hammer, remember that the hammer and nail example is but one set of articulations; long pants and thatched roofs and texts and numbers are all technologies, too, as are personal computers and nuclear reactors. Each is a particular articulation of networks of varying degree. In each case, different translations are effected. The common thing among these is not the kind or scope of changes which are brought about, but that the fundamental dynamic is one of translations, and chains of translations. Latour goes so far as to deconstruct the divide between so-called 'modern' and 'pre-modern' societies along these lines; what we see, rather than some kind of quantum difference, is a difference in degree, largely expressed in the scope and scale of translations which can be effected, and thus the relative length of the networks that can be sustained (Latour 1987; 1993). Latour offers the example of the French explorer Laperouse, who visited the east Asian island of Sakhalin briefly in 1787, ascertained from the people living there that it was in fact an island (and not a peninsula extending from Siberia, as the French suspected), and was able to record this knowledge in descriptions sent back to France (Latour 1987, pp. 215-218). There is no qualitative divide between the thinking of Laperouse and the people of Sakhalin (who, Latour notes, drew maps in the sand for the French), but there is a considerable difference in their relative ability to translate knowledge into "immutable, combinable, mobile" forms (p. 227) which in turn facilitate longer and longer-lasting chains of resources.
Writing, cartography, and celestial navigation made up part of the technology of translation for 18th-century French explorers; so too did muskets, cannons, and square-rigged ships. Latour's account of 'modernity'—though he explicitly forswears the very notion—is one of proliferations of translating technologies, and the longer and longer networks that result. If there is an essence of 'modern' technology, it surely has this characteristic: that it is more deeply intertangled and has more components than seen before. But, Latour insists, this is a difference of degree, not of kind. Translations have been the business of human culture from the Tower of Babel on forward, and the proliferation of them in the modern world is nothing new, in essence.

This is a point upon which Latour departs from many technology theorists who hold that modern technology is oppressive, totalizing, or one-dimensional. The intersection of these ideas yields interesting questions; in particular, Heidegger's classic examination of the essence of technology, "The Question Concerning Technology" (1953/1993), could be seen as a translation-oriented approach, with Heidegger's concept of "Enframing" as the master translation, in which human agency is itself translated into a means-and-ends rationality. But Latour rejects Heidegger's conclusions on anti-essentialist grounds (Latour 1993, p. 66), claiming that Heidegger grants far too much power to "pure" instrumental rationality (which Latour characterizes as a myth of modernity), and that actual practice is far more complex than essentialist philosophy admits. Let alone an essence of technology to which we have succumbed, he writes,

The depth of our ignorance about techniques is unfathomable. We are not even able to count their number, nor can we tell whether they exist as objects or assemblies or as so many sequences of skilled actions. (1999, p. 185)

The alignment of networks and the maintenance of translations is difficult work, Latour argues, but it applies across all facets of culture and society: "...it is no more and no less difficult to interest a group in the fabrication of a vaccine than to interest the wind in the fabrication of bread" (1987, p. 129). In his extended narrative on the aborted development of Aramis, an ambitious new rapid-transit system in Paris, Latour waxes eloquent about the difference between the actual and the potential:

The enormous hundred-year-old technological monsters [of the Paris metro] are not more real than the four-year-old Aramis is unreal: they all need allies, friends, long chains of translators. There's no inertia, no irreversibility; there's no autonomy to keep them alive. Behind these three words from the philosophy of technologies, words inspired by sheer cowardice, there is the ongoing work of coupling and uncoupling engines and cars, the work of local officials and engineers, strikes and customers. (1996, p. 86)

Ongoing work is what sustains technosocial systems over time, what makes them necessarily collectives of both human and non-human actors, and what makes them complex (if not genuinely chaotic), thereby eluding essentialist reduction. This is what makes "translation" a better watchword than "actor-network." But remember that translation is fraught with power dynamics; the business of arranging the world into longer chains of mobilized (and therefore transformed) actors exacts a price.
Standardization and the Tower of Babel

In the sociology of translation, the key dynamic in the extension and sustenance of technosocial networks/systems is the translation of heterogeneous and complex processes and articulations into seemingly simple "black boxes" which can be effectively treated as single, stable components. Callon wrote that "the process of punctualisation thus converts an entire network into a single point or node in another network" (Callon 1991, p. 153). It remains possible to open these black boxes and reveal the complex details within, but what makes for durable (or, by extension, "irreversible") associations is the extent to which we gloss over their internals and treat a whole sub-process or sub-assembly as a single object. Latour expresses it by making a distinction in his vocabulary between "intermediaries" and "mediations" (Latour 1996, p. 219; 1999, p. 307). Intermediaries, which appear as actors, are neat black boxes; mediations, which appear as processes, are open, complex, and irreducible to a particular role or function. The two terms are different aspects of the overall process of articulation, viewable as if from above and from below.

The process of making complex details conform to stable black boxes which can then be combined and exchanged is precisely that of standardization—a Janus-faced concept⁵ that has been the travelling companion of translation since the prototypical technological project, the Tower of Babel. The double loop of translation into black boxes, standardization, and subsequent translations has a particularly interesting implication for technology: it introduces (or reveals) a semiotic character to the technical. All technology—on this view—is information technology, or, to put it in the converse, information technology is the paradigm for all consideration of technology.

5. See Bowker & Star's 1999 Sorting Things Out: Classification and its Consequences for a treatment of this dynamic.

This view of technology puts it firmly in a linguistic frame, rather than, say, an 'economic' one concerned with commodification, or a 'political' one concerned with domination. It is a view very much influenced by Latour's explication of the "circulating reference" of signs and things. It is (or should be) recognizably McLuhanesque, insofar as it once again puts mediation first. It is a view which puts the history of technology in a particular light: the material rendering and transformation of any artifact is inseparable from its symbolic renderings and transformations. Technology is, in this light, a language of things. This is but one way to look at technology, but it is one which I think has much to offer to the present study.

Technology is fundamentally and essentially about translation: the symbolic and material rendering of something into something else. Note that the symbolic is the primary term here, the material is secondary; this makes me an idealist and not a materialist, I suppose, but I mean it this way: technologies are symbolic means of re-ordering the world; in this sense they are just like language.

THE MECHANICS OF TEXT

Lewis Mumford has suggested that the clock preceded the printing press in order of influence on the mechanization of society. But Mumford takes no account of phonetic alphabet as the technology that had made possible the visual and uniform fragmentation of time.
- Marshall McLuhan, Understanding Media

The view of technology I am elaborating here puts the development of the phonetic alphabet as the defining technology—at least the defining technology of the Western cultural tradition I can claim to inherit. Language, writing, and standardization of print are all advances in terms of greater and greater translatability; the phonetic alphabet is itself the longest-lived and farthest-reaching of all such technologies. Since its original development by the Phoenicians in the third millennium BC, it predates most living languages, has spanned the lifetimes of any number of particular writing systems, and is by far the most influential standardizing principle in Western history. The alphabet underlies all our most important machines: from books and clocks to computers and networks (McLuhan 1962; 1964). A theory of technology that locates its essence in translation—that translatability is the telos of all technology—is in principle alphabetic, in the various senses that McLuhan outlined: the alphabetic paradigm leads to systematization, organization, ordering schemes. The alphabet is the prototype for standardization: for interchangeable parts and mass production. The alphabet anticipates not just literacy and printing and empire, but mathematics, algebra, mechanization, the industrial revolution, the scientific revolution, and, par excellence, the information revolution.

Only alphabetic cultures have ever mastered lineal sequences as pervasive forms of psychic and social organization. The breaking up of every kind of experience into uniform units in order to produce faster action and change of form (applied knowledge) has been the secret of Western power over man and nature alike.... Civilization is built on literacy because literacy is a uniform processing of a culture by a visual sense extended in space and time by the alphabet. (McLuhan 1964, pp. 85-86)

What McLuhan realized so very early on was that all technology is information technology; by extension, digital technology is the epitome of technology, because digital technology makes the relationship between texts and machines real in real time. I would like to take a moment to explain what that means.

The equation of texts and machines is more than just a convenient metaphor. I hope to show that it is (has always been) 'literally' true, and that this becomes clearer and clearer with the development of digital technology. I have argued that technologies are essentially about translation—about the symbolic and material rendering of something into something else. This is true of hammers and pillowcases and tea cozies; but it is especially true (or, more accurately, it is especially apparent) of technologies of representation. In late alphabetic culture, we have developed an enormously powerful toolkit of representational technologies: from narrative, we have expanded the repertoire to include accounts, algorithms, arguments, articles, equations, mappings, proofs, tables, theorems, theories, transactions, and so forth. All of these technologies are representational, obviously, but to put it more forcefully, they are all technologies of translation; they are all machines for capturing and changing the rendition of the world in some measure.
We don't think of them as machines because they operate relatively 'quietly'; in our post-industrial imagination, steam power is still our paradigmatic case of mechanization, despite this particular technology's relatively brief lifespan! But perhaps the old trope of the "mechanistic universe" is giving way to one of a textual universe. In the latter part of the 20th century, biologists began to recognize technologies for translation in the mechanisms of cells, and the field of bioinformatics has sprung up around this. Here is another order of information-rendering machines, built of proteins instead of ink or silicon, but recognizable if not yet/quite interpretable.⁶

6. The qualifier here is not intended to soft-pedal a technological determinism, nor to problematize our relative proximity to instrumental power over the genome. Interpretation is always not yet/quite possible. Genomics and bioinformatics have shifted this dynamic to new terrain, but I am not at all sure that the core challenge is so different from interpreting written literature. Haraway's extensive (1997) critique of "gene fetishism" makes the same kind of contextualist argument against literalism that literary critics have mounted against determinate formalism. See e.g. Fish 1980.

Language, writing, and standardization (and therefore mechanization) of print are all advances in terms of greater and greater translatability. Numbers, mathematics, and especially algebra are enormously powerful technologies of translation. For example, trigonometry is the translation of the idea of a circle into a number of relationships between parts of triangles such that both the circle and the triangle can be seen and related in new ways—facilitating the extension of technosocial networks of greater extent (quite literally to the moon and back).

On 'Abstraction'

I want to take a moment here to address the popular notion that information technologies lead to (or are achieved by) greater and greater abstraction. Jean Lave (1988, p. 40ff; Lave & Wenger 1991, p. 104) and Bruno Latour have gone to lengths to directly warn against this simplistic concept. Rather, the concept of translation renders abstract and concrete as different languages; it does not set these up in an absolute hierarchy; to do so is to revert to the old vertical structuralist logic of signifier/signified that I want specifically to avoid. Abstraction is a dangerous metaphor, as Latour notes:

The concrete work of making abstractions is fully studiable; however, if it becomes some mysterious feature going on in the mind then forget it, no one will ever have access to it. This confusion between the refined product and the concrete refining work is easy to clarify by using the substantive "abstraction" and never the adjective or the adverb. (Latour 1987, p. 241)

Technologies like the alphabet and the computer don't work because of abstraction; if anything, they are effective because of their greater concreteness. It can be argued that the turning point in the development of the digital computer, bringing together the thinking of Alan Turing and George Boole, was when Claude Shannon finally achieved a sufficiently concrete means of representing logical relationships. That there exists a pervasive fetishism of abstraction, especially in technoscience, is not to be forgotten, however—to the point where Turkle & Papert (1991) were led to argue for a "revaluation of the concrete."
The concrete was of course there all along, despite the mythology of 'abstraction.' The point I want to underscore here is that the dynamics of translation in alphabetic culture are not between brute concreteness and fluid abstractions, but rather between different forms of concreteness, lending themselves to different kinds of practices.

Digital translations

Different media are variously translatable; different genres are variable in their affordances too. Written text has been the very fount of translatability (in all senses of the word); image less so, and performance less still (hence the oft-quoted—and as oft-misattributed—"talking about art is like dancing about architecture"). The alphabet lends itself to translation by reducing the material of representation to a couple of dozen glyphs that can be assembled and reassembled in a famously infinite number of ways. Digital computing extends this reduction by limiting the material of representation to just two states. The result is that 'everything' is renderable digitally (hence, in a crude sense, "multimedia"): a step beyond what could be accomplished with writing, which can only practically render what could be spoken or counted. But the extent to which text, image, and performance can be "digitized" is dependent upon the facility with which we can interpret (that is, translate) these as digital patterns: arithmetic systems were the first to be digitized, back in the 1940s; text-based media came next, and have been so successful as to suggest a revolution in reading, writing, and publishing. Still images are now in widespread digital form; music (both as digital recording and as digital notations like MIDI) has presented few challenges—the digital representation of music is by now the default format. Moving image (video) has been technically difficult (owing largely to the sheer volume of bits that it requires), and the digitization of performative genres like dance has been the least widespread digital form, though certainly not untried.

What is Digital?

"Digital" simply refers to digits, and what are digits but fingers? Digital literally means counting on your fingers, assigning a finger to each thing counted. In the sense of a computer, this is exactly true, except the computer has only one finger, so it counts in 1s and 0s. A commonly encountered criticism of computers and digital technology is that everything is reduced to an either-or distinction. This is as vacuous as saying that everything in English literature is reduced to 26 letters. But this simplistic critique is based on a misplaced correspondence theory of meaning. If we remember that meaning is in the interpretation, rather than the representation, we quickly get beyond this into more interesting terrain. But even mechanically, we make up more complex representations than one/zero or yes/no by collecting bits into larger patterns, or in establishing prior context—exactly as we do with words and sentences. As the layerings proliferate, we gain more and more expressive power, and more demands are made on the interpretive system—just like with written literature. And, as we will see in the concept of "late binding," interpretation can be postponed almost indefinitely—just like in post-structuralism!
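The sidebar's point that meaning lies in the interpretation rather than the representation can be made concrete in a few lines of code. The sketch below (in Python, chosen here purely for brevity; the particular bit pattern and its two readings are my illustrative assumptions, not anything drawn from the sources discussed) takes one and the same string of binary digits and reads it under two different conventions:

    # One pattern of bits, two interpretations. The pattern itself does not
    # "contain" either meaning; each reading is supplied by the convention
    # brought to bear on it.

    bits = "0100100001101001"  # sixteen binary digits

    # Convention one: read the whole pattern as a single unsigned integer.
    as_number = int(bits, 2)
    print(as_number)  # prints 18537

    # Convention two: read the same pattern as two 8-bit ASCII character codes.
    as_text = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
    print(as_text)  # prints "Hi"

Nothing in the sixteen digits themselves decides between the two readings; the 'meaning' belongs to the interpretive system, which is exactly the point made above about bits, words, and layered context.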
What is not so obvious here is whether a particular medium lends itself to digital representation as a result of the material affordances of that medium versus our ways of thinking about particular genres. For instance, in the past few years, academic journal articles have been largely re-realized in digital rendition and distributed via digital networks, to the point of near ubiquity today. But the relative acceptance of such a shift in this particular genre is in marked contrast to the well-hyped 'e-book' idea, which has promised to move novels and other popular literature into digital formats. That the latter has failed to appear on any significant scale has nothing to do with the material aspects of the medium—novels are composed of the same stuff as journal articles—the difference is cultural: of genres and practices. Genres, of course, do not operate only in the space of literature; they also form the substructure of technological practice. What is interesting is the examination of how both literary genres and technical ones wax, wane, transform, and persist variously in response to the dynamics of media ecology.

But despite the endless (and endlessly interesting) vicissitudes of genre, digital representation implies that everything is translatable into everything else, since the underlying representational mechanisms of all kinds of digital media are common and as simple as possible. The poststructuralist credo that there is nothing outside the text becomes true in a very literal sense: everything becomes a 'text' because everything is, in a literal sense, a text; the genome is but one famous example. The translation of phenomena to textual renderings (which can then be further translated) is the core of what we call science, says Latour; his detailed description of the life sciences shows a complex but systematic process of movement along the continuum "between signs and things" (Latour 1999). By this reading, science is a particular form of reading and writing. Indeed, to anyone's eye, science has certainly spawned particular forms of reading and writing; most 'modern' document genres, whether they pertain to matters of biochemistry or to journalism, owe their existence to the kinds of translation processes that have been developed in the sciences. Conversely, documents themselves are a very interesting kind of technology, and they do a very particular kind of work (see John Seely Brown & Paul Duguid's 1996 "The Social Life of Documents").

Documents operate at a different level than texts per se, in that they are technologies that operate on audiences, rather than on individual readers and writers. And yet, in every document is a text doing its own work. The very idea that there can be numerous layerings of text and document (a complex which we commonly call "discourse") underscores the notion that there are large-scale networks of meaning-making at work. The semiotic analogy is doubled and returned: not only do we have tools operating as signs, we have signs that act like tools as the machinery of text meets the semiotics of publication.

Software

Thanks to computers we now know that there are only differences of degree between matter and texts.
- Bruno Latour, Aramis

In their book The Machine at Work: Technology, Work, and Organization, Keith Grint and Steve Woolgar make the first explicit equation of texts and machines that I have been able to find in the sociological literature (though the idea has older roots in literary criticism; see Landow; Aarseth; etc.). What falls out of this equation is that the analogy of reading and writing becomes possible with reference to machines, rendering machines "hermeneutically indeterminate" (Grint & Woolgar 1997, p. 70). This is all very well, and no doubt lends some valuable light to the study of various technologies. But Grint and Woolgar decline to take the next step: they persist in talking (as do most technology theorists) of machines in the "steam-engine" sense: as physical mechanisms, made out of hard stuff and powered by shovelling coal or at least plugging in the power cord. Although computing technology figures in Grint and Woolgar's analysis, their attention sticks with the plastic-and-metal object on the desk. It does not venture inside, to software. Even while making a case for the absolute blurriness of the line between texts and actions, Latour too holds tight to the conventional divide between steel and words:

We knew perfectly well that a black box is never really obscure but that it is always covered over with signs. We knew that the engineers had to organize their tasks and learn to manage the division of their labour by means of millions of dossiers, contracts, and plans, so that things wouldn't all be done in a slapdash manner. Nothing has a bigger appetite for paper than a technology of steel and motor oil.... Every machine is scarified, as it were, by a library of traces and schemas. (Latour 1996, p. 222)

If there's anything that's been shown in a half century of computing, it is that machines are not reliant on steam and steel. Machines are pattern-processors. That one particular pattern is in steel and another in fabric and another in bits is inessential. Where Latour and Grint & Woolgar neglect to go is precisely where I do want to go: the machine is text—and this is not just an analogy that makes literary techniques applicable. Machines are literature, and software makes this clear. This may not be yet/quite apparent in the public imagination, owing largely to our collective hardware, or gadget, fetishism. But the argument for placing the focus of our technological inquiries at the software level rather than at the hardware level is very strong. Pioneering computer scientist Edsger Dijkstra wrote, in 1989:

What is a program? Several answers are possible. We can view the program as what turns the general-purpose computer into a special-purpose symbol manipulator, and it does so without the need to change a single wire... I prefer to describe it the other way round. The program is an abstract symbol manipulator which can be turned into a concrete one by supplying a computer to it. (Dijkstra 1989, p. 1401 [italics added])
The efficacy of this perspective has been apparent within computer science since the late 1950s, and in particular, since the advent of John McCarthy's computer language Lisp (McCarthy 1960), hailed as doing "for programming something like what Euclid did for geometry" (Graham 2001).⁷ The significance of Lisp and the ways of thinking Lisp ushered in have been obscured by the popular rendering of Lisp as "an AI language" and therefore subsumed within the quest for artificial intelligence. But Lisp's connection with AI is an "accident of history" (Graham 1993), one which I will not dwell on here. What I do want to foreground here is the idea of textual constructions—programs and programming languages—acting as machines in their own right. Of course it is possible to quibble with Dijkstra's formulation and strike a hard materialist stance,⁸ insisting that the electronic circuitry is the machine and that programs are "superstructure." What McCarthy's Lisp provides is a solid example in which it clearly makes more sense to view it the other way around: that the real machine is the symbolic machinery of the language and the texts composed in that language, and the details of the underlying hardware are just that: details.

7. Alan Kay called McCarthy's contribution the "Maxwell's equations of software" (Kay & Feldman 2004).

8. Friedrich Kittler famously made this move in his (1995) essay, "There is No Software"—ignorant of both Dijkstra and McCarthy, as far as I can tell—in which he argues that everything is indeed reducible to voltage changes in the circuitry. Kittler's argument is clever, but I don't find that it actually sheds any light on anything. It is rather reductio ad absurdum, leaving us with no better grasp of the significance of software—or any means of critically engaging with it—than a study of letterforms offers to the study of English literature.

A half-century of Lisp⁹ provides considerable evidence that the details steadily decrease in importance. In 1958, when the first implementation was made, it was of course a matter of enormous resourcefulness and innovation on the part of the MIT Artificial Intelligence lab, and so it remained, in rarified academic circles well into the 1970s, when implementations began to proliferate on various hardware and as Lisp became core computer science curriculum at MIT and other institutions (see Steele & Gabriel 1993; Abelson & Sussman 1996). Today, downloading and installing a Lisp implementation for a personal computer is a matter of a few moments' work. But more importantly, as Paul Graham's (2001) paper "The Roots of Lisp" demonstrates for modern readers (as it is a reworking of McCarthy's original exegesis), Lisp's core simplicity means it doesn't require a 'machine' at all. Graham's paper (it is important to dwell for a moment on the word "paper" here) explains how, in the definition of a dozen or so simple functions—about a page of code—it is possible to create a system which is a formal, functional implementation of itself. The machinery of Lisp works as well in the act of reading as it does in digital circuitry.

9. That Lisp has been around for half a century provides us with a near-unique perspective on the evolution of computing cultures; hence my argument for a cultural history of computing.
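To give a sense of what "a page of code" amounts to here, the following sketch renders a fragment of that symbolic machinery in Python rather than in Lisp's own notation; the host language, the function names, and the closing example are my illustrative choices, not McCarthy's or Graham's definitions. It reads parenthesized expressions and evaluates a handful of Lisp's primitive forms (quote, atom, eq, car, cdr, cons, cond, lambda):

    # A minimal sketch of a Lisp-like reader and evaluator, for illustration
    # only; it follows the spirit, not the letter, of McCarthy's 1960 paper.

    def tokenize(src):
        return src.replace("(", " ( ").replace(")", " ) ").split()

    def parse(tokens):
        token = tokens.pop(0)
        if token == "(":
            expr = []
            while tokens[0] != ")":
                expr.append(parse(tokens))
            tokens.pop(0)  # discard the closing ")"
            return expr
        return token  # a bare symbol (an atom)

    def evaluate(x, env):
        if isinstance(x, str):  # a symbol: look up its value
            return env[x]
        op = x[0]
        if op == "quote":  # (quote e) returns e unevaluated
            return x[1]
        if op == "atom":  # t if the value is a bare symbol, else the empty list
            return "t" if isinstance(evaluate(x[1], env), str) else []
        if op == "eq":  # t if both values are the same atom
            a, b = evaluate(x[1], env), evaluate(x[2], env)
            return "t" if a == b and isinstance(a, str) else []
        if op == "car":  # the first item of a list
            return evaluate(x[1], env)[0]
        if op == "cdr":  # everything after the first item
            return evaluate(x[1], env)[1:]
        if op == "cond":  # (cond (test branch) ...): first true branch wins
            for test, branch in x[1:]:
                if evaluate(test, env) == "t":
                    return evaluate(branch, env)
            return []  # no clause matched
        if isinstance(op, list) and op[0] == "lambda":
            # ((lambda (params) body) args): bind arguments, evaluate body
            params, body = op[1], op[2]
            args = [evaluate(arg, env) for arg in x[1:]]
            return evaluate(body, {**env, **dict(zip(params, args))})
        raise ValueError("unknown form: %r" % x)

    program = "((lambda (x y) (cons x (cdr y))) (quote a) (quote (b c d)))"
    print(evaluate(parse(tokenize(program)), {}))  # prints ['a', 'c', 'd']

The point of the exercise is the one made in the text: nothing in these definitions depends on any particular hardware. The same machinery can be run by an attentive reader with a pencil, which is precisely the sense in which Lisp works as well in the act of reading as in digital circuitry.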
The simplicity of Lisp has to do with its straightforward alphabetic nature. Lisp's creators were mathematicians, but Lisp is about symbol manipulation rather than arithmetic calculation. McCarthy used mathematical formalisms as languages, unlike the older FORTRAN, IBM's language for numerical calculation, still used in scientific computing applications. The structure and syntax of Lisp are deceptively simple: everything is expressed in parenthesized lists. Functions such as car provide the mechanism for getting the first item from a list; recursively applying such a function allows one to traverse lists (and lists of lists, to indefinite depths of nesting). Other functions provide means of combining the evaluations of terms and for making conditional statements. And then, almost literally, everything else is created from these primitive building blocks, by layering and creating larger and larger evaluation structures.

The significance of digital computing—that which systems such as McCarthy's Lisp make so clear—is not how computers have been brought to bear on various complex information processing applications (calculating taxes, computing ballistics trajectories, etc.). Nor am I about to claim that the digital revolution brought forth AI and electronic autopoiesis, nor will I allude to any other such Frankensteinian/Promethean narrative. The far greater significance of digital computing is in the use of alphabetic language as a kind of "bifurcation point" (to borrow the language of systems theory), at which a different level of order emerges from the existing substrate. The advent of digital computation marks the point at which the text/machine equation becomes literally and manifestly real. Invoking systems theory in this way implies that the elements of this shift are largely internal to the system; it was not the advent of the semiconductor, or the pocket protector, or any such isolable, external factor that led to this shift. Rather it has more to do with the ongoing configuration, reconfiguration, and 'refactoring' of alphabetic language.

McCarthy's original innovation is described in a conference paper (1960) of thirty-odd pages. In it, there are no circuit diagrams or instructions for constructing electronic devices. Rather, it is a concise work of symbolic logic, describing the elements of a formal notation for describing recursive systems which are, interestingly, capable of describing themselves. McCarthy's work on the language in the late 1950s largely preceded any working implementation:
McCarthy's original innovation is described in a conference paper (1960) of thirty-odd pages. In it, there are no circuit diagrams or instructions for constructing electronic devices. Rather, it is a concise work of symbolic logic, describing the elements of a formal notation for describing recursive systems which are, interestingly, capable of describing themselves. McCarthy's work on the language in the late 1950s largely preceded any working implementation: "I decided to write a paper describing Lisp both as a programming language and as a formalism for doing recursive function theory" (1981). McCarthy was after a realization of Alan Turing's formal theory of computability, and Lisp was a success in this regard; it was a practical implementation of the use of recursive functions as an equivalent to the Turing machine (which, although logically complete, is not a practical system—see Graham 2001). Lisp's legacy is thus not in electronics so much as in logic and formal systems—in language.

The Semiotics of Standardization

It would be trite to say that this innovation had been in the works for some time. I have mentioned Alan Turing, but other significant contributions—such as those of logician George Boole and the famous team of Charles Babbage and Augusta Ada—were required in order for the story to unfold. These three are key to this telling of the story precisely because their contributions predate any workable physical instantiation of the machine/text that their names would come to symbolize. Babbage is particularly interesting in this respect, precisely because his works were not practically realized despite his best efforts. One of the most intriguing subplots in the history of computing is that it was indeed practically impossible, in Babbage's day, to create a calculating machine of the complexity he required, owing to the relative lack of manufacturing sophistication. Specifically, it was not yet possible, in the mid 19th century, to manufacture the thousands of gears required for Babbage's "Difference" and "Analytical" engines with sufficient precision—that is, to sufficiently close tolerances—that such a machine could actually run. What was required was the advent of standardization of gear cutting and manufacturing apparatus, a fact not lost on Babbage, who published significant works in this area. Interestingly, the standardization of gear cutting (and also screw threads) was largely pioneered by Joseph Whitworth, an engineer who had worked with Babbage, but the required precision was ultimately achieved in the 20th century, not the 19th. It has thus become possible—even relatively economical—to create working instances of Babbage's machines (indeed the Science Museum in London, England has done so), given modern-day manufacturing (Doyle 1995).

In considering this side-story, I want to draw the focus again to the symbolic rather than the material. The difference between gear cutting in the 19th century and in the 20th isn't merely that the tolerances are finer; rather, the key is standardization, mass production, and the translation of the artifact from individual incarnation to its status as a commodity. What happens when you standardize production is that you shift the artifact semiotically.
It stops being an isolated signifier of its own, and starts being a neutral component—that is, a "black box"—that can be assembled into larger systems, just like the letters of the alphabet. The glyph itself, like the gear, stops being a thing-in-itself and starts being part of a larger semiotic apparatus. This, according to Havelock (1980), is precisely what happened when the Greeks began to distinguish between consonants and vowels in written language, thereby making a more analytically complete mapping of morphemes to glyphs. The result was that the individual letters became unimportant in comparison to the words, sentences, and paragraphs that were built out of them—this is evident in contrast with, for instance, the letter-oriented focus of the Kabbalistic tradition, in which significant meaning is vested with the individual letters themselves (Drucker 1995, p. 129). This is also, arguably, paralleled in the evolutionary shift from single-celled to multi-celled organisms, in which the activities and identity of individual cells become subsumed in the structure of the larger organism. Standardization of alphabets, of currencies, of machinery, even of living structures, all effect this kind of shift towards (analytically) higher-level assemblages of order and agency.

There is clearly a moral question which emerges here, since we are not merely speaking of dumb objects, but conscious subjects too. Bowker & Star problematize standardization thusly:

We know from a long and gory history of attempts to standardize information systems that standards do not remain standard for very long, and that one person's standard is another's confusion and mess [...] We need a richer vocabulary than that of standardization or formalization with which to characterize the heterogeneity and the procedural nature of information ecologies. (Bowker & Star 1999, p. 293)

Bowker & Star's plea, however, comes at the end of their excellent book on "categorization and its consequences" and not at the beginning, and so we are left with no more than an opening made toward the study of process, articulation, negotiation, and translation. These dynamics are invoked in a clearly political mode, for these are the dynamics of standardization and classification themselves. Compare Donna Haraway's description of the digitization of the genome and the subsequent emergence of the field of bioinformatics:

Yet, something peculiar happened to the stable, family-loving, Mendelian gene when it passed into a database, where it has more in common with LANDSAT photographs, Geographical Information Systems, international seed banks, and the World Bank than with T. H. Morgan's fruitflies at Columbia University in the 1910s or UNESCO's populations of the 1950s. Banking and mapping seems to be the name of the genetic game at an accelerating pace since the 1970s, in the corporatization of biology to make it fit for the New World Order, Inc. (Haraway 1997, p. 244)

What is at issue, it seems to me, is not whether standardization is good or bad (or any similarly framed substantivist argument), but, as Haraway points out: What counts? For whom? At what cost?

SIMULATION AS INTERPRETATION

"To know the world, one must contract it."
- Cesare Pavese, quoted by Alan Kay

The preceding discussion of technology and translation, machine and text, is intended to create a frame for a topic which I will introduce here, but which I want to revisit a number of times in the pages that follow: simulation.

Simulation is a paradigmatic application of information technology, something often hidden by our tendency to instrumental reason, but which the sociology of translation helps to illuminate. I mean this in the sense that simulation can be used as a general motif for viewing and understanding a wide variety of computing applications, from mundane 'productivity applications' to the more stereotypical systems simulations (weather, fluid dynamics, etc.). Alan Kay and Adele Goldberg put it generally and eloquently:

Every message is, in one sense or another, a simulation of some idea. It may be representational or abstract, isolated or in context, static or dynamic. The essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium itself, can be all other media if the embedding and viewing methods are sufficiently well provided. (Kay & Goldberg 1976)

What's interesting and compelling about computing is not the extent to which models, simulations, representations are true, real, accurate, etc., but the extent to which they fit—this is the lesson of Weizenbaum's infamous ELIZA, the uncomfortable thesis of Baudrillard's Precession of Simulacra, and the conclusion of a growing body of literature on virtual reality. It is also, to step back a bit, one of the central dynamics of narrative, especially in the novel and myriad other literary forms. "Simulation is the hermeneutic Other of narratives; the alternative mode of discourse," writes Espen Aarseth (2004). If effect is the important point, then this is by definition an anti-formalist argument. But simulation is not merely reducible to surface and appearances at the expense of the deeper 'reality'—it reflects rather the deeper aspect of Baudrillard's "simulacra"—precisely where we live in today's world. But where Baudrillard's take is bitter and ironic, I have always been fond of Paul Ricoeur's hermeneutic version:

Ultimately, what I appropriate is a proposed world. The latter is not behind the text, as a hidden intention would be, but in front of it, as that which the work unfolds, discovers, reveals. Henceforth, to understand is to understand oneself in front of the text. (Ricoeur 1991b, p. 88)

Madeleine Grumet similarly works with Ricoeur's framing in her discussion of theatre, the "enactment of possible worlds":

...performed in a middle space that is owned by neither author nor reader. Constructed from their experience and dreams, this liminal space cannot be reduced to the specifications of either the author's or the reader's world. [...] Performance simultaneously confirms and undermines the text. [...] Mimesis tumbles into transformation, and meaning, taken from the text, rescued from the underworld of negotiation, becomes the very ground of action. (Grumet 1988, p. 149)

Rather than some sinister command-and-control reductivism, this is the space of simulation.
If we go further, and look at simulation—model building—as a hallmark of science and the modern world, Ricoeur's stance presents itself as a refreshing alternative to the two horns of objectivism and relativism (Bernstein 1983). We needn't get ourselves tied in knots about our access to 'reality,' since our business—in science, technology, literature, politics—is fundamentally about invention and not discovery; and we can avoid the spectre of relativism, because it is the world which we are building, not just arbitrary constructions.10 "The point is to cast our lot for some ways of life and not others," Haraway admonishes. Thus, it matters greatly which constructions we choose; the process of creating them and deciding whether or not to embrace them is fundamentally political (Latour 1999). It is not so much dominated by issues of power as it is the very crucible wherein power is exercised and contested.

Simulation—that is, model building—is essentially hermeneutic. Its process is that of the hermeneutic circle, the merging of the horizons of modeller and world, the part and the whole, and is determined by R. G. Collingwood's "logic of question and answer" (Gadamer 1975/1999, p. 362ff). The model—which is constructed, and therefore concrete—poses the questions to which the 'world'—ultimately inaccessible and therefore uncomfortably 'abstract'—is the answer. It is not, thus, analytic or reductive, nor is it definitive; rather it is, ideally at least, dialogic. Writing of the dynamic between simulation and narrative in games, Espen Aarseth says,

If you want to understand a phenomenon, it is not enough to be a good storyteller, you need to understand how the parts work together, and the best way to do that is to build a simulation. Through the hermeneutic circle of simulation/construction, testing, modification, more testing, and so forth, the model is moved closer to the simulated phenomenon. (Aarseth 2004)

This is decidedly not to say that the model is ever actually complete, but that our "foreconception of completeness" (Gadamer 1975/1999, p. 370) is a necessary precondition of participation and engagement. Once again, I want to avoid the vertical correspondence logic of classical structuralism (signifier/signified; model/reality) and instead pursue a vision wherein the elaboration of the model is its own end; it succeeds or fails not by being more or less faithful to 'reality,' but by being better connected (to invoke Latour and Callon's network model once again). There is undoubtedly enormous danger lurking in the seduction of the model, and we undoubtedly forget again and again that the map is not the terrain, becoming literalists once again. That this is a danger does not imply that we should avoid making models, though, just that we must strive to avoid taking our fetishes and "factishes" (Latour 1999) too literally.

10. Latour's "The Promises of Constructivism" (2003) makes a similarly positive argument.

Writing on the mapping of the genome, Haraway notes:

Geographical maps can, but need not, be fetishes in the sense of appearing to be nontropic, metaphor-free representations, more or less accurate, of previously existing, "real" properties of a world that are waiting patiently to be plotted. Instead, maps are models of worlds crafted through and for specific practices of intervening and particular ways of life....
Fetishized maps appear to be about things-in-themselves; nonfetishized maps index cartographies of struggle or, more broadly, cartographies of noninnocent practice, where everything does not have to be a struggle. (Haraway 1997, pp. 136-137)

It is important to remember that Haraway's argument is not against cartography—it is against the temptation to think that we and our constructions are somehow pure or innocent. There is no shortage of literature warning of the dangers of simulation; Kay himself wrote that "as with language, the computer user has a strong motivation to emphasize the similarity between simulation and experience and to ignore the great distances that symbols impose between models and the real world" (Kay 1977, p. 135). But to contrast simulation with something like "local knowledge," as Bowers (2000) does, is to badly miss the point and to essentialize (and caricature) both simulation and local knowledge. Local knowledge is mediated knowledge too. "Situated" knowledge is nothing if not mediated. We will return to simulation, at length, later.

THE ETHICS OF TRANSLATION

We are responsible for boundaries; we are they.
- Haraway, Simians, Cyborgs, and Women

The new media and technologies by which we amplify and extend ourselves constitute huge collective surgery carried out on the social body with complete disregard for antiseptics.
- McLuhan, Understanding Media

The foundation of my argument is that human culture is fundamentally and essentially technological—that is, technologically mediated. It makes no sense to attempt to isolate what is 'human' from what is 'technical.' As Latour has taken great pains to point out (esp. 1993), the attempt to isolate and purify these—that is, to come up with a society sans technology or a technology sans society—has been at best fruitless. And yet it is a powerful temptation, as we have inherited a weighty intellectual tradition devoted to just such a process of purification. The words we use readily betray it: culture, society, technique. I begin to think that it rarely makes sense to talk of culture—or society—at all; Latour's refocus on the "collective" of humans and nonhumans is really the only sane construction.

Back to the Tower of Babel

Given the core notion of technology as translation—as delegation, as in one of Latour's rich metaphorical turns—the means of translating or transforming the world is, in a trivial sense, about power. But if the fundamental transformation is on the symbolic level rather than the physical, then this is even more important, for we are speaking of the power to shape people's reality, and not just their physical landscape. Technology is thus the medium of power/knowledge par excellence, for it simultaneously establishes and enforces power/knowledge structures (by very definition) and also provides the means for their subversion. This is politics. Haraway, again, has said it most eloquently:

In short, technoscience is about worldly, materialized, signifying, and significant power. That power is more, less, and other than reduction, commodification, resourcing, determinism, or any other of the scolding words that much critical theory would force on the practitioners of science studies, including cyborg anthropologists. (Haraway 1997, p. 51)
Again, the admonition is to stop worrying about purification of essences, about what the world would be like without the polluting effects of technology, pining for an unmediated reality in some nostalgic reminiscence of a simpler age. Asymmetry in theory and practice is the order of the day—asymmetry of ways of seeing and drawing the world, asymmetry of access to ways and means, asymmetry of expression. But this isn't to be avoided or solved or redeemed in Gordian-knot fashion. Rather, it is to be confronted. Something is "lost in translation" because translation is always interpretation. The only remedy is further interpretation, the ongoingness of the dialogue. So it is with all things political.

Our responsibility to technology

Located in the belly of the monster, I find the discourses of natural harmony, the nonalien, and purity unsalvageable for understanding our genealogy in the New World Order, Inc. Like it or not, I was born kin to Pu[239] and to transgenic, transspecific, and transported creatures of all kinds; that is the family for which and to whom my people are accountable.
- Donna Haraway, Modest_Witness

Sidebar: When Everything Looks Like a Nail
The danger of translations can be simply recognized in the old chestnut: when all you have is a hammer, everything looks like a nail. McLuhan's saying, "we shape our tools and thereafter, our tools shape us," is a more formal articulation of this basic point, which is interestingly absent from Heidegger, even though it is a commonplace for us. Heidegger seems genuinely worried that everything in the world (including human beings) has begun to look like nails, but he downplays—to the peril of his argument—the hammer's generative role in this. Larry Wall, developer of the immensely popular open-source programming language Perl (called the "duct tape of the Internet"), made the following insightful comment:

You've all heard the saying: if all you have is a hammer, everything starts to look like a nail. That's actually a Modernistic saying. The postmodern version is: If all you have is duct tape, everything starts to look like a duct. Right. When's the last time you used duct tape on a duct? (Wall 1999)

The challenge, with respect to our fundamental relationship to technology and technological change, and the asymmetries which result, is one of literacies. By "literacy" I do not trivially mean the ability to read and write, but rather the ability to enter into and participate actively in ongoing discourse. If, as I have argued, technology is about the symbolic realm at least as much as the physical, then the importance of literacy is not confined to the written word. It is the matter of participating in the discourses which surround us; discourses of power. Technologies—things—are as much a part of discourse as are words. The task before us, in the early decades of a digitally mediated society, is to sort out the significance of a digitally mediated discourse, and what the implications are for actual practice.

Consider the stakes.
Without print literacy—in the sense in which we readily acknowledge it today as being fundamental to democracy and empowerment and social change—print technology would be nothing but an instrument of oppression. The written word itself, without the concept of a widespread (if not mass) print literacy, represents a terribly asymmetrical technology of domination over the world. But it is this literacy, in the broad, distributed, bottom-up sense of the word, that saves the written word from being a truly oppressive development for humanity. Further, that print literacy is instead taken as the instrument—and the symbol—of liberation speaks to a dynamic not accessible from an examination of the material or cognitive features of reading; rather, literacy as an agent of emancipation—which then translates the written word into an agent of emancipation—operates on the larger socio-cultural level. But at this point in the Western world, "technological literacies" are nowhere near as widely spread in practice as their print-oriented counterparts, nor are they held up symbolically as agents of emancipation and democratic participation.

Fullan and Hargreaves (1996), writing of school reform, say, "Teaching is not just a technical business. It is a moral one too" (p. 18). Technology is not just a technical business either, and the longer we collectively pretend that it is, as we are wont to do as long as we remain inside our instrumentalist frame, the less we are able to grapple with it, politically and morally.

But as instrumental logic blinds us to the political and moral implications of our mediated practices, so too the rhetoric of technological determinism provides a crippling apparatus with which to reach a critical awareness of technology. In the shadow of our yearning for a pure, unmediated past is the parallel idea that technological mediation is leading us down a tragic path 'no longer' of our own choosing. Of late, the tendency is to see digital media as the handmaiden of globalization (e.g., Menzies 1999). Here the argument is that the rendering (commodification) of all human activity into an easily translatable digital currency is at the expense of "local" culture, of the less privileged, and in the interests only of the corporate sector. It is an easy argument to make, and the argument borrows much from the connection between literacy and colonialism (e.g., Willinsky 1998). However, where the latter argument succeeds is in a far more sophisticated appreciation of the detail: literacy is certainly an agent of homogenization and indeed domination, but it is also, in countless cases, an agent of differentiation and emancipation (e.g., New London Group 1996). Note that this is not the same thing as resistance in the sense of resisting the onslaught of print media or computerization. "Resistance" is a troublesome term, because it suggests an "either-or," a one-dimensional problem (as in G. W. Bush's "You are either with us, or you are against us"). Re-shaping, re-direction, and re-figuration, as in Haraway's rich repertoire, are more apt.
I suggest that print or alphabetic literacy is not inherently an instrument of domination precisely because the alphabet is open; in contrast, where state power has controlled who has access to reading and writing, it is not open, and for this very reason, modern democracies enshrine the institutions of mass literacy: public education, freedom of the press, and so on—lest you think my argument is for the perfection of these institutions, I mean here to point to these as shared ideals. But even the curious logic of liberal capitalism seems to realize (on odd days, at least) that in order to be effective, languages and communications must be open—private languages cannot thrive. And in openness is the possibility of refiguration. A student of mine, Bob Mercer, made this point about the unreflexive conceit of the "end of history":

If consumption in an industrial society is of industrial goods—cars, refrigerators, televisions, computers—what then is consumed in an information society? Information, surely, and some of that information takes the form of ideas. And some of those ideas in turn challenge the consumer society. (Bob Mercer, 2003, "Blogging at the End of History")

Technology, like language, is not the instrument of power; it is the crucible of power—the very setting where the contest for one way of life or another takes place. Feenberg's framing of technology as a site of struggle means that the debate is ongoing; it is not an argument to be won or lost, but one to be continually engaged. In putting the emphasis on technology as translation—of representations, of agency, of apparent worlds—I hope to open up this theoretical arena to what follows, which is an examination of a very particular and very ambitious project to develop a technological infrastructure capable of engagement with high-level philosophical, educational, and political themes.

Chapter 4: Alan Kay's Educational Vision

Alan Kay's project rests upon a number of substantial philosophical and theoretical foundations. An examination of these will be helpful in the analysis of the trajectory of the Dynabook project over the past three decades. The following treatment draws less from the 'primary' documents of the early 1970s than from Kay's own reflection and exegesis—especially that of recent years, which is substantial and which reveals a definite historical self-awareness in Kay's work.

Kay's own sense of his place in history is a theme which emerges repeatedly in his writings, from early grand ambitions of "paradigm shifts" to his more studied reflections on a fledgling digital age in comparison with the advent of print in Europe four or five centuries before. Throughout, Kay establishes his subject position in a decidedly Romantic mode—to briefly invoke Hayden White's schema (1973) of historical emplotment—but in the first person and of the first order. I would like here to present an overview of each of the major themes elaborated in Kay's writings and talks. These are, in brief:

1. The vision of computers for children, and the early and foundational influence of Seymour Papert's innovative research with the Logo programming language;
2. Systems design philosophy, drawing on insights borrowed from cell biology and American political history;
3. The Smalltalk language and the object-oriented paradigm in computer science, Kay's most important and lasting technical contribution;
4. The notion that Doing with Images makes Symbols, a phrase which embodies an application of the developmental psychology of Jerome Bruner;
5. Narrative, argumentation, and systems thinking as different modalities for expressing truths about the world;
6. A particular conception of literacy that broadly includes technological mediation as a cultural and historical force.

COMPUTERS, CHILDREN, AND POWERFUL IDEAS: THE FOUNDATIONAL INFLUENCE OF PAPERT

The image of children's meaningful interaction with computers evokes in the first place the figure of MIT mathematician and computer scientist Seymour Papert, his work with the Logo programming language for children, and his influential writings on the role of computing in education. Papert's research began in the late 1960s with Wally Feurzeig, Danny Bobrow, and Cynthia Solomon at the Massachusetts Institute of Technology and private contractor BBN1 (Chakraborty et al. 1999). Papert was one of the founders of the Artificial Intelligence Lab at MIT, but had previously studied children's epistemology with Jean Piaget in Geneva—though his pedagogy drew heavily on John Dewey and Maria Montessori as well. Papert's idea that computers could be used to help children gain an embodied or concrete understanding of what is more commonly taken as a symbolic, formal mode of thought—mathematics—draws directly from Piaget's developmental psychology. This basic idea was a major current—perhaps the major current—in educational computing by the early 1980s, and Papert's Logo programming language embodied this particular philosophy and method. Papert and Alan Kay had an early influence on one another, and their ideas have been entwined for over three decades, though Papert is clearly the more famous of the pair. Since the Papert side of the story has been interpreted and reinterpreted by generations of scholars already,2 I will approach the story from Kay's perspective.

Alan Kay visited Seymour Papert and his team in 1968; the meeting was reportedly a life-changing one for him. Papert, Cynthia Solomon, and Wally Feurzeig had begun working with children in schools in Lexington, MA with the Logo language. At the time, Logo was used to control a robotic "turtle"—a half-metre plexiglass dome on three wheels that drew lines as it crawled around big pieces of butcher paper. Papert's concern was with "teaching children thinking" (1972/1980b)—that is, "really thinking about what they do" (p. 161).

1. Cambridge, MA-based BBN (Bolt, Beranek, & Newman) is wrapped up intimately in the history of computing. Most famously, the company managed the implementation of the original ARPAnet in 1969.
2. Papert's own writings (1980a; 1972/1980b; 1987; 1991; 1992) are the core texts of the Logo philosophy, a literature fleshed out extensively by his colleagues and graduate students (e.g., Turkle 1984; Solomon 1986; Harel & Papert 1991; and many others). A mid-1980s "Logo backlash" appeared, summarized in the edited collections by Sloan (1985) and Pea & Sheingold (1987), though substantially re-addressed in Noss & Hoyles 1996, as well as Papert's own later writings. Latter-day reflection and criticism can be found in Chakraborty et al. 1999; diSessa 2000; and Aglianos 2001.
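For readers who have never seen it, the flavour of this turtle work is easy to convey. The sketch below is mine, written with Python's standard-library turtle module (itself a self-declared descendant of Logo) rather than in 1960s Logo; it is the kind of program the Lexington children wrote, give or take syntax: walk forward, turn, repeat, and a figure appears.

    # Drawing a square, Logo-style: the turtle crawls forward,
    # leaving a line behind it, and turns; four repetitions close
    # the figure. Logo equivalent: repeat 4 [forward 100 right 90]

    import turtle

    t = turtle.Turtle()
    for _ in range(4):
        t.forward(100)        # move ahead, drawing a line
        t.right(90)           # turn 90 degrees clockwise

    turtle.done()             # keep the window open to view the square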
Papert's project was to make mathematics real for children, instead of a difficult formalism. In 1972, he wrote:

Mathematics is the most extreme example. Most children never see the point of the formal use of language. They certainly never had the experience of making their own formalism adapted to a particular task. Yet anyone who works with a computer does this all the time. (p. 162)

The process of working systematically with Logo to make the turtle draw geometric patterns, said Papert, had the effect of getting children to "think mathematically." They were thus able to get inside—to physically embody, as it were—the mathematical and geometric constructs.3 Alan Kay was impressed: "Here were children doing real programming," he remembered, and this "finally hit me with the destiny of what personal computing really was going to be" (Kay 1996a, p. 523 [italics added]). The insight that struck Kay was that children would be the critical users of personal computers when they ultimately became available; therefore children were who they should be designing for.

3. Papert specifically wrote of "the development of an ego-syntonic mathematics, indeed, of a 'body-syntonic' mathematics" (1980a, p. 205). "Like the child, it is always at the center of its universe," wrote Kay (1990, p. 194) of the Logo turtle's local co-ordinate system (as opposed to a Cartesian one).

In his 2002 lecture, The Computer Revolution Hasn't Happened Yet, Kay noted:

Papert had kids doing math—real math that you use in science, for ten-year-olds, via playing, Montessori-style. This was just the best idea anybody had ever had for computing. Papert's thing had real zen behind it. Not just math and science, but the zen of math and science. You don't automatically get zen by learning how to program in Logo, but Papert correctly saw that one of the greatest vehicles anybody ever came up with for getting enlightened was the computer. (Kay 2002a)

Kay's work in the late 1960s had been dominated by his graduate research project on The Reactive Engine (Kay 1968), an embryonic personal computer called the "FLEX Machine," which attempted to generalize the master and instance architecture of Ivan Sutherland's computer graphics research from the early 1960s,4 and to elaborate a vision of what might be the successor to the dominant computing paradigm of the day: time-sharing terminals connected to a central, large mainframe computer.5

Sidebar: Some Logo Examples
Here is some Logo code for drawing stars with the turtle, and the trail of a discovery:

    repeat 5 [forward 70 right 150]
    repeat 5 [forward 70 right 140]
    repeat 5 [forward 70 right 144]
    repeat 5 [forward 70 right 720/5]

    to fivePointStar
      repeat 5 [forward 70 right 720/5]
    end

    fivePointStar

4. Ivan Sutherland was Kay's graduate supervisor at the University of Utah in the 1960s. Sutherland was famous for his work on SKETCHPAD, the original interactive graphics application (Sutherland 1963).
A major feature of Sketchpad was its capability to treat a complex diagram as a "master," from which "instances" or copies could be instantly created, each one bearing all the features of the original; a change made to the master drawing would be immediately reflected in all instances of it. This is the prototype for the "class-and-instance" architectural model of most object-oriented systems from Kay's Smalltalk forward.

But the FLEX Machine's intended users were adult professionals, and Kay later called its user interface "repellent" (Kay 1987). When Kay was recruited to Xerox PARC by ex-ARPA director Robert Taylor, his first project was the design of a computer he called KiddiKomp, a prototype desktop-based machine based around a 12" Sony Trinitron screen. With his newfound sense of the "real audience" for personal computing, KiddiKomp drew not only on Kay's work on the FLEX Machine, but also on Papert's Logo. Logo's influence on Kay's software designs in the early 1970s is considerable; there have been turtle graphics in nearly every piece of Kay-designed software in the past 30 years (Goldberg & Kay 1976; Gillespie 2002). But Kay's version of computers for kids goes beyond turtle geometry (as does Papert's, despite popular conceptions).

Sidebar (continued): Generalizing by adding a variable n:

    to star :n
      repeat :n [fd 100 rt 720/:n]
    end

    star 5
    star 7
    star 9
    star 8    Hmm... why not an 8-sided star?

5. Time-sharing systems formed much of the context for computing research in the late 1960s, especially within the ARPA project. This notion of several interactive terminals (teletype units; later keyboards and screens) connected to a single, powerful mainframe computer was an enormous shift from the batch-processing model which preceded it; in time-sharing, the computer was shared among its users simultaneously rather than one-at-a-time as in batch processing. Time-sharing systems (and their recognizable terminals) are still very much with us today, especially in point-of-sale systems.

In 1972, Kay spoke of the breadth of his vision:

This new medium will not "save the world" from disaster. Just as with the book, it brings a new set of horizons and a new set of problems. The book did, however, allow centuries of human knowledge to be encapsulated and transmitted to everybody; perhaps an active medium can also convey some of the excitement of thought and creation! (1972, p. 1)

Where some people measure progress in answers-right/test or tests-passed/year, we are more interested in "Sistine-Chapel-Ceilings/Lifetime." (p. 4)

This 1972 paper, entitled "A Personal Computer for Children of All Ages," is Kay's original manifesto for computers in education. It is the first published work outlining his newly oriented project and the ideas underlying it. It begins with a discussion of schools and school reform, but switches quickly to a scenario involving two young kids, Jimmy and Beth, who embody the kind of exploratory, constructivist learning he saw as ideal for the coming century. Jimmy and Beth, working with their "Dynabooks"—lightweight portable computing devices—stumble upon a question about gravity while playing a video game.

This image has been removed because of copyright restrictions.
Figure 4.1: Jimmy and Beth with their Dynabooks
(from Kay 1972).

Jimmy and Beth proceed to consult their teachers, a networked library of documents, and their own simulation models in the pursuit of understanding. The article invokes the psychology of Piaget and Bruner and the educational models of Montessori and Suzuki while also laying out the technical details of how such a device could be constructed, and what would make it ideal for kids.

This image has been removed because of copyright restrictions.
Figure 4.2: Cardboard mockup circa 1971-1972 (from Kay & Goldberg 1976).

Much of Kay's work in the early 1970s was devoted to creating the machine described in the 1972 article. By 1976, software prototypes had been in use with groups of children "of all ages" from the Palo Alto area for three or four years. In a 1976 report called Personal Dynamic Media, Kay and PARC colleague Adele Goldberg wrote:

Aside from the potential future for education implied by getting kids to program, we realized that many of the problems involved in the design of a metamedium for creative thought, particularly those having to do with expressive communication, were brought strongly into focus when children down to the age of six were seriously considered as users.

We felt then that the next time we tried to design a personal metamedium it should be done with children strongly in mind. We decided to expand our horizons to include studies into the nature of the learning and creative processes, visual and auditory perception, how to teach thinking, and how to show children the challenges and excitement of doing art and science. (Kay & Goldberg 1976, p. 9)

The scope implied by putting children at the centre of this "metamedium" drove the team's thinking to both enormous generalities and minute detail. In the early 1990s, Kay reflected on their evolving vision of personal computing:

Not a personal dynamic vehicle, as in Engelbart's metaphor opposed to the IBM "railroads," but something much more profound: a personal dynamic medium. With a vehicle one could wait until high school and give "drivers ed," but if it was a medium, it had to extend to the world of childhood. (Kay 1996a, p. 523)

Kay's focus on children, once established, has never wavered; despite numerous changes in direction—both within the Learning Research Group at Xerox PARC and in subsequent research and corporate contexts for his work—Kay has continued to design for children. But, unlike the clichés of children's software we have come to know—bright primary colours and cutesy icons—Kay's team took this task seriously enough to question first principles:

Early on, this led to a 90-degree rotation of the purpose of the user interface from "access to functionality" to "environment in which users learn by doing." This new stance could now respond to the echos of Montessori and Dewey, particularly the former, and got me, on rereading Jerome Bruner, to think beyond the children's curriculum to a "curriculum of user interface." The particular aim of LRG was to find the equivalent of writing—that is, learning and thinking by doing in a medium—our new "pocket universe."
(1996, p. 552)

Kay's ambition evidently ran on a pretty grand scale: not content simply to find educational applications for computers, Kay and his team self-consciously set out to redefine computing itself in educational terms. A child prodigy turned computer visionary, Kay was in his element; and if any environment were to prove fertile for such a wide-ranging undertaking it was the generously funded Xerox PARC of the 1970s, a research centre hosting the cream of American computer science and with little or no clear corporate mandate from Xerox itself. Kay's ego and passion had opportunity to stretch out in an unparalleled way. "[W]e were actually trying for a qualitative shift in belief structures," he wrote in 1996, "—a new Kuhnian paradigm in the same spirit as the invention of the printing press—and thus took highly extreme positions which almost forced these new styles to be invented" (p. 511). Kay's enthusiasm and unbridled romanticism are captured in his best-known quotation: "The best way to predict the future is to invent it." Kay's future included children, personal computers, and a new kind of literacy.

"LATE BINDING" AND SYSTEMS DESIGN

But how would you do it? How would you set about to "invent" personal computing, almost from scratch, with very scant pre-existing work6 to draw on in terms of working models or even research? A couple of possible strategies come to mind:

• You could survey your potential audience of 'users' to gather specific information about their needs, and then build a system so as to meet these needs most closely.
• Alternatively, you could develop a comprehensive vision of what personal computing 'should' be, develop a system according to these guidelines, and then work hard to train people to use it.

Neither of these approaches is foreign to us today; the first is known as "user-centred design," taking pains to compile lists of requirements and following up with extensive testing to ensure that the system indeed works the way the users want it to. The second, more hubristic method is unfortunately all too common, with the boundary between designers and users inscribed deeper with every painful training session. Both approaches assume that it is possible to know in advance what the system will be like, either by drawing from users or by innate knowledge.

There are no doubt elements of both of these in the early Dynabook project, but the more profound—and influential—note Kay struck in his early approach to inventing personal computing was one of humility. If the target audience for personal computing was adults—business people, for instance, as it was with the FLEX Machine—then a user-centered design approach or even brute-force training might have been plausible; Kay has a lot to say about prior examples of systems designed for particular user communities in his 1996 paper.

6. I am, of course, stretching this for rhetorical purposes; Kay repeatedly points to the important precursors to his work: McCarthy's Lisp; Sutherland's Sketchpad; the JOSS system; Logo; RAND's Grail tablet interface; Engelbart's augmentation project. But while these examples all contribute importantly to the idea of "personal" computing, none can claim to be a total personal computing environment in the sense that Kay's project aspired to be.
But since his vision of personal computing put children at the centre, and because it was to be fundamentally transformative—in the same sense that the printing revolution was transformative in early modern Europe (Kay 1996a, p. 523; 2000b)—the starting point had to be one of acknowledged ignorance. How does one build a system that can grow into something yet unforeseen by either its users or its designers? Kay took three examples, or metaphors, for guidance.

The first metaphor was cell biology, which Kay had studied as an undergraduate in the early 1960s (on the heels of some of the most significant advances in the field and the identification of DNA in the 1950s):

My biology major had focused on both cell metabolism and larger scale morphogenesis with its notions of simple mechanisms controlling complex processes and one kind of building block being able to differentiate into all needed building blocks. (1996a, p. 516)

Instead of trying to build the complex artifacts from scratch—like trying to build living things cell by cell—many of the most important projects built a kernel that could grow the artifact as new knowledge was gained—that is: get one cell's DNA in good shape and let it help grow the whole system. (2004a)

Kay saw that in biological systems, there is a different order of mechanism at work than in the Newtonian model of the universe. "There's another kind of machinery that's very very different than the clockwork kind," he reflected (1996b). This insight is no doubt part of a large-scale intellectual shift evident in the 20th century (and ongoing) toward a "systems view of the world" (Laszlo 1972; Prigogine & Stengers 1985). Kay points out several facts: the longest any atom resides in your body is 7 years; blood cells live less than 100 days; most of your body is less than two weeks old. We are composed of 100 trillion cells, 60 billion large information molecules, in constant motion, with pattern matches occurring every microsecond or so.
We are not material so much as patterns: we stay alive by rebuilding the pattern, discarding the disarrayed stuff (2002a). This is a kind of mechanism that is qualitatively different from any technology humankind has come up with, one capable of amazing things:

A baby is able to get six inches longer about ten times in its life and you don't have to take it down for maintenance. [...] Imagine trying to make a 747 six inches longer! (1996b)

Kay's second guiding example is 'man-made': the United States Constitution,

...because the people who designed it realized that it would be very difficult to write laws for how people should live 50 years from their time and place, so they wisely made most of the Constitution a way of dealing with error situations and a way of keeping bad things from propagating. (Kay 1999)

The Constitution itself is a set of principles for building a very complex dynamic structure that should last for centuries whose "parts" (that is, us!) come and go and are only somewhat intercooperative. (Kay 1995)

In his address to the History of Programming Languages II conference (1996a), Kay told his audience of computer scientists that the "best book on complex systems design" is Hamilton, Madison, and Jay's The Federalist Papers, which comprise the extended arguments supporting the (surprisingly concise) US Constitution's form and method, and which elaborate the whole idea of checks and balances, divisions and mixtures of powers, and mechanisms for amendment that are arguably the US Constitution's most important contributions (Hamilton et al. 1788/1987). Nowhere is the spirit of liberal humanism more clearly evident in the history of personal computing than here.

The third example is something much nearer to Kay's own experience: the development of the early ARPAnet, foundation of today's Internet. Kay witnessed first-hand, as a graduate student at Utah in the 1960s, the design and decision-making processes behind it. Today, the Internet's status as a unique technological accomplishment is overshadowed by its immediate and more practical impacts in most of our lives, but, as Kay points out,

the Internet has recycled all of its atoms and all of its bits probably twice now without being stopped—it's the only human artifact that has done that. (2002a)

What all this has to do with creating personal computing in the 1970s is this: one can't possibly anticipate in advance what the system is going to evolve into. Therefore, one's responsibility at the outset is to build in an open-ended fashion, not to commit oneself prematurely or to foreclose on possibilities by driving to a predefined end-goal; rather, to build for growth and for evolution. Note how close this is to the ARPA management strategy: to fund directions, not goals. Kay and Goldberg make the practical case thusly:

The total range of possible users is so great that any attempt to specifically anticipate their needs in the design of the Dynabook would end in a disastrous feature-laden hodgepodge which would not be really suitable for anyone.
We have taken an entirely different approach to this problem, one which involves the notion of providing many degrees of freedom and a way for any user to communicate his or her own wishes for a specific ability.

Some mass items, such as cars and television sets, attempt to anticipate and provide for a variety of applications in a fairly inflexible way; those who wish to do something different will have to put in considerable effort. Other items, such as paper and clay, offer many dimensions of possibility and high resolution; these can be used in an unanticipated way by many, though tools need to be made or obtained to stir some of the medium's possibilities while constraining others. We would like the Dynabook to have the flexibility and generality of this second kind of item, combined with tools which have the power of the first kind. Thus a great deal of effort has been put into providing both endless possibilities and easy tool-making through a new medium for communication called Smalltalk. (Kay & Goldberg 1976, pp. 7-8 [italics added])

In computer science, there is a term which refers to this general idea, and it has become one of Kay's watchwords: late binding. Late binding, in the strict sense, means delaying the association of named entities in a program with actual computations (that is, with runtime objects) until the last minute, for the sake of the dynamic interpretation of the program and its environment. Taken to extremes, the opposite concept would be "hard wired," in which there is no scope for timely interpretation of contexts. Note that most popular bogeyman images of computers correspond to the latter. Kay's usage of "late binding" is much broader, extending to the very architecture of the system being fluid:

[late binding] has some deeper and more profound properties that include abilities to actually change both the structure and metastructure of the language itself. Thus an important new idea can be assimilated into the constantly evolving process that is the system. Another aspect of late-binding is the ability to change one's mind about already instantiated structures that are already doing work. These can be changed automatically on the fly without harming the work they are already doing. (Kay 2003a)

This image has been removed because of copyright restrictions.
Figure 4.3: Cartoon by Ted Kaehler, from Goldberg & Robson (1983).

How different is this from the world we've come to inhabit? Our systems are a source of frustration specifically because they are not fluid. Yet this fluidity was the cornerstone of Kay's strategy for building a personal computing paradigm for children. Implicated in this issue is the dynamic of means and ends and the rhetoric of instrumental rationality. Kay notes that adults, and businesses in particular, are especially prone to instrumental rationality—to the "judgement of any tool or idea solely in terms of the current goal structure of that person." The antidote, then: "the way to get ahead is to think about kids, not business. [...] Adults have way too much context—the enemy of real qualitative improvement" (Kay 2003a).
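What the narrow, technical sense of late binding permits can be suggested with a small sketch, in Python only because it is convenient; Python is itself late-bound in that method lookup happens at each call rather than at compile time. The class and method names here are hypothetical, invented for the illustration; Smalltalk carries the principle much further, down into the system's own structure.

    # Changing "already instantiated structures that are already
    # doing work": because each message send looks up its method at
    # call time, redefining behaviour on the class takes effect
    # immediately for every existing instance.

    class Turtle:
        def __init__(self):
            self.heading = 0
        def turn(self, degrees):
            self.heading = (self.heading + degrees) % 360

    t1, t2 = Turtle(), Turtle()    # live objects, mid-task
    t1.turn(90)

    def turn_verbose(self, degrees):
        self.heading = (self.heading + degrees) % 360
        print("now heading", self.heading)

    Turtle.turn = turn_verbose     # change the class on the fly
    t2.turn(45)                    # prints: now heading 45
    t1.turn(45)                    # prints: now heading 135

The point to notice is the last two lines: objects created before the change take on the new behaviour without being stopped, rebuilt, or, in Kay's phrase, "taken down for maintenance."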
By thinking of children as the target audience, and setting out to create a system with growth and evolution as its primary feature, Alan Kay and the Learning Research Group at Xerox PARC began to build their vision of personal computing.

SMALLTALK—"A NEW MEDIUM FOR COMMUNICATIONS"

Smalltalk is without a doubt the most lasting of all of Alan Kay's technical contributions. Considering that a great number of now-ubiquitous features are attributed to him—from overlapping windows and pull-down menus to laptop and notebook computers—this is no small statement. Smalltalk can be seen as the still-evolving embodiment of a deep vision of computing, a vision which has only partly been realized. Smalltalk is at once the core of Kay's educational vision and the core of his contribution to 'serious' computer science. It is also the crucible within which all the other inventions that make up personal computing came to their earliest fruition.

Sidebar: Research Notes from Viewpoints Research Institute
[Field notes] from my trip to Viewpoints Research Institute (VPRI) in Glendale, CA, in April 2004. VPRI, a nonprofit organization, is the current home for Alan Kay's team and Kim Rose [...] Dan Ingalls [...] of the original Smalltalk [...] is a "late binding" guy [...] coming to visit tomorrow, something we just found out today [...] Alan [is] late binding in both the traditional [sense and ...] Having spent several hours sifting [through] documents, many of which are [...] lectures and talks Alan has given over the years [...] Kim defers to Alan, who [...] me to take copies of whatever I like but asks that I not directly quote anything marked "draft." But everything is marked "draft" [...] Alan acknowledges as much.

Just what Smalltalk is is difficult to capture in just a few words. It is, strictly speaking, a computer programming language. But more importantly, it is the embodiment of a self-consciously defined paradigm of computing (Kay repeatedly called it a "communications medium" rather than a programming language), one with profound historical implications for computer science and current software engineering practices, and one with even more profound—as yet largely unrealized—implications for literacy in a world interconnected by digital networks. Smalltalk has a substantial claim to being the original language for "object-oriented programming" (OOP)7, which is arguably the dominant professional programming paradigm today.

Smalltalk's significance to personal computing, however, went right over most people's heads; when Apple Computer's Steve Jobs paid his famous visit to Xerox PARC in 1979, according to the unofficial "origin myth" of the Macintosh, he was shown three things: the graphical, overlapping-window interface; networked personal computers; and Smalltalk.

And they showed me really three things. But I was so blinded by the first one I didn't even really see the other two. (from Cringely 1996: "Triumph of the Nerds, Part 3," PBS)

In the mid 1980s, Apple began to sell computers with graphical user interfaces.
By the mid 1990s, networked personal computers were becoming commonplace. But Smalltalk and the paradigm it represents remain a behind-the-scenes component of modern computing, despite Kay's ambitions for the Dynabook. Far from just a software engineering tool, Smalltalk was to be "the exemplar of the new computing, in part, because we were actually trying for a qualitative shift in belief structures—a new Kuhnian paradigm in the same spirit as the invention of the printing press" (Kay 1996a, p. 511).

Objects and messages

The Smalltalk language more or less defined the "object-oriented paradigm," as it was the first language system to fully embrace the idea of message-passing objects as the basis for software.

New ideas go through stages of acceptance, both from within and without. From within, the sequence moves from "barely seeing" a pattern several times, then noting it but not perceiving its "cosmic" significance, then using it operationally in several areas; then comes a "grand rotation" in which the pattern becomes the center of a new way of thinking, and finally, it turns into the same kind of inflexible religion that it originally broke away from. (Kay 1996a, p. 514)

7. Purists insist that Ole-Johan Dahl and Kristen Nygaard's Simula language, dating from the mid 1960s, is the original OOP language. However, Simula was an attempt to add object-oriented ideas to an existing procedural language (Algol), while Smalltalk was designed from the ground up around the idea of objects and message-passing.

As a new graduate student at the University of Utah in 1966, Kay was exposed to two systems that carried the seeds of object-oriented thinking: the first was Ivan Sutherland's Sketchpad, the first interactive drawing system; the second was Simula, a Norwegian adaptation of the Algol programming language for creating simulations. Both systems employed a "master" and "instance" architecture, with instance-objects inheriting both attributes and behaviour from the master-objects they are derived from. For Kay, this idea resonated with a number of other ideas he had encountered:

This was the big hit, and I have not been the same since. I think the reason the hit had such impact was that I had seen the idea enough times in enough different forms that the final recognition was in such general terms to have the quality of an epiphany. My math major had centered on abstract algebras with their few operations applying to many structures. My biology major had focused on both cell metabolism and larger scale morphogenesis with its notions of simple mechanisms controlling complex processes and one kind of building block being able to differentiate into all needed building blocks. The 220 file system, the B5000, Sketchpad, and finally Simula, all used the same idea for different purposes. Bob Barton, the main designer of the B5000 and a professor at Utah, had said in one of his talks a few days earlier, "The basic principle of recursive design is to make the parts have the same power as the whole." For the first time I thought of the whole as the entire computer and wondered why anyone would want to divide it up into weaker things called data structures and procedures. Why not divide it up into little computers, as time sharing was starting to? But not in dozens. Why not thousands of them, each simulating a useful structure? (p. 516)
Kay's graduate work was an exercise in fleshing out this idea—which ultimately came to be known as object orientation—in a workable system. Following his meeting with Seymour Papert and seeing a possible future with millions of personal computers run by children, Kay's work at the new Xerox PARC focused on the design of an extensible, late-binding system based on objects and message-passing. Smalltalk grew out of Kay's early work on the FLEX Machine, and began to be fleshed out in the "Dynabook" vision. As a language and computing environment, Smalltalk would share much with Logo: it would be straightforward and simple and allow a user to define her own language structures. But Smalltalk was founded on a different set of principles than Logo (which was, historically, "Lisp for kids"). Smalltalk's promise was that objects would make computing "child's play."

The Design of Smalltalk

Kay and the Learning Research Group at PARC set about trying to identify just what Smalltalk's founding principles properly were, "during this state of grace (before any workable implementation) ... trying to understand what 'beautiful' might mean with reference to object-oriented design" (1996a, p. 529). Smalltalk went from pencil-and-paper to working implementation by 1972, by which time Kay had distilled his idea into six foundational premises:

• Everything is an object.
• Objects communicate by sending and receiving messages (in terms of objects).
• Objects have their own memory (in terms of objects).
• Every object is an instance of a class (which must be an object).8
• The class holds the shared behaviour for its instances (in the form of objects in a program list).
• To evaluate a program list, control is passed to the first object and the remainder is treated as its message. (Kay 1996a, p. 534; see also Schoch 1979, p. 64)

Sidebar: Smalltalk's LISP Legacy

Logo was a direct derivative of LISP. While the syntax of the two languages differs—Logo was designed to be simple for young children to pick up quickly—the fundamental functional architecture is the same. Kay had enormous admiration for Lisp, which he got to know intimately during a brief stint at the Stanford AI Lab (SAIL) in 1969-1970, where John McCarthy had been based. Kay "could hardly believe how beautiful and wonderful the idea of LISP was," but had serious criticisms of the actual language, which he felt compromised the fundamental idea at the language's core (Kay 1996a, p. 525). This critique was part of Kay's inspiration with Smalltalk:

...this started a line of thought that said "take the hardest and most profound thing you need to do, make it great, and then build every easier thing out of it" [...] needed was a better "hardest and most profound" thing. Objects should be it. (p. 525)

Kay's claim that the "most powerful language in the world" could be written in "a page of code" was directly inspired by LISP, which could be described/implemented in itself in about a page. Kay took this as a design goal for early Smalltalk.

8. The idea of classes and instances derived from them has very definite Platonic roots. John Schoch, one of the LRG team, wrote of the relationship of classes and Platonic forms in his paper on Smalltalk-72 (Schoch 1979, pp. 70-71).
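The flavour of these premises is easiest to convey in concrete syntax. The lines below are ordinary expressions in later Smalltalk-80-style notation, offered as an illustrative sketch rather than as historical Smalltalk-72 code (whose notation differed): every value is an object, and all computation, arithmetic and control flow included, proceeds by sending messages.

    3 + 4.                             "the binary message #+ sent to the object 3"
    3 factorial.                       "a unary message sent to an integer"
    #(1 2 3) collect: [:n | n * n].    "a keyword message with a block argument"

    "Even conditionals are messages: #ifTrue:ifFalse: is sent to the
    Boolean object answered by the comparison 5 > 2."
    5 > 2
        ifTrue: [Transcript show: 'bigger'; cr]
        ifFalse: [Transcript show: 'smaller'; cr].

There is no separate category of 'statements' or 'operators' standing outside the object system; the six premises above are, in effect, the entire grammar.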
These six main ideas are the result of Kay's attempt to find the underlying factors that make a system both simple and capable of great complexity:

A fertilized egg that can transform itself into the myriad of specifications needed to make a complex organism has parsimony, generality, enlightenment, and finesse—in short, beauty, and a beauty much more in line with my own esthetics. I mean by this that Nature is wonderful at both elegance and practicality—the cell membrane is partly there to allow useful evolutionary kludges to do their necessary work and still be able to act as components by presenting a uniform interface to the world. (1996a, p. 530)

The cell and cell membrane are key metaphors—analogous to the "black boxes" of Actor-Network Theory—in understanding what Kay means by object orientation, but they are also somewhat misleading. Our temptation in looking at systems of almost any sort is to focus on the entities that we see, rather than on the relationships, interactions, and transformations that occur among them. So it is with the noun-centric Actor-Network Theory, as opposed to the more process-oriented "sociology of translation." Kay has repeatedly expressed his regret that he chose the term "object-oriented" instead of the more relational concept of "message-oriented." What is important about biological cells in Kay's systems-theory rendering of them isn't what they're made of, but rather their modes of interacting. The cell membrane is here read as a black-boxing mechanism in precisely Latour's sense; it means we can ignore the inner workings of the cell and focus our attention instead on the interface that the cell presents as a means of creating larger structures (like tissues and organs). Kay's point is that the building blocks can be—should be—transcended in a sense, putting the emphasis rather on the messages and interactions that in turn generate the many layerings of structure of complex systems. This shift of emphasis is, to my mind, akin to the semiotic shift described in Chapter 3 above: when the components of a system—alphabetic glyphs, gears, bits—reach a level of simplicity and standardization, the production of meaning no longer accrues to the individual component, but to their larger arrangement; we do not interpret the individual letters of the Roman alphabet as the Egyptians did with their hieroglyphics; we look instead for meaning in the larger literary structures of which they are built. This same theme is apparent in Kay's systems design exemplars: it is not the cell itself but its interrelations which give rise to complex biological systems; the US Constitution does not itself dictate anything about how an individual should behave, but rather sets up a process for how individuals can organize their interrelations; the ARPAnet, like the Internet, dictates nothing about the characteristics of connected systems; it only provides a means for their intercommunication.

Late binding in Smalltalk

The Smalltalk 'origin myth' involves a bet Kay made with LRG colleagues Dan Ingalls and Ted Kaehler that "the most powerful language in the world" could be described in "a page of code" (1996a, p. 533). Ingalls and Kaehler reportedly called his bluff, and so Kay spent the next several weeks working between 4 and 8 AM each day on the bet.
The result was the design for Smalltalk-72, the first working implementation of the language. In order to fit the entire design for a powerful programming language in less than one page (McCarthy's Lisp was describable in less), Kay's challenge was to define only the absolute foundational structures from which everything else can be grown. Such a language design has no 'features' as such; it is an exercise in the utmost parsimony and abstraction. To go from a single-page description of the foundations of Smalltalk to a working implementation, and then to the first prototypes of a graphical user interface (such as we all now use on our personal computers), the first paint program, structured document editing, music capture and editing, animation, and the LRG's research with children as end-users—all of which emerged with the Smalltalk-72 system—is testament to the power of a late-binding system. All of these 'features' were created in the Smalltalk-72 environment once it was up and running (after Dan Ingalls had implemented the basic design, Kay wrote, "there was nothing to do but keep going"). The core of the research conducted with kids was towards playing with and extending the tools and 'features' in the environment. Kay's team at PARC worked for four more years on the Smalltalk-72 environment, on Xerox' Alto minicomputers—which Kay's team called "interim dynabooks."9

9. The Alto, designed and constructed by PARC's Computer Science Lab, is often described as the first "personal computer." Far from the portable laptops of Kay's imagination, the Altos were the size of bar fridges, and sat under a desk, generating a lot of heat. Nevertheless, they were constructed in the thousands in the 1970s and were used as personal workstations by staff at PARC. (Hiltzik 1999)

Smalltalk-72 was ultimately overhauled for a variety of reasons, one of which was that it was in a sense too late-binding. The original design had each object in charge of defining the syntax of messages passed to it. For instance, a number object would specify how arithmetic operations should be called, and a text-string object would define how its operations would be called, and there was no necessary parallel or congruity between these. As users defined a larger and larger superstructure of tools and applications over time, the ways in which the Smalltalk system worked grew more varied and complex. This fit well with the ideal of an individual user gradually building the system to her own liking, but it made collaborative projects difficult—the way one user had constructed something might not make sense to another user (Goldberg & Ross 1981), and it was difficult to grow programs to large scale and complexity. The substantial result was that Dan Ingalls re-built Smalltalk again to create Smalltalk-76, which featured a much more consistent approach to the design; everything in Smalltalk-76 adhered very closely to the six main ideas outlined above: everything was an object, and the notion of objects being instances of classes which defined their behaviour led to a class hierarchy—which has become one of the fundamentals of object-oriented programming. The classic example (from Ingalls 1978) is this:

1. A rectangle object knows how to display itself onscreen, in response to messages specifying its size and position;

2. A window object is a kind of rectangle that acts as a frame for interactions and content. It receives messages generated by the movement of the mouse and its button clicks, if the pointer is within the window's boundaries;

3. A document window object is a kind of window which is designed to hold some text. It can respond to mouse-click messages by positioning the insertion point or selecting text.
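A hedged rendering of this chain in later Smalltalk-80-style syntax might look as follows. The class and selector names are illustrative inventions mirroring Ingalls' example rather than the historical code (containsPoint:, activate, and the drawing and editing helpers are assumed): each subclass inherits its parent's behaviour and specializes only what differs.

    Object subclass: #Rectangle
        instanceVariableNames: 'origin corner'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-Views'.

    Rectangle >> displayOn: aCanvas
        "Draw my outline; subclasses inherit this behaviour unchanged."
        aCanvas drawBorderFrom: origin to: corner

    Rectangle subclass: #Window
        instanceVariableNames: 'title'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-Views'.

    Window >> mouseDownAt: aPoint
        "Respond only to clicks that fall inside my own boundary."
        (self containsPoint: aPoint) ifTrue: [self activate]

    Window subclass: #DocumentWindow
        instanceVariableNames: 'text'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-Views'.

    DocumentWindow >> mouseDownAt: aPoint
        "Specialize the inherited behaviour: a click in a document
        window additionally places the text insertion point."
        super mouseDownAt: aPoint.
        self placeInsertionPointAt: aPoint

A DocumentWindow thus responds to display messages it never defines, inheriting them through Window from Rectangle, while overriding only the mouse behaviour it needs to change.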
This kind of hierarchical structure led to what Adele Goldberg called "incremental design" (1979): a user can create tools and media objects by taking pre-existing building blocks and then creating new versions of them that have added functionality or features (by "subclassing"—that is, specializing the behaviour of object classes in new classes derived from them). The message-sending syntax in Smalltalk-76 was made more consistent, which led to more readable programs (one "learns to write by reading, and to read by writing," wrote Goldberg in 1998), which, in turn, led to a greater emphasis on learners as designers, rather than just tinkerers—they were able to create more complex and multi-layered constructions.

[Sidebar: Reflections on "Epistemological Pluralism" — Kay and Goldberg's notion of learners as designers is an intriguing alternative to the two "cognitive styles" of Turkle and Papert's (1991) epistemological pluralism, the "hard" (engineering) and "soft" styles.]

Smalltalk went through at least one more substantial revision, culminating in the Smalltalk-80 that was released to the world beyond Xerox PARC in the early 1980s (a series of articles in BYTE, August 1981; Goldberg & Robson 1983). But it is important to bear in mind Kay's repeated admonition that, true to its very design, Smalltalk isn't ever finished but continues to be a vehicle for getting to the next place—a new dynamic media environment for children. By this treatment, Smalltalk isn't thus a better language to teach programming qua programming; it is a comprehensive model for late binding and for complete end-user control. Thus, Smalltalk's ultimate and ongoing goal, Kay suggests, is to transcend itself.

The Smalltalk environment

As Smalltalk evolved, especially after the 1976 design, the Smalltalk environment seems to have become more important than the language per se. This was, in a sense, what Kay was after; the language itself was to be merely a vehicle for getting to a place where one could build one's own tools and media to suit the task at hand, a "language for building languages." Adele Goldberg described Smalltalk-76 as "a basis for implementing and studying various user-interface concepts" (Goldberg & Ross 1981). Smalltalk's contributions to user-interface concepts are undoubtedly its most widely known facet—this is what Steve Jobs saw when he visited Xerox PARC in 1979 (Goldberg 1998, pp. 69-70). But Smalltalk's intent was not merely to make computers "user-friendly."
Kay wrote:

I felt that because the content of personal computing was interactive tools, the content of this new authoring literacy should be the creation of interactive tools by the children. (1996a, p. 544)

The creation of interactive tools positions the computer as a media device. Kay's ARPA background significantly framed the notion of computers as communications media rather than calculating machines, but Smalltalk's orientation meant that "objects mean multimedia documents; you almost get them for free. Early on we realized that in such a document, each component object should handle its own editing chores" (p. 538). You get them "for free" because there isn't a foundational divide between data structures and procedures, as in conventional programming models; once this conceptual limitation has been escaped, it is as straightforward to define objects that manipulate audio samples as ones that manipulate text. As early as Smalltalk-72, the LRG team were experimenting with bitmap painting programs and animation, music sequencing and editing, and WYSIWYG document editing. The now-commonplace idiom of "authoring" and the toolbar motif that we find in our word processors and graphics applications were pioneered here. Kay was struck by the computer's power to represent any other medium, be it textual, graphical, or aural. Kay and Goldberg wrote,

Every message is, in one sense or another, a simulation of some idea. It may be representational or abstract, isolated or in context, static or dynamic. The essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium itself, can be all other media if the embedding and viewing methods are sufficiently well provided. Moreover, this new "metamedium" is active—it can respond to queries and experiments—so that the messages may involve the learner in a two-way conversation. This property has never been available before except through the medium of an individual teacher. We think the implications are vast and compelling. (1976, p. 4)

These foundational principles, and their realization in the evolving forms of Smalltalk that emerged during the 1970s, begin to define what Kay has called (2004a) the "PARC genre" of personal computing, from which we have inherited certain elements today: authoring tools, direct-manipulation interfaces, exploratory systems, and so on. What we have not inherited is the ethic of mutability, wherein we would assume that every component of a system is open to be explored, investigated, modified, or built upon. The Smalltalk systems "crystallized a style of programming" (Kay 1978) in which the entire system, top to bottom, was open to the user's exploration and interaction, in which the line between tool and medium was deliberately blurred.

"DOING WITH IMAGES MAKES SYMBOLS"

Seymour Papert's research with children had drawn heavily on the theories of developmental psychologist Jean Piaget, with whom he worked in the late 1950s and early 1960s.
Piaget's developmental theory—which holds that children's cognition passes through four normal stages: sensorimotor, preoperational, concrete operational, and formal operational—has two important features with special relevance to Papert's work. The first is Piaget's conception of the child as epistemologist, actively building an understanding of the world and then reflecting upon that understanding (and, ultimately, on the process of building it). In his remembrance of Piaget in Time's 1999 issue on "The Century's Greatest Minds," Papert wrote:

Children have real understanding only of that which they invent themselves, and each time that we try to teach them something too quickly, we keep them from reinventing it themselves. (Papert 1999)

There is more here than a simple rendering of constructivism; it speaks instead to Piaget's (and Papert's) concern that the business of 'teaching' children about the world may run counter to their ability to make sense of it for themselves, if in teaching we attempt to overlay a way of seeing the world more appropriate to adults. Piaget's insistence was to take seriously the idea that through the various developmental stages, children's ways of thinking (or "ways of knowing") may be very different from adults'—not wrong or lacking, but different. Piaget thus saw children's thinking, children's epistemology, as intellectually interesting in itself.

The second feature of Piaget's theory of importance to Papert's work is the distinction between concrete and formal thinking which underlies Piaget's third and fourth stages. That there is a well-documented phenomenon wherein children cannot perform certain sorting and combining tasks (and which Piaget's research uses to differentiate these stages) does not necessarily imply that young children cannot approach this kind of thinking. Rather, Papert argued, with the right kind of representational apparatus, they may; hence Logo. This is an application of the epistemological insight above, and perhaps the best way to express it is to note, as Alan Kay did,

...that young children are not well equipped to do "standard" symbolic mathematics until the age of 11 or 12, but that even very young children can do other kinds of math, even advanced math such as topology and differential geometry, when it is presented in a form that is well matched to their current thinking processes. The Logo turtle with its local coordinate system (like the child, it is always at the center of its universe) became a highly successful "microworld" for exploring ideas in differential geometry. (1990, p. 194)

The computer, in Papert's conception, isn't a bridge to the formal operational stage so much as it is a means to approach the ideas inherent in mathematics while still remaining within the cognitive repertoire of the concrete stage—through the computer-mediated introduction of a "body-syntonic mathematics" (Papert 1980a, p. 205). This is what computers are good for, according to Papert. In a computer-rich culture, Papert wrote, "children may learn to be systematic before they learn to be quantitative!"

Kay worked forward from Papert's conception, but moved in a slightly different direction. The key idea for Kay was that children could use computing to gain a different sort of handle on the world.
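The turtle "microworld" Kay describes here is easy to make concrete. In Squeak-era Smalltalk the Logo turtle survives as the Pen class, so the sketch below uses Smalltalk rather than Logo syntax (and assumes a default Pen drawing on the screen). A circle is never mentioned; it emerges from a purely local, differential rule: go forward a little, turn a little.

    | turtle |
    turtle := Pen new.
    turtle place: 400 @ 300.          "start somewhere on the display"
    1 to: 360 do: [:i |
        turtle go: 2; turn: 1].       "step in the turtle's own frame, bend one degree"

Three hundred and sixty small bends add up to 360 degrees of total turning, a differential-geometry fact about closed curves that the child meets bodily, as a way of walking, before ever meeting it symbolically.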
Kay, following McLuhan, felt that new media come with new ways of thinking about the world. The relationship to Piaget per se was downplayed in favour of the Anglo-American developmental psychologist Jerome Bruner, who in the 1960s postulated three mentalities: enactive, iconic, and symbolic; and, rather than being strictly developmental (i.e., stage-ordered), these mentalities developed in response to environmental factors (Bruner 1966, p. 10ff).

The work of Papert convinced me that whatever interface design might be, it was solidly intertwined with learning. Bruner convinced me that learning takes place best environmentally and roughly in stage order—it is best to learn something kinesthetically, then iconically, and finally the intuitive knowledge will be in place to allow the more powerful but less vivid symbolic processes to work at their strongest. This led me over the years to the pioneers of environmental learning: Montessori Method, Suzuki Violin, and Tim Gallwey's Inner Game of Tennis, to name just a few. (Kay 1990, p. 195)

The word "interface" here is the key one; prior to the 1970s, "user interface" had a different meaning. Kay's breakthrough idea—though he would no doubt say that he was drawing on precursors from the 1960s (Sketchpad, GRAIL,10 and Logo)—was that 'computing' could and should operate on more than just the symbolic, abstract level; that it can and should have enactive (kinesthetic) and iconic (image-based) modalities, since computers were media, and not just tools. But the importance of enabling the enactive and iconic mentalities in computing is not merely to allow computers to be accessible to children in one or another cognitive 'stage,' but because, Kay said, "no single mentality offers a complete answer to the entire range of thinking and problem solving. User interface design should integrate them at least as well as Bruner did in his spiral curriculum ideas" (Kay 1990, p. 195). The goal was not to make computers available to children, but "to find the equivalent of writing—that is, learning and thinking by doing in a medium—our new 'pocket universe'" (1996a, p. 552).

10. GRAIL was an early but sophisticated pen-and-tablet graphics system developed at the RAND Corporation in the late 1960s.

The important thing about language and reading and writing, Kay noted, is that very young children use the same English language as poets, scientists, and scholars; what differs isn't the language, but the context and the sophistication of the user. But as Piaget's and Bruner's work underscores, the relative sophistication of these users is not simply a matter of degree, but of style. Kay read a study by the French mathematician Jacques Hadamard (1954) on the personal reflections of over 100 leading mathematicians and scientists; Hadamard found that very few of them reported thinking about mathematics in a "symbolic" way (i.e., the way it is written, and the way it is taught in school); most reported thinking in terms of imagery, and a significant number (including Albert Einstein) reported that their sensibilities about mathematics had a kinesthetic basis. Kay wrote that Hadamard's study indicates "strongly that creativity in these areas is not at all linked to the symbolic mentality (as most theories of learning suppose), but that the important work in creative areas is done in the initial two mentalities—most in the iconic (or figurative) and quite a bit in the enactive" (1990, p. 195). Of course mathematics as a 'language' is the very model of abstract, symbolic representation, and undoubtedly this is essential to the communication of its ideas; but this does not imply that mathematicians themselves necessarily think this way.

Bruner's enactive, iconic, and symbolic are to be thought of as mentalities rather than hierarchical stages; they may develop in sequence, but we do not move 'out' of the earlier stages.11 That is to say, what is 'developmental' about the three mentalities may be developmental only by virtue of the contexts of exposure and use. Kay sums this up nicely:

The world of the symbolic can be dealt with effectively only when the repetitious aggregation of concrete instances becomes boring enough to motivate exchanging them for a single abstract insight. (1984, p. 6)

The implication for Kay's evolving notion of user interface is that "no single mentality offers a complete answer to the entire range of thinking and problem solving" (Kay 1990, p. 195). The Smalltalk environment, thus, had to be a system which could support interaction in all three modes, and its development over three decades can be seen as a progression towards a greater embodiment of this thinking. The guidelight for this development was Kay's neat synthesis of Bruner's schema: "doing with images makes symbols." Users of computer media, therefore, should be doing something, be it pointing, moving, or manipulating objects on screen; those objects should have a visual, or iconic, representation. This has an obvious developmental application: users can begin by manipulating concrete representations (cf. Montessori's use of manipulatives) and gradually build the familiarity which allows more abstract, symbolic modalities.

11. Goldman-Segall's "epistemological attitudes" (1998, p. 244ff) is a related thought, itself in reaction to Papert & Turkle's 'hard' and 'soft' styles. That epistemological attitudes are conceived as "frames" puts a more contextual and dialogical light on it, rather than innate (or even learned, but stable) qualities.

WAYS OF KNOWING: NARRATIVE, ARGUMENTATION, SYSTEMS THINKING

Kay's insight at the end of the 1960s was that a new age of personal computing was on the horizon, in which

...millions of potential users meant that the user interface would have to become a learning environment along the lines of Montessori and Bruner [...] early on, this led to a 90-degree rotation of the purpose of the user interface from "access to functionality" to "environment in which users learn by doing." This new stance could now respond to the echoes of Montessori and Dewey, particularly the former, and got me, on rereading Jerome Bruner, to think beyond the children's curriculum to a "curriculum of user interface." (1996a, p. 552)

What is a curriculum of user interface? Perhaps the best way to answer this is to look to another taken-for-granted medium—the printed book—and try to draw an analogy.
As Marshall McLuhan has eloquently shown—and scholars such as Jack Goody, Walter Ong, and Elizabeth Eisenstein have elaborated—alphabetic literacy of the kind nurtured by the printing revolution of the Early Modern period has conditioned our thinking so deeply that we can barely imagine what it might have been otherwise. McLuhan's insights into alphabetic culture underscore the notion of an alphabetic curriculum that has been a keystone of Western education in modern times, to be sure, and really since the Greeks developed the alphabet in the modern sense: we do not merely read by way of alphabetic constructs; we organize our very being according to the kinds of logic prescribed by alphabetic literacy. Alphabetic literacy as pioneered by the classical Greeks, and especially print literacy as it appeared in the early 17th century, has had profound implications for how we know the world: how we represent it, and the kinds of assumptions we make about it. One of the most profound implications is the development of alternatives to narrative expression. Kay pays special attention to this dynamic in the development of modernity. Of narrative and narrative-based ways of understanding the world, Kay writes:

When one proverb contradicts another, it doesn't matter—just as it doesn't matter that the movie you liked last night contradicts the movie you liked last week. The important thing about stories is how good they are right now. Stories happen in the here and now; they create their own environment. Even when they purport to be about general knowledge, what really matters is how well they satisfy the listener. (Kay 1997, p. 17)

But we do not represent everything in narrative form. Since the early 17th century, Kay argues, more and more of the most influential cultural expressions in Western society have taken non-narrative forms:

If we look back over the last 400 years to ponder what ideas have caused the greatest changes in human society and have ushered in our modern era of democracy, science, technology and health care, it may come as a bit of a shock to realize that none of these is in story form! Newton's treatise on the laws of motion, the force of gravity, and the behaviour of the planets is set up as a sequence of arguments that imitate Euclid's books on geometry. (Kay 1995)

The most important ideas in modern Western culture in the past few hundred years, Kay claims, are the ones driven by argumentation, by chains of logical assertions that have not been and cannot be straightforwardly represented in narrative. Historians Hobart and Schiffman identify the 16th-century French thinker Francois Viete, the developer of generalized algebra and geometry, as the turning point to modern analytic thought—the more famous Rene Descartes having built substantially on Viete's foundations (Hobart & Schiffman 1998, p. 123ff).

But more recent still are forms of argumentation that defy linear representation at all: 'complex' systems, dynamic models, ecological relationships of interacting parts. These can be hinted at with logical or mathematical representations, but in order to flesh them out effectively, they need to be dynamically modeled.
This kind of modelling is in many cases only possible once we have computational systems at our disposal, and in fact with the advent of computational media, complex systems modeling has been an area of growing research, precisely because it allows for the representation (and thus conception) of knowledge beyond what was previously possible. In her discussion of the "regime of computation" inherent in the work of thinkers like Stephen Wolfram, Edward Fredkin, and Harold Morowitz, N. Katherine Hayles explains:

Whatever their limitations, these researchers fully understand that linear causal explanations are limited in scope and that multicausal complex systems require other modes of modeling and explanation. This seems to me a seminal insight that, despite three decades of work in chaos theory, complex systems, and simulation modeling, remains underappreciated and undertheorized in the physical sciences, and even more so in the social sciences and humanities. (Hayles 2005, p. 30)

Kay's lament too is that though these non-narrative forms of communication and understanding—both in the linear and complex varieties—are key to our modern world, a tiny fraction of people in Western society are actually fluent in them.

Sidebar: John Conway and A-Life

In 1970, Martin Gardner published a famous column in Scientific American celebrating a simple solitaire game of population dynamics created by British mathematician John Conway. The game was called Life. It is, very simply, a set of very simple algorithms that produce complex behaviour in cellular automata. The study of cellular automata pre-exists Conway's simulation, and has developed into a rapidly growing and complex branch of mathematics (Stephen Wolfram's 1200-page A New Kind of Science was a popular bestseller in 2002). But Life was simple enough to be implemented in a variety of simple forms: the small number of algorithms meant that the simulation could be run by hand on a checkerboard; novice programmers could easily produce Life as software in a few lines of code (I recall writing one in BASIC when I was an undergraduate; a short sketch of the rules in code appears a little further on). The important thing about Life is that it demonstrates powerfully how very simple systems can produce complex behaviour, and that the business of modelling them, mathematically, computationally, is trivially easy.

In order to be completely enfranchised in the 21st century, it will be very important for children to become fluent in all three of the central forms of thinking that are now in use. [...] the question is: How can we get children to explore ways of thinking beyond the one they're "wired for" (storytelling)12 and venture out into intellectual territory that needs to be discovered anew by every thinking person: logic and systems "eco-logic?" (1996c)

We can learn many things as children in a village culture. We can learn how to live our lives successfully. We can learn what the culture believes. We can learn how to hunt and fish and farm. We can learn a lot of things simply by watching others. But school was invented so people could learn the hard things, the things we don't learn naturally. School was invented so we could learn the created things that actually require us to change what's inside our heads, to learn what Seymour Papert calls powerful ideas. (Kay 1997, p. 18)

In this we get Kay's argument for 'what computers are good for' (not to mention a particular notion of what schools might be good for). It does not contradict Papert's vision of children's access to mathematical thinking; rather, it generalizes the principle, by applying Kay's vision of the computer as medium, and even metamedium, capable of "simulating the details of any descriptive model." The computer was already revolutionizing how science is done, but not general ways of thinking. Kay saw this as the promise of personal computing, with millions of users and millions of machines.

The thing that jumped into my head was that simulation would be the basis for this new argument. [...] If you're going to talk about something really complex, a simulation is a more effective way of making your claim than, say, just a mathematical equation. If, for example, you're talking about an epidemic, you can make claims in an essay, and you can put mathematical equations in there. Still, it is really difficult for your reader to understand what you're actually talking about and to work out the ramifications. But it is very different if you can supply a model of your claim in the form of a working simulation, something that can be examined, and also can be changed. (2002b)

12. Kay is presumably drawing on Bruner's notion of a foundational "narrative construal of reality" (Bruner 1991; 1996).
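The Life rules mentioned in the sidebar above make a neat instance of exactly this kind of "working simulation." Here is a minimal sketch in Smalltalk-80-style syntax rather than the BASIC recalled there; LifeGrid and its instance variables are inventions for this illustration (live cells kept as a Set of Points, initialization omitted), and the entire 'physics' of the simulation is the pair of rules in the step method.

    Object subclass: #LifeGrid
        instanceVariableNames: 'liveCells extent'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-Life'.

    LifeGrid >> liveAt: aPoint
        ^ liveCells includes: aPoint

    LifeGrid >> liveNeighboursOf: aPoint
        "Count the live cells among the eight surrounding positions."
        | count |
        count := 0.
        -1 to: 1 do: [:dx |
            -1 to: 1 do: [:dy |
                (dx = 0 and: [dy = 0]) ifFalse: [
                    (self liveAt: aPoint + (dx @ dy))
                        ifTrue: [count := count + 1]]]].
        ^ count

    LifeGrid >> step
        "Conway's rules: a live cell with two or three live neighbours
        survives; a dead cell with exactly three live neighbours is born."
        | next n |
        next := Set new.
        1 to: extent x do: [:x |
            1 to: extent y do: [:y |
                n := self liveNeighboursOf: x @ y.
                ((self liveAt: x @ y)
                    ifTrue: [n = 2 or: [n = 3]]
                    ifFalse: [n = 3])
                        ifTrue: [next add: x @ y]]].
        liveCells := next

Everything one watches in a running Life world, the gliders, oscillators, and runaway growth, is generated by those few lines; the complexity belongs to the interactions, not to the components, which is the lesson Kay draws from systems like this.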
The computer is thus to be seen as a modelling tool. The models might be relatively mundane—our familiar word processors and painting programs define one end of the scale—or they might be considerably more complex. It is important to keep in mind that this conception of computing is in the first instance personal—"personal dynamic media"—so that the ideal isn't simulation and modelling on some institutional or centralized basis, but rather the kind of thing that individuals would engage in, in the same way in which individuals read and write for their own edification and practical reasons. This is what defines Kay's vision of a literacy that encompasses logic and systems thinking as well as narrative.

And, as with Papert's enactive mathematics, this vision seeks to make the understanding of complex systems something to which young children could realistically aspire, or that school curricula could incorporate. Note how different this is from having a 'computer science' or an 'information technology' curriculum; what Kay is describing is more like a systems-science curriculum that happens to use computers as core tools:

So, I think giving children a way of attacking complexity, even though for them complexity may be having a hundred simultaneously executing objects—which I think is enough complexity for anybody—gets them into that space in thinking about things that I think is more interesting than just simple input/output mechanisms. (1996a, p. 597)

WHAT IS LITERACY?

The music is not in the piano. - Alan Kay

The past three or four decades are littered with attempts to define "computer literacy" or something like it. I think that, in the best cases at least, most of these have been attempts to establish some sort of conceptual clarity on what is good and worthwhile about computing. But none of them have won large numbers of supporters across the board. Kay's appeal to the historical evolution of what literacy has meant over the past few hundred years is, I think, a much more fruitful framing.
His argument is thus not for computer literacy per se, but for systems literacy, of which computing is a key part. In a lecture (2002a), Kay said, "Every idea, no matter how revolutionary it may appear, is built on previous ideas.... What interests me ... is adding something more to literacy. And this is a grand tradition." Drawing a profound example from the history of literacy in Europe, Kay wrote in 1998 that

...we'll know if we have the first Dynabook if we can make the end-user experience one of "reading and writing" about "powerful ideas" in a dynamic form, and to do this in such a way that large percentages of the bell curve can learn how to do this. When Martin Luther was in jail and contemplating how to get the Bible directly to the "end-users" he first thought about what it would take to teach Latin to most Germans. Then he thought about the problems of translating the Bible to German. Both were difficult prospects: the latter because Germany was a collection of provinces with regional dialects, and the dialects were mostly set up for village transactions and court intrigues. Interestingly, Luther chose to "fix up" German by restructuring it to be able to handle philosophical and religious discourse. He reasoned that it would be easier to start with something that was somewhat familiar to Germans who could then be elevated, as opposed to starting with the very different and unfamiliar form of Latin. (Not the least consideration here is that Latin was seen as the language of those in power and with education, and would partly seem unattainable to many, e.g. farmers, etc.) (Kay 1998a)

That this is a massive undertaking is clear in the Luther example, and the size of the challenge is not lost on Kay. Reflecting on the difficulties they faced in trying to teach programming to children at PARC in the 1970s, he wrote that

the connection to literacy was painfully clear. It is not just enough to learn to read and write. There is also a literature that renders ideas. Language is used to read and write about them, but at some point the organization of ideas starts to dominate the mere language abilities. And it helps greatly to have some powerful ideas under one's belt to better acquire more powerful ideas. (Kay 1996a, p. 545)

Because literacy is about ideas, Kay connects the notion of literacy firmly to literature:

What is literature about? Literature is a conversation in writing about important ideas. That's why Euclid's Elements and Newton's Principia Mathematica are as much a part of the Western world's tradition of great books as Plato's Dialogues. But somehow we've come to think of science and mathematics as being apart from literature. (2002b)

There are echoes here of Papert's lament about "mathophobia"—not fear of math but the fear of learning (Papert 1980, pp. 38-40)—that underlies C. P. Snow's "two cultures," and which surely underlies our society's love-hate relationship with computing. Kay's warning that too few of us are truly fluent with the ways of thinking that have shaped the modern world—logical argument and systems dynamics—finds an anchor here. How is it that Euclid and Newton, to take Kay's favourite examples, are not part of the canon, unless one's very particular scholarly path leads there? We might argue that we all inherit Euclid's and Newton's ideas, but in distilled form.
But this misses something important, and I know I've missed something important in my understanding of math and science. Kay makes this point with respect to Papert's experiences with Logo in classrooms:

Despite many compelling presentations and demonstrations of Logo, elementary school teachers had little or no idea what calculus was or how to go about teaching real mathematics to children in a way that illuminates how we think about mathematics and how mathematics relates to the real world. (1997, p. 19)

The problem, in Kay's portrayal, isn't "computer literacy"; it's a larger one of familiarity and fluency with the deeper intellectual content, not just that which is specific to the math and science curriculum. Kay's diagnosis runs very close to Neil Postman's critiques of television and mass media (Postman was a member of the advisory board for the Viewpoints Research Institute until his death in 2003): that we as a society have become incapable of dealing with complex issues. Postman charges that public argument on the scale of that published in and around the US Constitution would be impossible today, because the length and depth of the argumentation simply would not fit in a television format, newspapers would not print it, and too few people would buy it in book format (Postman 1986).

Being able to read a warning on a pill bottle or write about a summer vacation is not literacy and our society should not treat it so. Literacy, for example, is being able to fluently read and follow the 50-page argument in Paine's Common Sense and being able (and happy) to fluently write a critique or defense of it. (Kay 1996a, p. 548)

Another example of "literacy" that Kay repeatedly mentions is the ability to hear of a disease like AIDS and to recognize that a "disastrous exponential relationship" holds:

Many adults, especially politicians, have no sense of exponential progressions such as population growth, epidemics like AIDS, or even compound interest on their credit cards. In contrast, a 12-year-old child in a few lines of Logo [...] can easily describe and graphically simulate the interaction of any number of bodies, or create and experience first-hand the swift exponential progressions of an epidemic. Speculations about weighty matters that would ordinarily be consigned to common sense (the worst of all reasoning methods) can now be tried out with a modest amount of effort. (Kay 1994)

Surely this is far-fetched; but why does this seem so beyond our reach? Is this not precisely the point of traditional science education? We have enough trouble coping with arguments presented in print, let alone simulations and modeling. Postman's argument implicates television, but television is not a techno-deterministic anomaly within an otherwise sensible cultural milieu; rather, it is a manifestation of a larger pattern. What is 'wrong' here has as much to do with our relationship with print and other media as it does with television. Kay noted that "In America, printing has failed as a carrier of important ideas for most Americans" (1995). To think of computers and new media as extensions of print media is a dangerous intellectual move to make; books, for all their obvious virtues (stability, economy, simplicity), make a real difference in the lives of only a small number of individuals, even in the Western world.
Kay put it eloquently thus: "The computer really is the next great thing after the book. But as was also true with the book, most [people] are being left behind" (1995). This is a sobering thought for those who advocate public access to digital resources and lament a "digital divide" along traditional socioeconomic lines. Kay notes,

As my wife once remarked to Vice President Al Gore, the "haves and have-nots" of the future will not be caused so much by being connected or not to the Internet, since most important content is already available in public libraries, free and open to all. The real haves and have-nots are those who have or have not acquired the discernment to search for and make use of high content wherever it may be found. (Kay 2000a, p. 395)

What is to be done, then? This sort of critique puts the education system in the United States (and most Western countries, by obvious extension) in such bad light that many are tempted to despair. Kay's project is relentless, though: with or without the school system, the attempt to reach children with powerful ideas and the means of working with them is always worthwhile. Part of the key to seeing a way through this is to remember that education does not equal school, nor does television (or any other medium) represent an essential obstacle to education. "Television," says Kay, again recalling Postman's argument, "is the greatest 'teaching machine' ever created. Unfortunately, what it is best at teaching are not the most important things that need to be learned" (1995). But in this are also the seeds of an alternative: how could different media be harnessed in such a way as to lead in a more productive direction? How can children have any "embedded cultural experience" that encourages learning logic and systems thinking? The answer isn't in the design of any particular curriculum. Rather, Maria Montessori's vision inspires Kay: putting the emphasis on children's "absorbent minds" and the freedom to play and explore.

The objects in our system are instead a help to the child himself, he chooses what he wants for his own use, and works with it according to his own needs, tendencies and special interests. In this way, the objects become a means of growth. (Montessori 1972, p. 150)

Note that this entire conception only makes sense if we include objects—that is, artifacts, technologies, things—as bearers of practice, discourse, and culture, and ensure that we don't abstract away from the things themselves.

VISION: NECESSARY BUT NOT SUFFICIENT

We have here the elements of the thinking that produced the Dynabook vision and led to its prototypes at Xerox PARC with Smalltalk and the "interim dynabook" Alto computers. The Dynabook was not, by anyone's measure, a modest project. The fascinating thing is that while Kay did not succeed in establishing a new educational model based on a new kind of systems literacy, his project did, in a different sense, succeed: Kay's sense of a future populated by millions of personal computers has indeed come true, and the accuracy with which he predicted the shape of our computing experience is uncanny.
The easily portable laptop computer, with its graphic interface, connected via wireless network to a global information resource, and capable of storing and manipulating all sorts of media—as Kay described in 1972—is precisely what I am using to compose this text.13 But this is not the Dynabook. For all its capabilities, the machine I am sitting in front of as I write this—and, more importantly, the set of genres governing my practices with it—rests upon a far more staid, conventional understanding of literacy (and media) than Kay had in mind. Similarly, professional programming is today heavily influenced by the object-oriented paradigm—largely defined by Kay's team. And yet, in terms of actual practice, much of it is still "a better old thing" rather than the "almost new thing" Kay had in mind. And so despite his numerous important contributions,14 little of today's computing, personal or otherwise, comes close to the revolutionary thinking that formed the core of Kay's work, especially his sense of computing as "child's play." Contrary to Kay's method, children today are taught computing by way of systems first designed for adults. And, true to his insights, there is little that is transformative as a result.

It is common for fans of Kay's project to simply claim that he was "ahead of his time"—but surely this is a simplistic and superficial analysis. The question at hand, for me as an historian, is: what happened? What happened to Kay's vision over the next three decades that led certain elements to take hold and indeed revolutionize the world of computing, and other elements—perhaps those most important—to remain in obscurity?

13. In all seriousness, there is nothing prophetic about it; there are straightforward lines of influence (running largely through Apple Computer in the 1980s) leading directly from Kay's work in the 1970s to the machine I use today.

14. Kay's contributions are well recognized and indeed celebrated among computer scientists; among countless awards and distinctions, Kay received the ACM's A.M. Turing Award in 2004, one of the field's highest honours.

Chapter 5: Translating Smalltalk

The vision has been clear all along but vision is hard to critique effectively. The various implementations we have done, on the other hand, are complete earthly artifacts, and thus admit of criticism both by ourselves and others, and this has helped to move us forward, both on the earth and in our vision. - Dan Ingalls, 2005

Ingalls' quote speaks to an important distinction in this study: between the Dynabook and Smalltalk itself, between the vision and what Ingalls has called the image.1 The Dynabook vision emerged powerfully and clearly in Kay's writings in the early 1970s, and he was able to coalesce a team of colleagues around him—PARC's Learning Research Group (LRG)—on the strength of that vision. But we cannot follow the trajectory of the vision itself. If we are to follow the actors, in Latour's phrase, we have to look for tangible or visible manifestations. Fortunately, in the case of the Dynabook story, the tangible and visible is provided by Smalltalk, the programming language Kay designed in 1971 and which was soon after made real by Ingalls.
Smalltalk is not merely an effect of or a spin-off of the Dynabook idea; it is in many ways the embodiment of a major portion of the Dynabook—enormously conveniently so for this story. But, of course, Smalltalk itself is not the Dynabook: it is the software without the hardware, the vehicle without the driver, the language without the literature. Nevertheless, Smalltalk and its well-documented evolution provide an enormously valuable vector for the telling of the Dynabook story.

From the very beginning, there seems to have been an essential tension within Smalltalk and within its community. The tension concerns Smalltalk as the articulation of an educational vision—that is, its Utopian idealism—vs. Smalltalk as a powerful innovation in computer programming and software engineering—that is, its sheer technical sweetness.2 That being said, among the key characters in Smalltalk's history—Alan Kay, Dan Ingalls, Adele Goldberg, Ted Kaehler, and a host of others—it is difficult to label anyone clearly on one side or the other of this seeming divide. While Alan Kay has remained overtly focused on the educational vision for 35 years now, there can be no denying his role as a computer scientist, both in Smalltalk's early design and in any number of evolutionary moves since. Adele Goldberg, hired on at Xerox PARC in the early 1970s as an educational specialist, ironically became the chief steward of Smalltalk's trajectory into industry a decade later. Even Dan Ingalls, the programmer who actually built all the major versions of Smalltalk over the years, has written perhaps more eloquently than anyone about Smalltalk's purpose to "serve the creative spirit in everyone" (Ingalls 1981). But at several key moments in the project's history, the appeal of the educational or the technical ideal has pulled it in one direction or another. With each movement, Smalltalk has been translated somewhat, into a slightly new thing. To trace these movements is to watch the expansion of Smalltalk's 'network' in a wide variety of directions, but also to watch the translation of elements of the vision into more durable but decidedly different things. Arguably, the sheer variety of these translations and alignments—and the absence of any one clearly dominant thrust—has led to Smalltalk's marginality in any of its realms. Arguably too, this variety is what keeps it alive. I would like to note and trace here a few key translations, and to take the opportunity with each to point out the conceptual "black boxes" that result and which go on to set the conditions for subsequent shifts. Each translation represents the ecological shifting of aspects of the project—adding new allies; allowing for new inputs and influences; conforming or reacting to constraints and threats—and each of these shifts results in a notable difference in what Smalltalk is. As Smalltalk changes, so subtly does the Dynabook vision.

1. Interestingly—and almost undoubtedly coincidentally—a Smalltalk environment saves its data, state, programs, and entire memory in a file called an "image."

2. Arnold Pacey, in The Culture of Technology, wrote of the notion of technical sweetness: "the fact remains that research, invention, and design, like poetry and painting and other creative activities, tend to become compulsive. They take on purposes of their own, separate from economic or military goals" (1983, p. 81).
We will begin at Xerox PARC in the mid-1970s.

ORIGINS: SMALLTALK AT PARC IN THE EARLY YEARS

By 1973, the Learning Research Group at Xerox PARC had an "interim Dynabook" to serve as the basis of their research efforts. The Alto minicomputer—arguably the first "personal computer"—had begun to be manufactured in small quantities and distributed within Xerox PARC. Kay remembered: "It had a ~500,000 pixel (606x808) bitmap display, its microcode instruction rate was about 6 MIPS, it had a grand total of 128k, and the entire machine (exclusive of the memory) was rendered in 160 MSI chips distributed on two cards. It was beautiful" (1996, p. 534).3 Dan Ingalls ported the Smalltalk-72 system to the Alto (they had been developing it previously on a minicomputer), thereby establishing a basic platform for the next six years' work. Kay's team originally had 15 Alto computers, and they immediately put children in front of them, though this was difficult, owing to tensions between Xerox corporate and the relatively chaotic atmosphere at PARC. Kay writes:

I gave a paper to the National Council of Teachers of English on the Dynabook and its potential as a learning and thinking amplifier—the paper was an extensive rotogravure of "20 things to do with a Dynabook." By the time I got back from Minnesota, Stewart Brand's Rolling Stone article about PARC (Brand 1972) and the surrounding hacker community had hit the stands. To our enormous surprise it caused a major furor at Xerox headquarters in Stamford, Connecticut. Though it was a wonderful article that really caught the spirit of the whole culture, Xerox went berserk, forced us to wear badges (over the years many were printed on t-shirts), and severely restricted the kinds of publications that could be made. This was particularly disastrous for LRG, since we were the "lunatic fringe" (so-called by the other computer scientists), were planning to go out to the schools, and needed to share our ideas (and programs) with our colleagues such as Seymour Papert and Don Norman. (Kay 1996a, p. 533)

To compensate, the LRG team smuggled Alto computers out of PARC (strictly against corporate regulations) and into a Palo Alto school, and also brought local kids in to work with the machines (p. 544).

3. Compare the Alto's specs with Apple Computer's first-generation Macintosh, designed a decade later. According to PARC lore, the Alto was conceived—like Smalltalk—as a result of a bet, and the bravado of its creators. With money diverted from the LRG budget, Chuck Thacker from PARC's Computer Science Lab initiated the project while the executive in charge of the lab was away, having boasted that they could create a whole machine in three months (Kay 1996, p. 532).

This image has been removed because of copyright restrictions.
Figure 5.1: Kids in front of an Alto computer (from Goldberg 1988)

Adele Goldberg writes:

Most of the educational experimentation was done with specially conducted classes of students ages 12-13. These classes were held in cooperation with a local high school's mentally gifted minors program. The students were driven to PARC during the school day. Saturday classes were held for the children of PARC employees. (Goldberg 1998, p. 62)
Smalltalk-72 running on the Alto machines proved good enough for the first round of research. Kay and LRG colleague Diana Merry first worked on implementing an overlapping-window, mouse-driven screen interface, with text in proportional fonts. LRG team member Steve Purcell implemented the first animation system, and Ted Kaehler built a version of turtle graphics for Smalltalk. Larry Tesler created the first WYSIWYG page-layout programs. Music synthesis had already been implemented before the Alto, and so this was moved over and substantially developed on this first-generation platform.

This image has been removed because of copyright restrictions.
Figure 5.2: Original overlapping-window interfaces (from Kay & Goldberg 1976, p. 16)

All of this work was considerably enhanced when Ingalls, along with Dave Robson, Steve Weyer, and Diana Merry, re-implemented Smalltalk with various architectural improvements (a version unofficially referred to as Smalltalk-74), which brought enormous speed improvements to the system (Kay 1996a, pp. 542-543; Ted Kaehler, personal communication, Nov 2005).

With the Smalltalk-72 system, Adele Goldberg worked substantially on a scheme merging turtle graphics and the new object-oriented style, using the simple idea of an animated box on screen (named "Joe"). The box could be treated like a Logo turtle—that is, given procedural commands to move around the screen, grow and shrink, and so on—but it could also act as a 'class' from which specialized kinds of boxes could be derived, as the sketch below suggests.
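To make the idea concrete, here is a minimal sketch in later, Smalltalk-80-style syntax; Smalltalk-72's actual notation was quite different, and these class and method names are illustrative reconstructions rather than Goldberg's original code:

    Object subclass: #Box
        instanceVariableNames: 'size tilt'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-JoeBox'.

    Box >> grow: amount
        "A turtle-like command: change the box's size, then repaint it."
        size := size + amount.
        self redraw

    Box >> turn: degrees
        "Another turtle-like command: rotate the box on screen."
        tilt := tilt + degrees.
        self redraw

    "A specialized kind of box, derived from Box by subclassing:"
    Box subclass: #NamedBox
        instanceVariableNames: 'name'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-JoeBox'.

A child could then tell an individual box "joe grow: 15" or "joe turn: 30," while every NamedBox inherited all of Box's behaviour for free. (The redraw method, which would repaint the box on the display, is elided here.)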
Figure 5.3: Adele Goldberg's Joe Box in action (Kay & Goldberg 1976, p. 47)

Kay later reflected:

What was so wonderful about this idea were the myriad of children's projects that could spring off the humble boxes. And some of the earliest were tools! This was when we got really excited. For example, Marion Goldeen's (12 yrs old) painting system was a full-fledged tool. A few years later, so was Susan Hamet's (12 yrs old) OOP illustration system (with a design that was like the MacDraw to come). Two more were Bruce Horn's (15 yrs old) music score capture system and Steve Putz's (15 yrs old) circuit design system. (Kay 1996, p. 544)

This image has been removed because of copyright restrictions.
Figure 5.4: Marion's painting system (from Kay & Goldberg 1976, p. 33)

As exciting as these early successes must have been (and frankly, as impressive as they still sound today), the limitations of this early work—appearing along two fundamentally different axes—would significantly shape further development and point it in divergent directions.

Educational limitations

A serious problem Kay's team encountered was the extent to which children and novice users hit a "wall" or critical threshold in the complexity of their designs and constructions.4 Kay reflects:

The successes were real, but they weren't as general as we thought. They wouldn't extend into the future as strongly as we hoped. The children were chosen from the Palo Alto schools (hardly an average background) and we tended to be much more excited about the successes than the difficulties.... We could definitely see that learning the mechanics of the system was not a major problem. The children could get most of it themselves by swarming over the Altos with Adele's JOE book. The problem seemed more to be that of design. (Kay 1996a, p. 544)

[Figure: facsimile of Stewart Brand's 1972 Rolling Stone article on Xerox PARC, including a program listing for Spacewar; the caption notes that Xerox responded with a five-year publication ban on the researchers.]

Kay agonized over the difficulty of teaching a group of "nonprogrammer adults" from PARC to construct a simple rolodex-like information manager application in Smalltalk. He noted that they moved more quickly than the children, but this critical threshold still appeared earlier than he had anticipated:

They couldn't even come close to programming it. I was very surprised because I "knew" that such a project was well below the mythical "two pages" for end-users we were working within. Later, I sat in the room pondering the board from my talk. Finally, I counted the number of nonobvious ideas in this little program. They came to 17. And some of them were like the concept of the arch in building design: very hard to discover, if you don't already know them.
The connection to literacy was painfully clear. It isn't enough to just learn to read and write. There is also a literature that renders ideas. Language is used to read and write about them, but at some point the organization of ideas starts to dominate mere language abilities. And it helps greatly to have some powerful ideas under one's belt to better acquire more powerful ideas. (p. 545)

Despite the intractability of this problem, even three and a half decades later, Kay puts the focus in the proper place: the issue is a cultural one, rather than a technical problem that can be fixed in the next version. This is an issue that still hasn't been significantly addressed in the educational technology community, despite any number of attempts to create computing environments for children or 'novices.' Goldberg's emphasis on design, as in the "Joe box" example, seemed to Kay to be the right approach. But it was clear that the specifics of just how to approach the issue of design eluded them, and that they had a very long way to go.

4. Ted Kaehler more dramatically called it "the cliff" (Kaehler, personal communication, July 7, 2004).

Technological limitations

As innovative and successful as Smalltalk-72 had proved, its weaknesses soon showed through: weaknesses inherent in Kay's original parsimonious design. In Smalltalk-72, the actual syntax of the language in any particular instance was defined by the code methods attached to the particular object receiving the message. "This design came out of our assumption that the system user should have total flexibility in making the system be, or appear to be, anything that the user might choose" (Goldberg & Ross 1981, p. 348).
The trouble with this approach is that the flexibility of the system tends to preclude consistency. The team agreed that the flexibility of Smalltalk-72 was beyond what was desirable (Kay 1996a, p. 547). Adele Goldberg and Joan Ross explained:

Our experience in teaching Smalltalk-72 convinced us that overly flexible syntax was not only unnecessary, but a problem. In general, communication in classroom interaction breaks down when the students type expressions not easily readable by other students or teachers. By this we mean that if the participants in a classroom cannot read each other's code, then they cannot easily talk about it. (Goldberg & Ross 1981, p. 348)

A new Smalltalk seemed to be the next step. But the team was not in total agreement about how exactly this should be accomplished.

SMALLTALK'S INITIAL TRANSFORMATION AT XEROX PARC

Translation #1: A Personal Computer for Children of All Ages becomes Smalltalk-80

In early 1976, Alan Kay worried that his project was getting off track, and that concerns with the design and implementation of the system were leading the LRG farther away from research with kids. He wanted to refocus, and so he took his team on a three-day retreat under the title "Let's Burn Our Disk Packs"—in other words, let's scrap what we have now, return to first principles, and begin again (Kay 1996a, p. 549). He explained his feeling with reference to Marshall McLuhan's chestnut, "man shapes his tools, but thereafter his tools shape him," and wrote, "Strong paradigms like Lisp and Smalltalk are so compelling that they eat their young: when you look at an application in either of these two systems, they resemble the systems themselves, not a new idea" (p. 549).

Not surprisingly, the people who had spent the past four years building Smalltalk were not keen on throwing it all away and starting from scratch. Dan Ingalls, especially, felt that a new Smalltalk was indeed the right direction, and by now he had some well-developed ideas about how to do it better. Ingalls thus began the design of a major new version, called Smalltalk-76. This proved to be a turning point, as Ingalls' new thrust with Smalltalk would generate enormous momentum, cementing the technical foundations of a whole paradigm of computer programming.

A response to the technical limitations the LRG had found in their work with Smalltalk-72, Smalltalk-76 clearly identified and established itself as the paradigm for object-oriented programming. The Smalltalk-76 language and environment was based on a cleaner and more consistent architecture than Smalltalk-72's: here, everything in the system is an object; objects communicate by passing messages; objects respond to messages sent to them via the code in methods. Furthermore, every object is an instance of a class; "the class holds the detailed representation of its instances, the messages to which they can respond, and methods for computing the appropriate responses" (Ingalls 1978, p. 9). These classes are arranged in a single, uniform hierarchy of greater and greater specialization. Much of the actual practice of programming in such a system is the definition of new (lower) levels of this hierarchy: "subclassing," that is, taking a class which provides some functionality and extending it by defining a new class which inherits the old functionality plus some specialization.
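In Smalltalk-80-style syntax (a direct descendant of Smalltalk-76, whose surface notation differed slightly), subclassing looks like the following minimal sketch; the window classes here are hypothetical illustrations, not code from the Smalltalk-76 system:

    Object subclass: #Window
        instanceVariableNames: 'title'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-Windows'.

    Window >> title: aString
        "Set the window's title."
        title := aString

    Window >> display
        "The basic behaviour every window shares."
        Transcript show: 'drawing window: ', title; cr

    Window subclass: #ScrollingWindow
        instanceVariableNames: 'scrollBar'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-Windows'.

    ScrollingWindow >> display
        "Inherit the old functionality, then add the specialization."
        super display.
        Transcript show: '...and a scroll bar'; cr

Evaluating "(ScrollingWindow new title: 'Notes') display" exercises both levels of the hierarchy: the subclass reuses Window's method via super and contributes only its own refinement.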
Goldberg and Ross wrote:

The Smalltalk-76 system was created primarily as a basis for implementing and studying various user-interface concepts. It gave the users, mostly adult researchers, further ability in refining existing classes through the use of subclassing. This meant that the programmer could now modify a running model without creating a change to already existing examples of that model. Programming-by-refinement, then, became a key idea in our ability to motivate our users. (Goldberg & Ross 1981, p. 354)

Ingalls and his colleagues made the system into more and more of a working environment for themselves; more and more of the development of the system was done within the Smalltalk-76 system itself. A key point which is often overlooked in glosses of Smalltalk's capabilities is that the system was completely live, or "reactive" (Ingalls 1978), and so such changes to the system could be made on the fly, in real time (imagine changing how Microsoft Word works in the middle of writing a paragraph). In fact, parts of current Smalltalk environments were developed in the Smalltalk-76 environment at Xerox PARC.5

5. The significance of this is easy to miss, but the claim about lineage is not a trivial thing. Nearly all programming environments distinguish between a program—normally treated as a static text—and its execution. Smalltalk, like Lisp before it, can be written while the program is running, which means that software can be modified from within. Since Smalltalk is a live computing environment in which code dynamically co-exists with transient objects like state information and user input, later versions of Smalltalk were created within a Smalltalk-76 environment, and more recent versions constructed within these. In a sense, the Smalltalk-76 environment has upgraded itself a number of times in the past thirty years. See Ingalls 1983, pp. 24-25.
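What "live" means here is easiest to see in code. Continuing the hypothetical window example above, and using the reflective vocabulary of a modern Smalltalk such as Squeak (the compile: message; Smalltalk-76's actual interfaces differed), a class can be recompiled while an instance of it exists, and that instance immediately picks up the new behaviour, with no stop, recompile, and restart cycle:

    | w |
    w := ScrollingWindow new title: 'Live demo'.
    w display.

    "Redefine the method while w is alive in the running system."
    ScrollingWindow compile: 'display
        super display.
        Transcript show: ''...and two scroll bars''; cr'.

    w display.  "The very same object now runs the new method."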
There emerged a unique community of practice within Xerox PARC surrounding the Smalltalk-76 environment. Even Kay was "bowled over in spite of my wanting to start over. It was fast, lively, could handle big problems, and was great fun." The momentum of this community of practice firmly established some of Smalltalk's enduring and influential features: a multi-paned class "browser" which allowed one to quickly and easily traverse all the classes and methods in the system; an integrated, window-based debugging environment, in which errors in the system pointed the user directly to the methods requiring fixes; and the first consistent, system-wide windowing user interface, the prototype for the ones we all use today.

[Figure 5.5: A Smalltalk "browser," showing classes and methods arranged in arbitrary categories. This browser is a direct descendant of the ones developed by Larry Tesler in Smalltalk-76.]

By the late 1970s, Smalltalk's generic qualities had begun to be evident: the consistency of the system and the design methodologies it encouraged had an effect on the practices of its users. I deliberately mean to treat Smalltalk as an actor in this sense; here is an example of a tool which, once shaped, turns and shapes its users—or "eats its young," as Kay put it. This is the system which excited computer scientists about object-oriented programming, and the user-interface genre established by Smalltalk-76 was adhered to in Apple and Microsoft's later systems. A broad set of practices and approaches coalesced around Ingalls' new design:

When programmers started writing class definitions in these browsers, a new era of design began. The average size of a method tended to correspond to the screen space available for typing the method text (which consisted of around seven message expressions in addition to the method header information). Software evolved (actually it felt like software was molded like clay). The programmer could write a partial class description, create an instance to try out its partial capabilities, add more messages or modify existing methods, and try these changes out on that same instance. The programmer could change the behavior of software describing an instance while that instance continued to exist. (Goldberg 1998, p. 64)

The educational research that followed on the development of Smalltalk-76, however, was aimed not at children but at adult end-user programming. In 1978, Adele Goldberg led the LRG team through a watershed experience: they brought in Xerox upper management and taught them to construct a business simulation using Smalltalk. A key departure here was that these end-users were not exposed to the entire Smalltalk-76 environment, but to a special simulation framework designed for the exercise (Kay 1996a, p. 556ff; Goldberg 1998, pp. 65-66). Much of the research in the late 1970s seems to have taken the form of such domain-specific environments, focusing on design rather than on teaching the programming environment itself (Goldberg 1979; Goldberg & Ross 1981).

What's Different about Smalltalk

A few key features distinguish Smalltalk from almost all other software development environments. In Smalltalk, "everything is an object" capable of sending and receiving messages; there are no other conceptual constructs. Other object-oriented languages, such as C++ and Java, use objects along with other constructs like functions or specific data types drawn from mainstream programming traditions. Smalltalk's architecture and syntax are very simple, since everything in the system behaves in the same way.

Smalltalk is not a "compiled" language (like Pascal, C, and C++), in which source code files are batch-translated into machine-readable executables. Nor is it an "interpreted" language (like BASIC, Perl, or Python), in which source code files are "run" by a platform-specific interpreter. Instead, Smalltalk (like Java) runs in a "virtual machine," "bit-identically" on a wide variety of platforms. Java's virtual-machine architecture was drawn from Smalltalk.

Smalltalk is also its own development and runtime environment. Rather than executing programs on top of an underlying operating system layer which maintains file input-output and a filesystem, Smalltalk runs in an "image"—a single, live "file" that manages its own memory use, reads and writes itself to disk transparently when required, and which permanently maintains the entire state of the environment. It is this live and persistent image which allows Smalltalk to be changeable on the fly; other languages require that one make changes to source code files and then re-compile or re-run the files in order to make a change. Smalltalk images from the days of Smalltalk-80 (and even Smalltalk-76) are still in use, having been bootstrapped into modern versions.
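In a modern descendant such as Squeak, for example, the running environment can be asked to save itself from within; this particular message is Squeak's spelling of the operation, and earlier systems named it differently:

    "Write the whole live object memory to the image file and keep working."
    Smalltalk snapshot: true andQuit: false.

Classes, open windows, and half-finished objects all return in exactly that state the next time the image is started.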
Parallel to the Smalltalk-76 development was Kay's work on a machine called the NoteTaker. This was the first "portable" computer in the sense in which we understand it today; it was a lot bigger than a laptop, but Kay writes that he did use it on an airplane (1996a, p. 559). This isn't just a footnote; the important point about the NoteTaker, from the standpoint of Smalltalk's trajectory, is that for the first time since the Alto's introduction, Smalltalk was made to run on a non-Xerox processor. Ingalls and colleague Bruce Horn ported the Smalltalk-76 system over to the NoteTaker (making Smalltalk-78), which was built with the new, inexpensive microprocessor chips (such as would appear in the first microcomputers). So, despite the NoteTaker project being officially cancelled by Xerox management in 1978, Smalltalk had taken its first steps toward portability—that is, independence from Xerox' own hardware.

Goldberg reports that further work toward making Smalltalk run on other vendors' hardware was key to its continued evolution (1998, p. 69ff), convincing the team that

...Smalltalk would run well on standard microprocessors. We no longer needed to rely on microcoding Xerox' proprietary machines, so we decided it was time to expand the audience for Smalltalk. We decided to create a Smalltalk that the rest of the world could use. [...] In 1979 we asked Xerox for the right to publish Smalltalk, the language and its implementation and the applications we had built to test the Smalltalk model of computing. Xerox officially gave this permission, remarking that no one inside Xerox wanted Smalltalk. (Goldberg 1998, p. 71)

Outside Xerox, there was interest, and the popular history of computing records the occasion of Apple Computer's Steve Jobs and his team visiting Xerox in 1979, and coming away with substantial inspiration for their Macintosh project. Interestingly, what Jobs and his team really took away from their 1979 visit was the look of what Kay's team had designed, and not so much of how it worked; Jobs was so bowled over by the windows-and-menus interface that he ignored the dynamic object-oriented development environment and the local-area network connecting the Xerox workstations.

The symbolic importance of this event relates to these ideas 'escaping' from Xerox. The distillation of the Smalltalk-76 environment over the following three years into Smalltalk-80 made this motif central.
In preparing Smalltalk-80 for release, what was required was to abstract the language from any hardware assumptions, in order to allow implementations on any number of target platforms (Goldberg 1998, p. 73). Significantly, Kay was not a part of this development. In 1979, Kay took a sabbatical from Xerox, and did not return. Adele Goldberg led the group toward the definition of Smalltalk-80 and the publication of a series of books (Goldberg & Robson 1983; Krasner 1983; Goldberg 1984) and a special issue of BYTE magazine (Aug 1981) with a cover illustration showing a colourful hot-air balloon ascending from a tiny island with an ivory tower.

Xerox PARC in the Press

After Stewart Brand's 1972 article appeared, there were no publications from anyone in Kay's team until 1977, when Kay and Goldberg's paper "Personal Dynamic Media" and Kay's Scientific American article "Microelectronics and the Personal Computer" finally revealed a glimpse of their research to the outside world. But there would be no real details about Smalltalk until the 1980s. A BYTE magazine cover from 1978 made oblique reference to the "magical kingdom of Smalltalk," isolated on a rugged island, an image that would reappear on a 1981 cover, now showing Smalltalk escaping from the island via hot-air balloon (an image inspired by Jules Verne's The Mysterious Island, according to Dan Ingalls).

But Smalltalk was escaping to where? Certainly not to schools and schoolchildren; rather, Smalltalk-80 was headed for professional systems programming—electronics, banking, shipping—and academic computer science research. The flexibility and elegance of Smalltalk's development environment won it a small but dedicated following of systems programmers; this would be what Smalltalk was known for in programming circles. The resulting "black box" (in Latour's sense) was Smalltalk as an interesting dynamic programming environment for research and systems modelling, but far from the mainstream of either professional software development or personal computing.

Translation #2: From educational research platform to software development tool

Xerox licensed Smalltalk in 1980 to four hardware companies which had their own software divisions and could therefore participate in the documentation of its implementation in different contexts: Hewlett-Packard (HP), DEC, Apple, and Tektronix. Of these, Tektronix (an electronic instrument manufacturer rather than a computer company per se) did the most with Smalltalk, offering it with a short-lived line of research workstations and also embedding it in the hardware of its popular line of oscilloscopes (Thomas n.d.). Significantly, Smalltalk got a bigger boost in the late 1980s with the formation of a spinoff from Xerox called ParcPlace Systems, in which Goldberg and colleagues commercialized Smalltalk, selling licenses to more companies and maintaining a portable base system which would hedge against the language's fate being tied to any one hardware platform (Goldberg 1998, p. 80ff). Ultimately, two Smalltalk licensees came to dominate: IBM and Digitalk—the latter a spinoff from Italian business products company Olivetti, which ultimately merged with Goldberg's ParcPlace Systems; the company was acquired in 1999 by Cincom, a major software consulting house.
If this historical detail sounds somewhat arcane and far removed from the trajectory of the Dynabook vision, it should. This later history of Smalltalk, from the early 1980s on, has a decidedly different character from that which preceded it. The focus had shifted entirely away from education (with a handful of minor exceptions6) and toward leading-edge computer science research and development. Much of the activity in the Smalltalk community was academic, centered around teams at the Xerox PARC of the 1980s as well as research at the Universities of Massachusetts, Washington, Carleton, Tokyo, Dortmund, and others worldwide (ibid.). Ironically, far from its origins in personal computing, Smalltalk in use is found in the realm of big systems development: banking and finance, importing/exporting and shipping, health care, insurance, and so on.7 The website for Cincom Smalltalk boasts of "how the French fries you get at McDonalds are sorted by Cincom Smalltalk."

6. Adele Goldberg and Joan Ross wrote an article in the 1981 BYTE special issue on Smalltalk entitled "Is the Smalltalk-80 System for Children?"—the answer was a qualified yes, but it would appear that this article serves mostly to establish Smalltalk-80's intellectual tradition rather than to introduce new material. Smalltalk found favour as a teaching language in a few academic computer science departments, but remained very far from the mainstream.
7. See http://www.whysmalltalk.com

Smalltalk's greatest impact on the computing world, however, was its role in the establishment of object-oriented programming and design, which by now has become one of the major genres of contemporary information technology. Smalltalk may have been the technology at the core of this movement in the 1980s, but it was quickly overtaken by much larger populations of developers working in the C++ language (which, strictly speaking, was derived from the earlier Simula language rather than from Smalltalk, and which added object and class constructs to the popular C programming language). C++ was a much smaller conceptual
and practical leap for mainstream programmers used to working in static, procedural languages like C or Pascal; the consequence of that shorter leap, though, is that C++ has been called the worst of both worlds. Despite this, C++ grew in the 1990s to be the dominant object-oriented language, and its popularity was such that object-oriented programming became the new mainstream. Consider, as a measure of this, the fact that the U.S. "Advanced Placement" curriculum in computer science shifted from Pascal to C++ in 1999.

In 1996, Sun Microsystems released the Java language and development platform, an attempt to re-invent software development with the Internet in mind. Java is an object-oriented language much closer in spirit to Smalltalk, at least in that it was designed from the ground up with objects in mind (unlike C++, which was an adaptation of an older language and conceptual model), and with a virtual-machine architecture like Smalltalk's to ensure portability across a wide variety of platforms. According to programmer mythology,8 Sun Microsystems wanted "an industrial strength Smalltalk, written in C++." They got neither, but what Java did represent after its late-'90s release was an enormous shift of programming practice—and, perhaps more importantly, discourse—away from C++ (the U.S. Advanced Placement curriculum abandoned C++ for Java in 2003). Sun Microsystems spent the better part of the next decade fighting with Microsoft over this shift and who would be in control of it. Meanwhile, the Smalltalk community—consultants and developers at IBM, Digitalk, and a few others—continued on in the shadows of these enormous efforts. It is worth noting that while millions of people work in C++, Java, and Microsoft's related .NET on a daily basis, the almost ubiquitous characterization of these environments is that they are overly complex, badly designed and implemented, and a general curse on their users. The way Smalltalk developers talk of their chosen environment couldn't be more different.

8. My source for this 'mythology' is the collected wisdom and commentary on contemporary programming practice at the original WikiWikiWeb (http://c2.com/cgi/wiki). The WikiWikiWeb was begun in the mid-1990s by Ward Cunningham, a Smalltalk programmer at Tektronix who wanted to host a collaboratively authored and maintained collection of software "design patterns"—a methodology inspired by architect Christopher Alexander's A Pattern Language (1977). Cunningham and various colleagues (none of whom are still at Tektronix) became key figures in the OOP community, associated with the "design patterns" movement (see Gamma et al. 1995), and are also key figures in the newer "Extreme Programming" movement (see, e.g., Beck 2000). The WikiWikiWeb remains a central repository of commentary on these topics, boasting over 30,000 'pages' of collected information.

The second major translation of Smalltalk, then, is from a research project—at its origin an educational research project—to a marginal place within a much larger current of industrial practice in object-oriented programming. Smalltalk's status within this larger current sometimes reads like an origin myth ("In the beginning, there was Smalltalk..."). The related black-boxing of Smalltalk in the context of this historical shift relegates it to an intellectually interesting but ultimately 'academic' system, too far from evolving mainstream concerns to make much practical difference. Far from being the revolutionary step Kay had hoped for, Smalltalk was merely subsumed within the emerging object-oriented paradigm.

Translation #3: From "designers" to "end-users"

It is against these large-scale, corporate systems trends that the Dynabook's trajectory through the 1980s and 1990s must be evaluated. After 1980, Smalltalk almost completely shed its educational connections; very little of Smalltalk-80 was ever seen by children. Ironically, it was Adele Goldberg, who came to Xerox PARC as an educational specialist and who led the research with children there for years, who now led Smalltalk's move into the wider world of professional programming.9 It is important to reflect on just how far Smalltalk had travelled from the Dynabook vision, and it would have been a reasonable observation in the mid-1980s that the two ideas had finally parted. Benedict Dugan's commentary on this shift invokes Frankfurt-school theories of "technical rationalization":

Clearly, at some point, the original, idealistic goals of Kay and company became commercialized.
By commercialized, I mean that the design focus shifted away from social and political concerns, to an interest in efficiency. By exploiting the ability of class hierarchies to organize knowledge and share code, the designers created a language which was promoted for its ability to facilitate extremely efficient software engineering. Lost was the powerful notion of a programming system which would amplify the human reach and make it possible for novices to express their creative spirit through the medium of the computer. (Dugan 1994)

9. In 1984, Adele Goldberg became president of the Association for Computing Machinery, computing's largest professional association. Goldberg did remain connected to education, however; her work in the 1990s with NeoMetron employed Smalltalk in the design of learning management systems (Goldberg et al. 1997).

Despite the finality of Dugan's statement, Smalltalk was far from finished; the contributions to computer science embodied in Smalltalk are still being realized and reconsidered today. I will not go into detail here on the long-term impact of Smalltalk on software engineering. Instead, I want to focus specifically on the rise of user-interface development.

Conventionally, the graphical user interface as we know it today descends more or less from the Smalltalk environments at Xerox in the 1970s, via Steve Jobs' visit to Xerox PARC and the subsequent design of Apple's Macintosh computers; what Xerox couldn't bring to market, Apple could, and in a big way. The actual story is a little more complicated than this, not surprisingly. In the first place, Xerox did attempt to commercialize some of the personal computing research that came out of PARC. In the late 1970s, a machine called the Xerox Star was developed, and it was sold in the early 1980s.10 The Star was an attempt to market what Kay calls the "PARC genre" of computing: a pointer-driven graphical user interface, rich document-production tools ("desktop publishing"), peer-to-peer networking, and shared resources like a laser printer. The Star's commercial failure probably has more to do with its selling price—close to $20,000 each, and they were sold in clusters of three along with a laser printer—in an era when the "personal computer" was being defined by Apple and IBM's machines costing around $3000. Nevertheless, the Star is the machine which first brought the modern graphical desktop environment to market; while overlapping windows and menu-driven interaction were pioneered in Smalltalk, the process of turning these ideas into a packaged product sold to an audience happened with the development of the Star. Interestingly, the use of icons—not a feature of the Smalltalk interface—was pioneered in the Star interface as a means of giving a direct-manipulation interface to mundane, hardware-defined things like disks and printers and files.

10. The Xerox Star was sold in reasonably large quantity for the day; the Wikipedia entry on the Star states that about 25,000 of the machines made it to market—as corporate office technology. http://en.wikipedia.org/wiki/Xerox_Star (Retrieved Sept 15, 2005)

I do not mean to give the impression that the Xerox Star was completely distinct from the Smalltalk project—there was some significant overlap in the personnel of the two
projects—but rather to point out yet another considerable translation which occurred in the packaging and marketing of the Star. In Kay's Dynabook concept, end-users were seen as designers and developers; system tools were accessible from top to bottom, and the "late binding" philosophy led to an expectation that the details of how a user actually worked with a system would be defined, on an ongoing basis, by that user.11 This clearly presents difficulties from a business perspective: how on earth does one market such a concept to a potential audience? It is far too vague. The Xerox Star, then, was a distillation of one possible scenario of how typical office-based end-users would work. The Star operating system was not Smalltalk-based, so it would not be possible to easily change the configuration; instead, the Star's developers worked according to a now-commonplace model: usability research. They were able to draw upon several years of internal Xerox use of the Alto computers, with and without Smalltalk—there were over 2000 of them in use at Xerox in the 1970s—and they developed a detailed model of tasks and use cases: what would end-users want to do with the Star, how would they go about it, and how should the user interface be structured to enable this?

11. Note that in this formulation, "user" refers to an actual individual, as opposed to an hypothetical "User" for whom the system has been designed.

The shift here is from a notion of participatory designers (in Kay's conception) to end-users as we understand the term now. For Thierry Bardini and August Horvath, in their article "The Social Construction of the Personal Computer User" (1995), this is the point where the end-user role is formally established. Employing the language of actor-network theory, they write:

The first wave of researchers from SRI to PARC helped in opening the concept of design by the reflexive user, and the second wave got rid of the reflexive user to create a methodology of interface design based on a user model and task analysis. In this last translation, the very utility of the reflexive user... was questioned. The result was a new set of principles for the design of the user interface and its new look and feel: icons and menus. The first step of this new methodology is also the last that we consider for this part of the history. Here begins the
In my reading of this history, it is imperative that we question whether the "realization" (in Bardini and Horvath's language) of the User and the attendant reification of the qualities of user-friendliness (ease of use, ease of learning, not demanding too m u c h of the user) is something w h i c h we are prepared to accept. It seems to me that the "reflexive users" of the early P A R C research are in fact more real than the hypothetical one(s) inscribed i n User-Centered Design—which performs a substitution not unlike what focus groups do for markets or audiences. The earlier reflex- ive users at least had agency i n their scenarios, as they actively shaped their computing media. The later Users, though i n vastly greater numbers, must be satisified w i t h being folded into a pre-shaped role. It is, of course, indisputable that this latter version has become the dominant one. But one of the consequences of this closure is the rise of a genre of computing literature (especially w i t h i n education) w h i c h diagnoses the problems stem- m i n g from the cultural disconnect between "engineers" and "end users" (e.g., see Shields 1995; Rose 2003). This diagnostic tendency is rarely constructive (see Papert's 1987 defense of computer cultures); rather, it more effectively serves to further reify these oppositional roles. It is significant, I think, that Kay's original (and ongoing) conception of personal computing (especially i n education) is an alternative to the now-commonplace notion of Chapter 5: Translating Smalltalk 162 e n d - u s e r s . B a r d i n i a n d H o r v a t h s u g g e s t t h a t K a y ' s " r e f l e x i v e u s e r s " a r e a n a r t i f a c t o f h i s t o r y — b a c k w h e n c o m p u t e r s w e r e o n l y f o r ' c o m p u t e r p e o p l e ' — b u t I a m n o t y e t / q u i t e c o n v i n c e d . N o r a r e t h e p r o p o n e n t s o f a m o v e m e n t i n t e c h n o l o g y d e s i g n a n d p o l i c y w h i c h e m e r g e d i n S c a n d i n a v i a i n t h e l a t e 1 9 7 0 s a n d e a r l y 1 9 8 0 s c a l l e d participatory design ( E h n 1 9 8 8 ) , w h i c h s a w a s p e c i f i c a l l y p o l i t i c a l d i m e n s i o n i n t h e d i v i s i o n o f l a b o u r — a n d p o w e r — b e t w e e n w o r k e r s ( i n c r e a s i n g l y s e e n a s e n d - u s e r s ) a n d t h e d e s i g n e r s a n d e n g i n e e r s r e p r e - s e n t i n g t h e i n t e r e s t s o f c o r p o r a t e p o w e r . T h e p a r t i c i p a t o r y d e s i g n m o v e m e n t s o u g h t t o a d d r e s s t h i s h e a d - o n , w i t h d i r e c t i n v o l v e m e n t f r o m l a b o u r u n i o n s . 1 2 I t i s , I t h i n k , i n s t r u c t i v e t h a t a m o v e m e n t w h i c h s i g n i f i c a n t l y a d d r e s s e s i s s u e s o f p o w e r i n c o m p u t i n g e n v i r o n m e n t s s h o u l d s e e k t o r e - i n s c r i b e t h e u s e r . T H E M I C R O C O M P U T E R R E V O L U T I O N O F T H E L A T E 1970s . J T h e h i s t o r y o f t h e a d v e n t o f t h e " P C " — t h e p e r s o n a l m i c r o c o m p u t e r 1 3 a s w e h a v e c o m e t o k n o w i t — h a s b e e n c o p i o u s l y d o c u m e n t e d a n d i s n o t a t o p i c I w i l l d e v o t e m u c h t i m e t o h e r e ; t h e r e a r e s e v e r a l s t a n d a r d h i s t o r i e s , t h e P B S d o c u m e n t a r y s e r i e s Triumph of the Nerds ( C r i n g e l y 1 9 9 6 ) i s p r o b a b l y s u f f i c i e n t a s a t o u c h s t o n e . 
The storyline has become mostly conventional: unkempt hackers in their (parents') northern California garages discovered what IBM—the market leader in computing—had missed, and, as a result, tiny startup companies like Apple and Microsoft had the opportunity to make hay for themselves, eventually eclipsing IBM. Significantly, these companies—aided in no small part by IBM itself—succeeded in making the personal computer a necessary part of modern life: we soon became convinced that we needed them in every office, every home, every classroom.

Translation #4: From a software research tradition to a "gadget" focus

The interesting thing about the conventional story of the microcomputer vanguard is that what had been accomplished at Xerox PARC in the 1970s is almost entirely absent from it; the microcomputers' creators (that is, proto-billionaires like Steve Jobs and Bill Gates) operated in a world nearly perfectly isolated from the kind of thinking Alan Kay was engaging in. Apple Computer's promethean role was mostly as a hardware manufacturer: they created devices—boxes—that a computer hobbyist could afford. Microsoft's part was to market a rudimentary operating system for IBM's entry into the PC market. Both of these contributions—significant as they were in hindsight—were astonishingly unsophisticated by PARC's standards. Kay reportedly "hated" the new microcomputers that were coming out: "there was no hint that anyone who had ever designed software was involved" (1996a, p. 554).

But this is not simply a matter of economics: the difference is not explained by the size of the budgets that separated, for instance, Xerox from Apple in 1978 or 1979. It is rather a cultural difference; Kay's work—and that of his colleagues at PARC—drew upon a lengthy academic tradition of computing: these were all people with PhDs (and not necessarily in computer science, as Kay points out, but in "established" disciplines like mathematics, physics, engineering, and so forth). Apple founders Jobs and Wozniak were hobbyists with soldering irons, more in the tradition of hot rodding than systems engineering. Bill Gates was a Harvard dropout, a self-taught programmer who saw the business potential in the new microcomputers.
Not that these self-styled pioneers were positioned to draw on PARC's research; Xerox publications were few and far between, and while the Dynabook and Smalltalk work was not secretive, what was published broadly was not the sort of thing that a self-taught, garage-based hacker could work with, despite Kay's best intentions. Even today, with the accumulated layers of three decades of computing history at our fingertips, much of the PARC research comes across as slightly obscure, and much of it remains very marginal to mainstream computing traditions. The microcomputer revolution was primarily about hardware, and there is no doubt that much of its early energy was based in a kind of gadget fetishism. As this early enthusiasm matured into a market, the resulting conceptual black box was the PC as a thing on your desk: a commodity. Who could conceive of how software might become a marketable item? It must have seemed more a necessary evil than something important in and of itself, at least until the hardware market was sufficiently established for tools like VisiCalc—the first "killer app"—to be appreciated.

Translation #5: From a research focus to a market focus

The pioneers of the microcomputer succeeded most importantly as marketers, turning the hobbyist's toy into something that 'everybody' needed. In this respect their place in history is secured. The scaling-up of the world of computing from what it looked like in 1970—according to PARC lore, of the 100 best computer scientists in the world, 80 of them were working at PARC—to its size even a decade later is difficult to encapsulate, and I won't try. Suffice it to say that it complicated Kay's vision enormously. But wouldn't, one might argue, the appearance of a personal computer on every desk be right in line with what Kay was driving at? Here, seemingly, was the prophecy fulfilled; what's more, right from the beginning of the microcomputer age, advocates from both sides of the table—at schools and at technology companies—were trying to get them in front of kids.

Necessary, perhaps, but not sufficient. To look at it a little more closely, the microcomputer revolution of the late '70s and early '80s represents more of a cusp than a progression. As for the details—frankly, the early microcomputers were pretty useless: their hobbyist/tinkerer heritage made them more like gadgets than the personal media tools Kay had envisaged. Market pressures kept them woefully underpowered,14 and the lack of continuity with the academic tradition meant the software for the early microcomputers was uninspired,15 to say the least. The process of turning the microcomputer into an essential part of modern life was a much bumpier and more drawn-out process than the popular mythology suggests.

14. Although Kay's team in the 1970s foresaw a $500 personal computer, what they were actually working with cost vastly more; the transition to mass-produced (and therefore inexpensive) machines was not well thought out at Xerox. What it was possible to create and bring to market for a few thousand dollars in the early 1980s was still a far cry from the Altos.
15. This reference to inspiration refers to software traditions beyond PARC too; it was not until the early 1990s (and the publicly accessible Internet) that Unix-based operating systems—another software tradition with roots in the 1970s—made any real impact on the PC market.
The question, "what are these things good for?" was not convincingly answered for a good many years. Yet the hype and promise dealt by the early advocates was enough to drive things forward. Eventually, enough black boxes were closed, the answers were repeated often enough to begin to seem right ("yes, I do need Microsoft Word"), and the new application genres (spreadsheets, desktop publishing, video games, multimedia, etc.) were layered thickly enough that it all began to seem quite 'natural.' By the time the Internet broke through to public consciousness in the early 1990s, the personal computer was all but completely established as an indispensable part of daily life, and the rhetoric of determinism solidly won out: you really can't function without one of these things; you really will be left behind without one; your children will be disadvantaged unless you get on board. The resulting black box from this translation was the identification of the computer and the computer industry as the "engine of the economy," with the various elements of computing firmly established as market commodities. But what had happened to the Dynabook?

THE DYNABOOK AFTER XEROX PARC

Alan Kay went on sabbatical from PARC in 1979 and never came back. The period from 1979 to 1984 would witness a mass exodus from Xerox PARC (Hiltzik 1999 describes at length the complex dynamics leading to this). While Kay's former colleagues, under Adele Goldberg's leadership, worked to prepare Smalltalk for an audience beyond PARC, Kay took the opportunity to become chief scientist at Atari, which in the early 1980s was the rising star in the nascent video game industry.

Kay spent four years at Atari, setting up research projects with long-range (7-10 year) mandates, but he has described his role there as a "trojan horse": inside each video game machine is a computer, and therein lies the potential to go beyond the video game. This period at Atari is represented in the literature more as a collection of war stories and anecdotes (see Rheingold 1985; Stone 1995) than of significant contributions from Kay himself.

A few important characters emerged out of Kay's team at Atari: notably Brenda Laurel, who went on to be a leading user-interface design theorist (Laurel & Mountford 1990; Laurel 1993) and later head of Purple Moon, a software company that targeted adolescent girls; and Ann Marion, whose "Aquarium" simulation project at Atari became the prototype for the research project which would define Kay's next phase. Ultimately, though, nothing of educational significance came out of Kay's term at Atari, and in 1984, corporate troubles ended his time there.

In late 1985 Kay took a research fellowship at Apple Computer that—along with the patronage of new CEO John Sculley—seems to have given him a great deal of personal freedom to pursue his educational ideas. Kay stayed at Apple for a decade, firmly establishing a link between him and the popular company. Apple had become known for its emphasis on educational markets, which included large-scale donations of equipment to schools and research programs like "Apple Classrooms of Tomorrow" (Apple Computer 1995). Apple's presence in the education sector was key to their branding in the 1980s (as it remains today).
When Kay arrived, Apple had just released the first-generation Macintosh computer, the culmination of the work that had been inspired by the famous visit to Xerox PARC in 1979. The Mac was positioned as the alternative to the paradigm of personal computing defined by IBM's PC; the Mac was branded as "the computer for the rest of us." It was certainly the closest thing to the PARC genre of computing that the general public had seen. That the Mac was indeed different needs no re-telling here; the cultural and marketing battle between Macs and PCs (originally inscribed as Apple vs. IBM, later Apple vs. Microsoft) was the dominant metanarrative of 1980s computing. But despite the Mac's mouse-and-windows direct-manipulation interface, it remained a long way from the kind of personal computing Kay's team had in mind (and had literally been working with) in the mid-1970s. The "look and feel" was similar, but there was no facility for the user shaping her own tools; nor was the Mac intended to be part of a network of users, as in the PARC vision. Nevertheless, Kay's oft-quoted pronouncement was that the Mac was the first personal computer "good enough to critique." And, as evidenced by the number of Kay's colleagues who came to work at Apple in the 1980s, it must have seemed that the company was headed in the right direction.16

16. Core members of Alan Kay's team—Larry Tesler, Ted Kaehler, and Dan Ingalls—went to Apple Computer in the early 1980s. Kay himself moved to Atari in 1980 and then Apple in 1984. Other ex-Xerox personalities spread to other key IT companies: word-processing pioneers Charles Simonyi and Gary Starkweather went to Microsoft, as did Alto designers Chuck Thacker and Butler Lampson. John Warnock and Charles Geschke, who worked on laser printing and the foundations of desktop publishing, founded Adobe Systems. Bob Metcalfe, who invented ethernet, founded 3Com.

Kay's first year at Apple seems to have been spent writing, furthering his thinking about education and computing. An article Kay had published in Scientific American (Kay 1984) gives a sense of where his thinking was. One of the key innovations of the microcomputer revolution—and the first really important answer to the "what are they good for" question—was a software application called VisiCalc, the first dynamic spreadsheet program, introduced in the late 1970s by Dan Bricklin and Bob Frankston. The spreadsheet is an interesting example of a computing application that was born on microcomputers; it is significantly absent from the lengthy collection of innovations from Xerox PARC, and PARC designers were very impressed when they saw it (Hiltzik 1999, p. 357). The spreadsheet concept clearly impressed Kay, too, and he framed it as a key piece of end-user empowerment:

The dynamic spreadsheet is a good example of such a tissuelike superobject. It is a simulation kit, and it provides a remarkable degree of direct leverage. Spreadsheets at their best combine the genres established in the 1970s (objects, windows, what-you-see-is-what-you-get editing and goal-seeking retrieval) into a "better old thing" that is likely to be one of the "almost new things" for the mainstream designs of the next few years. (Kay 1984, p. 6)
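Kay's description of the spreadsheet as a "simulation kit" points at what makes it programmable: each cell is a little object whose value is either a constant or a formula over other cells, recomputed whenever it is asked. A minimal sketch of that idea in Smalltalk follows; the Cell class and its message names are hypothetical, and this is in no way VisiCalc's implementation:

    Object subclass: #Cell
        instanceVariableNames: 'formula'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-Spreadsheet'.

    Cell >> value: aNumber
        "Hold a constant value."
        formula := [aNumber]

    Cell >> formula: aBlock
        "Hold a rule computed over other cells."
        formula := aBlock

    Cell >> value
        "Evaluate on demand, so changes propagate to every dependent cell."
        ^formula value

    "Usage:
        price := Cell new value: 10.
        qty := Cell new value: 4.
        total := Cell new formula: [price value * qty value].
        total value.    'returns 40'
        qty value: 5.
        total value.    'now returns 50: the grid updates dynamically'"

Real spreadsheets cache results and propagate changes eagerly; pulling the value on demand keeps the sketch tiny while preserving the essential behaviour: change one cell and every formula that mentions it answers the new result.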
On Spreadsheets

The spreadsheet program remains one of the most powerful—and flexible—tools on a personal computer. The original VisiCalc was eclipsed by Lotus 1-2-3 in the early 1980s, a program which made its developers into some of the first software multi-millionaires. In 1985, Microsoft released Excel as a spreadsheet for the Macintosh, blending the spreadsheet concept with a GUI. Excel today is ubiquitous and has no serious competition. However, the range of tasks to which Excel is put—far beyond general ledger accounting—is vast, including lots of small-scale end-user programming and personal data management. A spreadsheet program like Excel is far more flexible and adaptable to a user's needs than a word processor. Interestingly, though, the basic way a spreadsheet program works hasn't really changed much in 25 years; the new versions have a lot of features, but the basic concept of a scriptable, two-dimensional grid of dynamically updated values remains.

Kay and his colleagues had come to recognize that the microcomputer was a serious force in the development of personal computing and not just a hobbyist's niche. The extent of Kay's engagement with the spreadsheet idea shows, if nothing else, that a current of new ideas from outside sources was a welcome addition, after a decade of research within PARC.

The Vivarium Project

Alan Kay's work at Apple was characterized by a single-minded return to the problem of how to bring scientific literacy to children via computing. His work on Smalltalk was over, and the corporate politics of Xerox were long gone. Kay took up his research fellowship at Apple by dedicating himself to working with kids again, something which had practically eluded him since the mid 1970s at Xerox PARC. The project which most defined Kay's tenure at Apple through the mid and late 1980s and into the early 1990s was the Vivarium: a holistic experiment in technology integration that is possibly unparalleled in its scope. Kay's team moved in to the Los Angeles Open School for Individualization—LA's first "magnet school"—and stayed there for seven years. The scale of Apple's investment of time and resources in the school, and the Open School's contributions to Kay's research, make the Vivarium a very special educational technology project. Though little has been written about the Vivarium—compared, say, with the Apple Classrooms of Tomorrow (ACOT) program, which had a much higher public profile—it has had a lasting impact on Kay's work, educational computing research, and, of course, the LA Open School.

Ann Marion was the project's manager, and the basic idea for the project had come from her Masters thesis, written while working with Kay's team at Atari: to build a game from semi-autonomous cartoon characters. She wrote extensively about the project in a summative report for Apple entitled the Playground Paper (Marion 1993). Marion decided on an ecological setting, with fish interacting with (e.g., eating) one another, and this became the core storyline behind the ambitious educational program at the Los Angeles Open School.
Within an explicitly progressivist and flexible educational setting at the Open School, Vivarium put the design and development of a complex ecological simulation in primary-school children's hands. The challenge for the kids was to create more realistic interactions among the fish while learning about ecological models along the way; the challenge for Kay's team was to extend computing technology to the children so that they could effectively carry this out. Larry Yaeger, one of the team members, later wrote:

The literal definition of a "Vivarium" is an enclosure or reserve for keeping plants and animals alive in their natural habitat in order to observe and study them. The Apple Vivarium program is a long-range research program with the goal of improving the use of computers. By researching and building the many tools necessary to implement a functioning computer vivarium, an ecology-in-the-computer, we hope to shed light on many aspects of both the computer's user interface and the underlying computational metaphor. We are exploring new possibilities in computer graphics, user interfaces, operating systems, programming languages, and artificial intelligence. By working closely with young children, and learning from their intuitive responses to our system's interface and behavior, we hope to evolve a system whose simplicity and ease of use will enable more people to tailor their computer's behavior to meet their own needs and desires. We would like untrained elementary school children and octogenarians to be able to make specific demands of their computer systems on a par with what today requires a well trained computer programmer to implement. (Yaeger 1989)

The project was as broad-based as it was ambitious; it boasted an advisory board composed of various luminaries from the world of cognitive science, Hitchhiker's Guide to the Galaxy author Douglas Adams, and even Koko, the famous gorilla (at one point, the team was engaged in creating a computer interface for Koko). The computer-based work was enmeshed in a much larger, exploratory learning environment at the Open School (the school featured extensive outdoor gardens that the children tended). Yaeger wrote:

The main research test site of the Vivarium program is a Los Angeles "magnet" school known as the Open School. Alan chose this primary school, grades 1 through 6 (ages 6 through 12), because of their educational philosophy, founded on the basic premise that children are natural learners and that growth is developmental. Based on Piaget's stages of cognitive development and Bruner's educational tenets, the Open School was seen not as an institution in need of saving, but as an already strong educational resource whose fundamental philosophies aligned with our own. With the support of the Open School's staff, some 300 culturally and racially mixed children, and our principal liaison with the school, Dave Mintz, we have developed an evolving Vivarium program that is included in their Los Angeles Unified Public Schools curriculum. (Yaeger 1989)

The LA Open School had been established in 1977, "by a group of parents and teachers who wanted an alternative to the 'back-to-basics' approach that dominated the district at that time. The group wanted to start a school based on the principles of Jerome Bruner and the practices of the British infant schools" (SRI International 1995).
It was the LA Unified School District's (LAUSD) first magnet school, mandated to pursue a highly progressive agenda, with few of the restrictions that district schools worked within. The school held 384 students (K-5) and 12 teachers, arranged in 2-year multigraded "clusters," team-taught by two teachers with 62 kids in each (BJ Allen-Conn, personal communication, Nov 2004). Ann Marion characterized the school setting:

The L.A. school we chose to work with we believed had no need of "saving." The Open School for Individualization, a public magnet school in Los Angeles, emphasizes theme-based projects around which children learn by bringing all the classroom subjects together in service of the theme. Construction projects evoke a whole person approach to learning which engages many different mentalities. In this regard we share the common influence of Jerome Bruner which is evident throughout these different activities. We find models of group work. Variety and effectiveness of working groups are to be seen of different sizes, abilities and ages, in which children collaborate and confront each other. (Marion 1993, ch. 2, p. 1)

By 1985—before Apple's involvement—the school had apparently already begun to integrate microcomputers, and the teachers had used Logo. At that time there was already a strong notion of how computers should be used: in the classroom, not in labs, and for creative work, as opposed to drill-and-practice work or games (Allen-Conn, personal communication). In that year, Alan Kay had contacted the Board of Education looking for a school to serve as a research bed; he had also inquired at the Museum of Science and Industry, and made a list of possible schools. After visiting several, Kay felt that the Open School was philosophically closest to what he had in mind, that the school was indeed the right kind of environment for his research.

Originally, Kay and Marion planned to run the Vivarium in one classroom, but principal Bobby (Roberta) Blatt insisted that any resources be used for the entire school: one class with an inordinate amount of technology would disrupt the democratic and consensual nature of the school. Kay's research could focus on one classroom (it largely did), but the resources had to be managed across the whole school (Bobby Blatt, personal communication, Nov 2004). Blatt, the teachers, and the parents at the Open School negotiated extensively with Kay to establish the terms of the relationship; while they were open to and excited by the possibilities of using technology intensively, they were concerned that Apple's involvement would change the "tenor" of the school (Blatt, personal communication); instead, they wanted the technology to be "invisible" and fully integrated into the constructivist curriculum they were creating. The focus had to be on the children and their creations. For instance, the school had a policy against video games on the school's computers—unless the children themselves created the games. Blatt reports that this spawned a culture of creating and sharing games, mostly developed in Apple's HyperCard authoring software.
In January 1986, when the Vivarium project was launched, Apple Computer installed one computer per two children,[17] began training the teachers and staff, and provided a technical support staff person at the school. The Open School also negotiated an investment by Apple in maintaining the school's arts, music, and physical education curriculum—since arts curriculum funding was under the axe in California. Kay was more than happy to comply with this, and so an investment on the order of $100,000 was made annually to the Open School for curriculum and activities that had nothing directly to do with computing, but served to keep academics, arts, and technology in balance—and which also provided time for the core teaching staff at the Open School to do group planning.

17. The computers at the LA Open School were installed inside the desks, and the desktops replaced with a piece of plexiglass. This allowed the computers to be installed in regular classrooms without making the classroom desks useless for any other use. (See Kay 1991.)

Both Blatt and long-term teacher BJ Allen-Conn (personal communication, Nov 2004) reported that once everyone got to know Kay, their fears of being overwhelmed by the technological agenda quickly abated, owing to Kay's "respectful attitude." Ongoing collaboration between Kay's group and the Open School teachers took place at monthly "brown bag" lunches and teas. By the late 1980s, the twelve teachers at the Open School were working part-time with Kay's team—10 hours per week plus 6-10 weeks per summer, as well as conferences and other professional-development events through the year (Marion 1993, ch. 2, p. 10)—and being paid consulting fees to help develop curriculum for themes at the Open School. Kim Rose claims that some of the teachers bowed out after a few years of this, simply because they wanted to have a regular summer vacation for a change (Rose, personal communication, Oct 2004), but this general pattern continued right into the early 1990s.

What is obvious here, but which bears dwelling upon for a moment, is that the relationship between Kay's team at Apple and the LA Open School was an unprecedented and unparalleled situation of support, funding, and devotion to pursuing the ends of technology integration. It is hard to imagine any other school situation even comparable to this. But rather than thinking of the Vivarium project at the Open School as any sort of model for technology integration, we should consider the Open School part of Kay's research lab. At Xerox PARC, it had been a challenge to sustain access to children and school settings, or conversely to sustain children's access to Xerox labs. At Apple, Kay's agenda seems to have been to set up a rich and ongoing relationship with a school as a foundational element of the project, and then to move forward with the technological research—somewhat the reverse of the arrangement at Xerox PARC.

The Vivarium project itself had a broad and holistic vision; it modelled both an exploratory educational vision and a way of integrating technology with education. Apple management seems to have given Kay the room to pursue a pure research agenda, but this statement needs qualification: the Open School was in every way a 'real' and applied setting.
Rather, the research Kay's colleagues conducted there seems to have had little impact on Apple's products or its ostensible presence in the marketplace, and the 'purity' of the research should be seen on this corporate level rather than on the decidedly messy pedagogical level. Ann Marion's summative report on the project goes into some detail about the practical difficulty of attempting to keep the technology subservient to curriculum ends. But Vivarium was a low-profile 'skunkworks' research project, interested in furthering blue-sky research into a wide variety of computing themes—simulation environments, animation systems, user-interface techniques (both hardware and software)—in comparison with the much higher-profile "Apple Classrooms of Tomorrow" program, which sought to deploy existing Apple technologies to schools and was focused on "technology transfer" (Ann Marion, personal communication, Nov 2004).

The Vivarium project had run out of steam by 1993, when Apple fell on hard times economically and organizationally (Kay's patron, John Sculley, was ousted as CEO in 1993). Bobby Blatt reports that Apple's pullout from the LA Open School was conducted with lots of advance warning, and that the parents were motivated to keep the same level of computer integration at the school, having "tasted the wine" (Blatt, personal communication)—a challenge at which they have apparently succeeded. In 1993, principal Blatt, looking at retirement and anticipating Apple's withdrawal from the school, pursued Charter School status for the LA Open School, which would ensure its continued autonomy; she succeeded, and it became the Open Charter School in 1994, continuing along the same lines today. Apple Computer's investment of people and research has not been duplicated. However, Kay's own team has maintained some level of involvement with the Open School (and in particular, with teacher BJ Allen-Conn) ever since. In fact, the foundation of a research group that would provide Kay's working context for the next decade was by this point established. Kim Rose, for instance, was hired on to the Vivarium project in 1986, and remains Kay's closest working partner today.

Vivarium research

The research conducted through the Vivarium years seems to have had two facets: the first, with the Open School in mind, was the creation of a simulation environment (of an underwater ecology) in which primary-school kids could act as designers and systems modellers as they developed their understanding of ecosystem dynamics. The second, which potentially had more application to Apple's own agenda, was an investigation of end-user programming, with a definition of "end-user" extending beyond the children at the LA Open School.

The simulation research drew largely on the work Kay's team had done with Smalltalk while at Xerox; the essential challenge is to figure out what sort of basic scaffolding will allow children to work at the level of design and problem-solving rather than wrestling with the syntax and mechanics of the environment (Kay & Goldberg 1976; Goldberg 1979; Goldberg & Ross 1981; Marion 1993). The Vivarium project engaged several teams of developers (often drawn from the MIT Media Lab) to try out various approaches to this challenge.
Mike Travers' MS thesis from MIT, entitled "Agar: An Animal Construction Kit" (1988), was the result of one early project. Agar provided a customizable, agent/rules-based environment for setting up autonomous virtual actors and scripting their prototypical reactions to one another. Another example is Jamie Fenton and Kent Beck's first-generation "Playground: An Object Oriented Simulation System with Agent Rules for Children of All Ages" (1989); Playground—in the Fenton and Beck version and in subsequent versions developed by Scott Wallace—was a more ambitious agent/rules system which prototyped a scripting language and environment designed for children to describe interrelationships between actors. There were numerous other prototypes, including icon-based graphical programming environments and a variety of other ideas. The Playground system eventually emerged as the dominant platform for the simulation projects at the Open School. Ann Marion characterized it thus:

The playground was chosen as our metaphor for a computer programming environment... The playground is a place where rule-governed activities have a natural place, involving play, invention, and simulation. On the playground, children assume roles which limit their behavior to that of defined and shared characters. Rules and relationships are endlessly debated and changed. The nature and structure of playground play resembles some of the strategy children might exercise on the computer, to set up computer instructions in construction and play with simulations of multiple players. (Marion 1993, preface, p. 3)

One way to think about Playground is as having a spreadsheet view, a HyperCard view, and a textual programming view, all simultaneously available, where the user can make changes in whatever view seems easiest to work with, and have all views updated appropriately. (ch. 1, p. 1)

A point which is easy to overlook from the vantage point of the 21st century is the sheer challenge of making systems of this sophistication workable on the computers of the mid 1980s. The later versions of the Playground software were developed using a version of Smalltalk for the Macintosh. This might seem like a straightforward thing to do, given the historical sequence, but the performance limitations of 1980s-era Macintosh computers meant that this must have been a constant headache for the team; what had been possible on expensive, custom-designed hardware at Xerox was not nearly as practical on relatively inexpensive Macs, even a decade later. At one point, a special Smalltalk accelerator circuit board had to be installed in the Macs at the Open School to get an acceptable level of performance in Playground. In a sense, the Vivarium project can be seen as a massive logistical challenge for Kay: how to move from a context in which all the technical facets are (reasonably speaking) within his team's control to one where one's ideas are constantly running up against basic implementation obstacles. At the same time, of course, the engagement with the Open School and the children there was far deeper and longer-term than anything Kay had experienced at Xerox.
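The agent/rules idea common to Agar and Playground can be suggested in a few lines. What follows is a hypothetical Python sketch of the general approach, not a rendering of either historical system: each actor carries its own condition-action rules, which are re-evaluated against the simulated world on every tick.

    import random

    class Actor:
        """One creature in a toy ecology, carrying its own rules."""
        def __init__(self, kind, x):
            self.kind, self.x = kind, x
            self.rules = []                      # list of (condition, action) pairs

        def tick(self, world):
            for condition, action in self.rules:
                if condition(self, world):
                    action(self, world)

    # A rule in the Agar spirit: a prototypical reaction to another actor.
    def sees_prey(self, world):
        return any(o.kind == "minnow" and abs(o.x - self.x) <= 2
                   for o in world if o is not self)

    def lunge(self, world):
        self.x += random.choice((-1, 1))         # crude pursuit

    shark = Actor("shark", 0)
    shark.rules.append((sees_prey, lunge))
    world = [shark, Actor("minnow", 1)]
    for _ in range(5):                           # advance the simulation
        for actor in list(world):
            actor.tick(world)

The point of such scaffolding is that a child works at the level of the rules (what a shark notices, how it reacts), while the machinery of ticks and world-state stays out of sight.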
Figure 5.6: Playground environment, circa 1990 (from Marion 1993). [This image has been removed because of copyright restrictions.]

The other research avenue, into end-user programming as a general topic, is an example of Kay's body of well thought-out ideas coming in contact with a wealth of related ideas from others and other contexts. Clearly, after a decade of work with Smalltalk, and having had the opportunity to define much of the problem space (of how personal computing would be done) from scratch, Kay and his colleagues from PARC had done a huge amount of thinking already. Within Apple Computer, though, were a number of people who had come at the topic of end-user programming from different perspectives. An Apple "Advanced Technology Research Note" from the early 1990s (Chesley et al. 1994) reveals a rich and fecund discourse going on within Apple, despite a relative dearth of results being released to the PC marketplace and computer-buying public. Apart from the already-mentioned spreadsheet model, the standout example was HyperCard. Kay and his team had the opportunity to engage with and learn from some substantial development efforts, and to watch how real end-users—both children and adults (teachers among them)—reacted to various systems.

What must be underscored in this discussion about Alan Kay's tenure at Apple Computer is that despite the low profile of the Vivarium project[18] and the dearth of overt outcomes (software, publications, further research programs), Kay must be credited with sticking tightly to his agenda, resisting the translation of the project into other trajectories, be they corporate or technical, as had happened at Xerox. In short, Kay appears to have fought to keep the black boxes open at all costs (Ann Marion's report supports this). Ultimately, though, by the mid 1990s, with the Vivarium project wound up and Apple in corporate trouble, Kay found himself with rather less social capital than he would have liked; without over-dramatizing too much, we might conclude that Kay's efforts to keep his research project "pure" risked its ongoing support at Apple—and perhaps beyond. The ultimate outcomes of the Vivarium project had impacts on the persons involved, and we shall see how this affects Kay's subsequent work, but it is hard to see the broader educational (or even technical) results—to say nothing of the influence—of Vivarium. This observation is, I believe, in line with the general sense of Latour and Callon's network theory: it is in being translated that a project or system gains greater currency and interconnectedness. A project which remains tightly defined perhaps does so at the expense of its larger reach.

18. The published literature on the Vivarium is very thin, given that the project ran for almost a decade. Notable is an article Kay wrote for Scientific American in 1991, "Computers, Networks and Education," which is heavy on Kay's educational philosophy and light on project details. The project gets a brief mention in Stewart Brand's popular book, The Media Lab: Inventing the Future at MIT (1987), due to the involvement of several Media Lab researchers.

Message-passing vs. Value-pulling

The Playground environment (developed twice: in the late 1980s by Jamie Fenton and Kent Beck, and in the early 1990s by Scott Wallace) ironically represents a conceptual reversal of the computing model Kay had pioneered in the 1970s. Kay's fundamental contribution to computer science (via Smalltalk) is the centrality of message-passing objects. The term is "object orientation," but Kay has noted that the more important concept is that of message passing. In a 1998 mailing list posting, Kay clarified:

The Japanese have a small word—ma—for "that which is in between"—perhaps the nearest English equivalent is "interstitial." The key in making great and growable systems is much more to design how its modules communicate rather [than] what their internal properties and behaviors should be. (Kay 1998b)

Playground, however, completely eschewed message-passing. Inspired by the spreadsheet model, Playground was made up of "objects" which had various parameters. The values of these parameters could then be looked up, and chains of effect could be built out of such retrievals (just like in a complex spreadsheet). Instead of an object actively sending a message to another object, objects continually watched for changes in other objects, just as a dynamic formula cell in a spreadsheet watches for changes in the data cells it takes as input. When the data cells change, the dynamic formula updates its own value. The architecture was perhaps apt—Playground was designed for a marine ecosystem simulation, and, arguably, plants and animals do more observation—"noticing"—of each other's states than sending messages (Ted Kaehler, personal communication, Oct 2004). Playground provided a wealth of insights, but complex behaviour proved difficult to program. Later software projects returned to the message-passing model, combined with pieces of the dynamic-retrieval model.
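The contrast just described can be made concrete. In the schematic Python sketch below (mine, not code from Smalltalk or Playground), the first pair of classes coordinates by message-passing, where the sender initiates; the second pair coordinates by value-pulling, where the watching object re-derives its state from another's parameters each time it is consulted, like a formula cell.

    # Style 1: message-passing (Smalltalk's model). The sender initiates.
    class Minnow:
        def __init__(self):
            self.fleeing = False
        def threatened(self):            # a 'message' another object can send
            self.fleeing = True

    class Shark:
        def hunt(self, minnow):
            minnow.threatened()          # control travels with the message

    # Style 2: value-pulling (Playground's model). The watcher initiates.
    class BaskingShark:
        def __init__(self, distance):
            self.distance = distance     # a parameter others may look up

    class WatchfulMinnow:
        def __init__(self, shark):
            self.shark = shark
        @property
        def fleeing(self):               # recomputed at every look-up,
            return self.shark.distance < 2   # like a spreadsheet formula

In the first style nothing happens until a message is sent; in the second, fleeing is never "set" at all, only derived, so it can never be stale. But long chains of derived values are hard to follow when behaviour grows complex, which is roughly the difficulty the Playground team ran into.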
HYPERCARD AND THE FATE OF END-USER PROGRAMMING

In assessing the fate of Kay's projects and ideas while at Apple, it is instructive to consider the contemporaneous example of HyperCard, a piece of personal media software similar in spirit to parts of the Dynabook vision. HyperCard was relatively successful in comparison, but had a decidedly different history and aesthetic. Like the dynamic spreadsheet, HyperCard forced Kay and his team to take notice, and to look very closely at its success. HyperCard's ultimate fate, however, points to larger cultural-historical trends which significantly affected Kay's projects.

Translation #6: From media environment to "Multimedia Applications"

HyperCard was a pet project of Bill Atkinson, who was perhaps the key software architect of the original Macintosh and its graphic interface (he had been part of the team from Apple that went to Xerox PARC to see Smalltalk in 1979). Atkinson's place in Macintosh history was further cemented with his release of MacPaint, the original Macintosh graphics application and the ancestor of software like Adobe Photoshop. After MacPaint, Atkinson began playing with the notion of an interactive presentation tool called WildCard. WildCard—rebranded HyperCard in 1985—was based on the metaphor of a stack of index cards which could contain any combination of graphic elements, text, and interactive buttons. The main function of such interactive buttons was to flip from one 'card' view to another, thereby making HyperCard a simple hypermedia authoring tool. HyperCard put together an unprecedented set of features—graphics tools like MacPaint, a simple text editor, support for audio, and, with the addition of a scripting language called HyperTalk in 1987, a simple and elegant end-user programming environment.

Despite what the historical sequence might suggest, and despite some similarities which appear in HyperCard (a rough object model, message passing, and the HyperTalk scripting language), Smalltalk was not a direct influence on HyperCard; rather, it apparently came more-or-less fully formed from Bill Atkinson's imagination. Atkinson had of course seen Smalltalk, and there were notable ex-PARC people including Alan Kay and Ted Kaehler at Apple (and even involved in HyperCard's development), but HyperCard and its workings were Atkinson's own (Ted Kaehler, personal communication, July 2004).

HyperCard's great innovation was that it brought the concept of hypermedia authoring down to earth; it was the first system for designing and creating non-linear presentations that was within the reach of the average PC user. A designer could put any combination of media elements on a given card, and then create behaviours which would allow a user to move between cards. The system was simple to grasp, and in practice it proved easy for users of all ages to create "stacks," as HyperCard documents were called. Key to HyperCard's success was Apple's decision to pre-install HyperCard on all new Macintosh computers after 1987. The result was a large HyperCard community that distributed and exchanged thousands of user-created HyperCard stacks, many of which took the form of curriculum resources for classrooms.[19]

19. HyperCard was used early on to control a videodisc player attached to one's Macintosh; this gave HyperCard the ability to integrate large amounts of high-quality multimedia content: colour images and video, for instance.

Alternatively, HyperCard was seen as a multimedia authoring toolkit and was put to use as a writing and design medium (or multimedium, as it were)—again, often in classrooms; the genre of multimedia "authoring" was first established in this period, indicating the design and construction of hypermedia documents in a tool such as HyperCard. Ambron & Hooper's 1990 book, Learning with Interactive Multimedia, is a snapshot of the kinds of uses to which HyperCard was being put in the late 1980s.

Not surprisingly, HyperCard was introduced very early on to the teachers and staff at the Open School, and met with considerable zeal; the teachers there could quickly see applications for it, and could quickly figure out how to realize these. The children were able to work with it easily, too.
This experience was in some contrast with Playground and the simulation environments, which, although much more sophisticated, were barely usable on the limited hardware of the day. This, in combination with HyperCard's elegant balance of simplicity and flexibility, proved to be a lesson Kay took to heart; here was a system that managed to achieve the low threshold of initial complexity that Kay had been shooting for over a decade or more.

Still, HyperCard's limitations frustrated Kay. As elegant in conception and usability as it was, HyperCard was nowhere near the holistic media environment that Smalltalk had been. And while Kay praised HyperCard for its style and its obvious appeal to users, he railed against its limited conceptual structures: "That wonderful system, HyperCard, in spite of its great ideas, has some 'metaphors' that set my teeth on edge. Four of them are 'stack,' 'card,' 'field,' and 'button'" (Kay 1990, p. 200)—the entirety of HyperCard's object model! This is not mere griping or sour grapes on Kay's part; the object paradigm that Smalltalk pioneered meant that objects were fundamental building blocks for an unlimited range of conceptual structures; to restrict a system to four pre-defined objects misses the entire point. That said, the fact that HyperCard in its conceptual simplicity was an immediate success—not just at the Open School, but with users around the world—was not lost on anyone, least of all Kay, and its influence would be felt in his later work.

HyperCard's limitations were felt by others, too. It became clear that, though HyperCard could do animation, a dedicated tool like VideoWorks (the prototype for Macromedia's Director software) was a better animation tool. Similarly, MacPaint and like graphics programs were more flexible than HyperCard (which, despite its decade-long life, never went beyond black-and-white graphics). The very idea of an all-encompassing media environment like Smalltalk, and, to a lesser extent, HyperCard, was doomed to buck the trend toward a genre of discrete application programs: individual word processors, spreadsheets, paint programs, and so on. If nothing else, a dedicated tool like a paint program was easier to market than an open-ended authoring environment. The "what are they good for" question is directly the point here. This drawing program is excellent for making diagrams; this word processor is excellent for writing letters; but what was HyperCard good for, exactly? Its hundreds of thousands of users all had an idea, but it is important to remember that none of these early users had to make a purchasing decision for HyperCard, since it had been given away with new Macs. Even HyperCard ultimately had to justify its existence at Claris, the software company spun off from Apple in the early 1990s, by claiming to be a "multimedia" toolkit, and was branded by Apple as a "viewer" application for HyperCard stacks; the authoring functionality was sold separately. HyperCard was ultimately abandoned in the 1990s.
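Kay's complaint is easier to see when HyperCard's entire object vocabulary is written out. The following is a schematic sketch of mine, not Apple's implementation, of the four fixed metaphors:

    class Field:                     # text on a card
        def __init__(self, text):
            self.text = text

    class Button:                    # a handler fired on click, typically "go to card"
        def __init__(self, label, on_click):
            self.label, self.on_click = label, on_click

    class Card:                      # one 'index card' of media elements
        def __init__(self):
            self.fields, self.buttons = [], []

    class Stack:                     # the document: an ordered pile of cards
        def __init__(self, cards):
            self.cards, self.current = cards, 0
        def go(self, n):
            self.current = n

    home, recipes = Card(), Card()
    stack = Stack([home, recipes])
    home.fields.append(Field("Welcome"))
    home.buttons.append(Button("Recipes", lambda: stack.go(1)))
    home.buttons[0].on_click()       # 'clicking' flips the stack to card 1

Everything a stack author builds must be phrased in these four nouns. In Smalltalk, by contrast, authors can define new kinds of objects, so the vocabulary itself is extensible; that is the "entire point" Kay saw being missed.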
HyperCard's relative success in the personal computing world underscores two points, which, seen through the lenses of Latour's translations and subsequent black boxes, appear thus: first is the shift from media environment to "application toolkit," with the attendant metaphors: standardized palettes, toolbars, and the establishment of the commodity "application" as the fundamental unit of personal computing. The second shift is from a generalized media environment to that of "Multimedia" applications, reifying "Multimedia" as an industry buzzword. As these concepts become commoditized and reified in the marketplace, the interesting work of defining them shifts elsewhere.

Translation #7: From epistemological tools to "Logo-as-Latin"

Seymour Papert's work with the Logo programming language had begun as early as 1968, but despite its significant impact on Alan Kay's personal mission and despite a good number of published articles from the early 1970s, Logo made very little impact on the public imagination until 1980, with the publication of Papert's signature work, Mindstorms: Children, Computers, and Powerful Ideas. The appearance of this book set the stage for a significant commercialization and marketing effort aimed at getting Logo onto the new personal microcomputers. Programming in Logo grew into a popular computing genre through the early 1980s. A look at library holdings in the LB1028.5[20] range reveals a huge surge of output surrounding Logo in the classroom in the mid 1980s. Papert and Logo had become practically synonymous with educational technology in these years.

20. LB1028.5 is listed in the Library of Congress Classification as "Computer assisted instruction. Programmed instruction"—ironic, given Papert's comments on children programming computers and vice versa.

But of course any substantial movement of an idea—let alone a technological system—into very different (and vastly larger) contexts brings with it necessary translations. In the case of Logo, this shift was in the form of branding. What was Logo, that it could be so rapidly picked up and spread across school systems in North America and Europe in just a few short years (Aglianos, Noss, & Whitty 2001)? Chakraborty et al. (1999) suggest that the effort to make Logo into a marketable commodity effectively split the Logo community into "revolutionists" like Papert, interested in a radical redefinition of mathematics pedagogy, and more moderate "reformers," who were more interested in spreading Logo as widely as possible. This means that what Logo became in the marketplace (in the broad sense of the word) was a particular black box: turtle geometry; the notion that computer programming encourages a particular kind of thinking; that programming in Logo somehow symbolizes "computer literacy." These notions are all very dubious—Logo is capable of vastly more than turtle graphics; the 'thinking skills' strategy was never part of Papert's vocabulary; and to equate a particular activity like Logo programming with computer literacy is the equivalent of saying that (English) literacy can be reduced to reading newspaper articles—but these are the terms by which Logo became a mass phenomenon. Papert, for better or worse, stuck by Logo all the while, fighting something of a rear-guard action to maintain the complex and challenging intellectual foundation which he had attempted to lay.
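The turtle geometry that came to stand for Logo was, concretely, exercises like the canonical square: in Logo, REPEAT 4 [FD 100 RT 90]. The same exercise is shown below in Python's standard turtle module (a deliberate anachronism, since the historical medium was Logo itself), simply to fix the image of what "Logo" had been reduced to in the popular imagination:

    import turtle

    t = turtle.Turtle()
    for _ in range(4):        # the canonical first turtle program: a square
        t.forward(100)
        t.right(90)
    turtle.done()

That a few lines like these could be taken for the whole of "computer literacy" is precisely the black-boxing at issue.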
It was perhaps inevitable, as Papert himself notes (1987), that after such unrestrained enthusiasm there would come a backlash. It was also perhaps inevitable given the weight that was put on Logo: within educational circles, it had come to represent computer programming in the large, despite Papert's frequent and eloquent statements about Logo's role as an epistemological resource for thinking about mathematics. In the spirit of the larger project of cultural history that I am attempting here, I want to keep the emphasis on what Logo represented to various constituencies, rather than appealing to a body of literature that reported how Logo 'didn't work as promised,' as many have done (e.g., Sloan 1985; Pea & Sheingold 1987). The latter, I believe, can only be evaluated in terms of this cultural history. Papert indeed found himself searching for higher ground, as he accused Logo's growing numbers of critics of technocentrism:

Egocentrism for Piaget does not mean "selfishness"—it means that the child has difficulty understanding anything independently of the self. Technocentrism refers to the tendency to give a similar centrality to a technical object—for example computers or Logo. This tendency shows up in questions like "What is THE effect of THE computer on cognitive development?" or "Does Logo work?"... such turns of phrase often betray a tendency to think of "computers" and "Logo" as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations—people and cultures—to a secondary, facilitating role. The context for human development is always a culture, never an isolated technology. (Papert 1987, p. 23)

But by 1990, the damage was done: Logo's image became that of a has-been technology, and its black boxes closed: in a 1996 framing of the field of educational technology, Timothy Koschmann named "Logo-as-Latin" a past paradigm of educational computing. The blunt idea that "programming" was an activity which could lead to "higher order thinking skills" (or not, as it were) had obviated Papert's rich and subtle vision of an ego-syntonic mathematics. By the early 1990s, the literature on educational technology had shifted; new titles in the LB1028.5 section were scarce, as new call numbers (and thus new genres) were in vogue: instructional design (LB1028.38), topics in the use of office productivity software (LB1028.46), and multimedia in the classroom (LB1028.55). Logo—and with it, programming—had faded. This had obvious effects for other systems—like HyperCard (Ted Kaehler, personal communication). In fact, HyperCard's rise to relative popularity in this same period (and in similar call numbers) is probably despite its having a "programming" component; its multimedia strengths carried it through the contemporary trend.

To my knowledge, there is no scholarship tracing the details of HyperCard's educational use historically, but one piece of evidence is a popular competitor (perhaps it would be better to say "successor") to HyperCard called HyperStudio. HyperStudio featured roughly the same stack-and-cards metaphor, and added colour graphics, but dropped HyperCard's elegant scripting language.
In fact, and somewhat ironically, later releases of HyperStudio incorporated a language called "HyperLogo" (perhaps to flesh out the program's feature list), though it was not particularly well integrated,[21] and there is little evidence that it made much of an impact on HyperStudio's use. Similarly, a new genre of simulation environments for teaching systems concepts (e.g., SimCalc) eschewed the notion of 'whole' environments, preferring instead to provide neatly contained microworlds with a minimum of dynamic scope; these are obviously quicker to pick up and easier to integrate into existing curriculum and existing systems.[22]

21. I had the opportunity to write high-school Information Technology curriculum materials for distance education in the late 1990s. HyperStudio was a popular resource and I was encouraged to write it into my materials. However, the HyperLogo implementation so underwhelmed me that I rejected it in favour of a plain and simple Logo implementation (UCBLogo) for a module on introductory programming.

22. Interestingly, Jeremy Roschelle and colleagues on the SimCalc project argue persuasively for a "component architecture" approach as an alternative to all-encompassing systems (Roschelle et al. 1998), but this must be read in historical context, appearing at a time when networking technology was re-appearing as a fundamental component of personal computing and monolithic application software was being challenged.

The message—or black box—resulting from the rise and fall of Logo seems to have been the notion that "programming" is over-rated and esoteric, more properly relegated to the ash-heap of ed-tech history, just as in the analogy with Latin. Moreover, with the coming of "multimedia" as the big news in early-1990s educational computing, the conclusion had seemingly been drawn that programming is antithetical to 'user-friendliness' or transparency. How far we had come from the Dynabook vision, or any kind of rich notion of computational literacy, as diSessa called it:

The hidden metaphor behind transparency—that seeing is understanding—is at loggerheads with literacy. It is the opposite of how media make us smarter. Media don't present an unadulterated "picture" of the problem we want to solve, but have their fundamental advantage in providing a different representation, with different emphases and different operational possibilities than "seeing and directly manipulating." (diSessa 2000, p. 225)

The Dynabook vision seemed further away than ever! Smalltalk, no matter what you may call it, is a programming language. Or is it? To answer that question, we first need a more comprehensive assessment of what personal computing means to us today.

Chapter 6: Personal Computing in the Age of the Web

WHAT IS A "POWERFUL IDEA," ANYWAY?

There was a time in educational computing—the time we have been speaking of so far—when talk of "powerful ideas" was the order of the day. But today, there is no mention of powerful ideas—or ideas at all, for that matter—in discussions of learning management systems, interoperability standards, or test banks. To what can we attribute this shift? Is it merely that the early exuberance of pioneers like Kay and Papert has given way to a more sober, mature perspective on the daily business of education?
Is it that we have now seen through the "technohype," realizing, after all, that we were the dupes of salespeople and other evangelists? Was all that talk of powerful ideas just marketing? Just what is a powerful idea, anyway?

Powerful ideas don't come out of thin air; they have histories, and politics. Papert pointed out—in what is no doubt something of a response to the Logo-as-Latin eulogy—that Latin was not so long ago equated with "powerful ideas": it was commonly felt that learning Latin made students think more logically, or, in today's parlance, that it "fosters higher-order thinking skills."[1] But we think such ideas quaint today. The alternative to "Latinesque" curriculum, Papert says, is "Driveresque"—that is, oriented to clear practicalities. Not quaint, to be sure.

1. Papert made these comments in a guest lecture in an undergraduate computing course taught by Alan Kay at UCLA, April 15, 2004. Papert's presentation that day was partially a reprise of his talk at the 2004 AERA conference a few days earlier.

But where do we place math curriculum along this continuum? Solving quadratic equations hardly counts as "Driveresque" for the vast majority of us. Does this mean that the teaching of such concepts as algebra is quaint? Or do we still believe there is some more general value in mathematics? Algebra is—in our time—taken as a powerful idea, and this justifies its general inclusion beyond what is merely practical in the context of graduates' day-to-day life skills. It is not so hard to point to other such powerful ideas lurking in school curriculum: liberal democracy, the atomic theory of matter, supply-and-demand economics, cell biology, reading and writing, or Maxwell's equations regarding electromagnetic fields. In the United States currently, there is substantial debate over just how powerful an idea evolution is.

The latter example points to an important consideration: what counts as a powerful idea is something that is constructed, not given. What makes an idea "powerful" is what it allows you to do; in the vocabulary of the sociology of translation, powerful ideas are related to the lengthening of sociotechnical networks. Once you have a powerful idea established within the discourse, you have access to the range of ideas—of articulations—connected to it. Powerful ideas are those that are richly or deeply connected to other ideas; these connections make it possible to make further connections and tell stories of greater richness and extent. Algebra is a powerful idea because of what it leads to: science, engineering, analysis, the generalization of quantifiable patterns; without it, one is profoundly limited in access to these realms of understanding and agency. But algebra is not a powerful idea on its own. Nor was Latin during the Renaissance; rather, the teaching of Latin represented a powerful idea because of what classical literacy led to: a whole universe of discourse.[2]

2. The teaching of Latin is easy to dismiss as an idea whose relevance has simply faded; other 'powerful' ideas with great currency in the past have been criticized for more acute reasons: Intelligence Quotients and racial hierarchy are two prominent 20th-century examples. We will no doubt in the future reject a number of 'powerful' ideas on grounds of either their quaintness or our moral indignation.

Following on this, the relevance of these ideas to education is a matter of considerable importance, historically contingent and entirely political. An educational system that places central value on algebra or evolution or Latin is taking a particular political stance with respect to the accessibility of those networks of human activity.
Such curriculum decisions are not made by appeal to their direct relevance to students' lives; there is very little immediate practical applicability for high-school students of, say, Maxwell's equations, but the connections this particular powerful idea leads to are of enormous scope. The extent to which educators present such ideas has significant political importance: who should be taught something like Maxwell's equations? Only students planning to study science or engineering in university? Everyone? Just the boys? Just the girls?

Clearly, such a decision requires the weighing of many competing priorities: a model of electromagnetic waves is only one of many important concepts vying for limited curriculum time; curriculum must also address issues of available resources, teaching time, learner context, and a host of other practicalities. But my point here is not to argue for the importance of one or another concept, but to demonstrate that "powerful ideas"—though we might disagree about which ideas qualify—are of core importance in everyone's understanding of curriculum: officials, administrators, educators, parents, and students themselves. "Powerful ideas" do not operate in any mysterious way, for they are (merely) key pieces of contemporary worldviews. The issue of which particular ideas count as powerful in any particular context is enormously complex; but save the chronically cynical among us, I do not think anyone—however materialist, pragmatist, or, for that matter, postmodern they might be—would seriously claim to have outgrown or rejected powerful ideas in general and their role in shaping our worldviews.

So why have powerful ideas been evacuated from educational computing in the past decade or so? I do not mean just the particular "powerful ideas" espoused by Papert or Kay; there has in recent years been no discussion of, or, apparently, concern with, what the powerful ideas in computing and information technology might really be—save perhaps a vague and overgeneral regard for the Internet's enormous scale. We have instead apparently abdicated the task of thinking deeply about the importance—technically, politically, and ultimately morally (any truly powerful idea has a moral dimension)—of computing. Is this not the responsibility of education, of schooling? To investigate this issue, and to perhaps come to a cultural/historical appreciation of the current malaise, we will need to take stock of the world of computing today, in the age of the Web.

THE 1990s: THE ARRIVAL OF THE WEB

I lead my analysis of the 'current' state of personal computing with a discussion of the World-Wide Web, for it is surely the defining point of (personal) computing today. The Web is, interestingly, based on technologies and ideas which pre-date personal computing. It is, in many ways, the realization of ARPA director J.C.R. Licklider's vision (1960) of a global public information utility.
It is the ARPA dream, finally writ large, with public access to time-shared[3] computing resources all over the world: the computing paradigm of the mid and late 1960s finally come to fruition twenty-five years after the fact. This little anachronism was perhaps the first and original compression of time and space that the "cyberculture" era would experience.

3. The asymmetrical, client-server architecture of the Web is essentially that of a time-sharing application, despite the existence of sophisticated computers at the client end. The server side of a Web application slices its computing resources in time for a variety of roughly concurrent distributed users.

Nonetheless, the explosion of Internet access to a wider public in the 1990s was a very exciting thing, and there came with it a great wave of creative energy. With a critical mass of online users, the dynamics of popular IT began to shift. Software, for the first time for PC users, began to flow like a fluid, rather than being distributed (and purchased) in boxes. I recall new versions of Internet programs appearing every couple of weeks, with each one doing so much more than its predecessors that I had to re-adjust my conceptual goggles repeatedly.

In 1993, it was not at all obvious that the Web would be the next big thing. It was certainly not the only contender. At the time, the fledgling Web competed for attention with a variety of other comers: there emerged at roughly the same time a similar document retrieval application called Gopher, which allowed a browser to traverse hierarchical menus leading to information resources around the world. There was a growing collection of search and indexing tools for organizing and traversing a world of FTP-based resources. At the same time, there was a flurry of activity around online games and chat—the MUDs and MOO spaces I referred to earlier—which I believed to be more interesting than web pages.
The W e b has been such a "killer app" that it is undoubtedly one of the key factors i n the eventual realization of one of the missing pieces of the " P A R C genre:" the networked P C . Before the advent of the W e b , most PCs weren't networked at all; those that were interconnected lived i n offices where the network allowed access to a particular resource: a shared printer, a file server, a database. After the W e b , a network interface became an integral part of every P C manufactured. What good is a PC without a network connection, we can now legitimately ask? Hence, the personal computer as communications medium has finally been realized. The black boxes resulting from this shift are those of the networked P C as commodity and the maga- zine/brochure metaphor for the W e b and (by extension) the Internet. Chapter 6: Personal Computing in the Age of the Web 190 Despite the Web's obvious virtues as a mass medium, two large-scale trends have shad- owed its growth as an actor i n our daily information lives. The first is that user interface has been largely reduced to the lowest c o m m o n denominator provided by W e b browsers. Instead of a thriving, pluralistic ecology of interface designs (as was apparent in the early 1990s, especially as multimedia became a hot topic), there is today just one interface, that provided by the W e b (even further: that provided by a single browser: Microsoft's Internet Explorer). That the user interface is now standardized, allowing people to negotiate new information spaces without having to concern themselves w i t h the mechanics of the soft- ware, is of course not a bad thing, not i n itself. The trouble is that we have settled for a W e b interface that is, frankly, quite crude compared w i t h the visions of personal computing, information access, and networked computing that preceded it. The W e b offers nothing but a simple, stateless query-and-response file delivery service, and the page-description language H T M L is, at its best, a numbingly simplistic, static way to represent a 'page' of information, let alone a hypertext system. A t its worst, it is a nightmarish wrong turn, given the sheer amount of H T M L - b a s e d information online today. 4 This is all detail, though. The more significant impact of the W e b as all-encompassing information technology paradigm is that it has drowned out all comers. The enormous promise of new media has been realized—for the time being, anyway—as the business of making W e b pages, of gathering W e b audiences, holding them by hook or by crook, and the stultifying instrumentalism of marketing logic. The black box of new media has been closed, for now. Translation #8: From Closed to Open Systems If there is an antidote to the unquickening of new media it is the mass growth of open systems: that the architecture of new media is based on openly published specifications and 4. HTML's historical antecedents—Engelbart's NLS system (circa 1968); Kay's Dynabook vision from the 1970s, van Dam's Inter- media system from the 1980s, to name a few—all go significantly beyond the Web's capabilities. Despite current efforts such as the "Semantic Web" (Berners-Lee etal. 
This is in fact the heritage of the Internet going back to ARPA days, and to a very real extent, it underlies the continued success of the Internet. The creation of the Internet as an open architecture, and the fact that this "end-to-end" architecture was conceived as and remains application-neutral (Saltzer et al. 1984), is an immense and far-reaching achievement, possibly on a par with the greatest public works projects of history.⁵ The Web itself, for all its faults, has at least proceeded according to similar processes; the architectures and standards that make up the Web are open; no one owns them (despite a few attempts), and the Web has come to be a kind of de facto mass publishing medium. The jewel in this crown, in my eyes at least, is the distribution of Free and Open Source Software; that is, software which is developed, distributed, and used without significant financial or marketing infrastructure in place.

Free and Open Source Software (FOSS) represents a movement that could only have happened with an Internet, and probably couldn't have flourished without something as ubiquitous as the Web. The open and widely available communications (and organizational) medium of the Web seems to have allowed the widespread distribution not just of software but of software development across thousands of geographically distributed individuals; what previously could only have been done by people working in close proximity (and therefore requiring something like a capital base) was now possible on a distributed, and in a sense casual, basis. And, while the content of this movement is, self-reinforcingly, the very software that underlies the Internet and Web, the amazing thing is that out of this matrix has come a large-scale, non-commercial model for software development and dissemination.

Developer-cum-'ethnographer' Eric S. Raymond wrote influentially of the "Cathedral and the Bazaar" as two approaches to software development. The former is the model used in industry and academic research, in which an elite class labours in isolation, periodically handing down the fruits of its efforts. The latter, which characterizes the open source software community, resembles "a great babbling bazaar of differing agendas and approaches" (Raymond 1999a), where a great deal can be accomplished by many hands working more or less together. Ironically enough (or perhaps not quite enough), the resulting "black boxes" here are rather "open boxes," in that the ideals of the free and open-source software movement exalt the ability to go to any piece of software and dig into it, to look under the hood, and to modify it as required.

5. The "information superhighway" metaphor directly recollects the American interstate highway projects of the post-war period, a public infrastructure program which clearly shaped an entire nation.
Of course, as a cultural movement, the FOSS movement has its own share of black boxes; its sacred cows are no less sacred for being "open." But in this ethic of free sharing, participatory development, and emphasis on personal empowerment we are witnessing something very different from the commodity PC industry of the past two or three decades.

THE WEB AS AN EDUCATIONAL MEDIUM

Translation #9: From learning experiences to an economy of learning objects

With the advent of cheaply available Internet and the growth of the Web as a publishing medium in the mid 1990s, educators got very excited. For the reasons outlined above, the Web presented a sort of universal multimedia platform, and the ability to access an enormous variety of resources very inexpensively. The obvious result was its widespread adoption in educational settings. In a sense, the Web as information resource was the antidote to the packaged curriculum (CD-ROM) trend of the early 1990s. Since the software for browsing the Web is essentially free, and the technology and skills required to use it are widespread, the costs of using the Web are limited to the costs of hardware and connectivity, making it an appealing choice for teachers and administrators with limited technology funds. The popular reputation of the Web as a universal library, or as access to the world's knowledge, has led to the romantic rhetoric of children reaching 'beyond the classroom walls' to tap directly into rich information sources, to communicate directly with scientists and experts, and to expand their horizons to a global perspective.

In this model, access is the prime mover; technology equals access. We use technology to get to information, or we're busy making information available to others. Under the rubric of access, "breaking down the walls of the classroom" (Hiltz and Turoff 2000), increasing choice for students or parents, "any time, any place learning" (Harasim 1990), content repurposing, the promise of integrated learning environments—part CAI-style drill-and-practice, part surveillance system (Boshier and Wilson 1998)—media convergence, and good old novelty, we have in recent years witnessed the blossoming of a substantial "e-Learning" industry. E-Learning is largely about the logistics and management infrastructure of education: about vendors, service providers, standards and standards bodies, accountability. The Instructional Management Systems Global Learning Consortium, a vast alliance of publishers, technology companies, and educational institutions, aims to provide a set of standards for the exchange and integration of all sorts of e-Learning components and services (IMS Project, n.d.). William H. Graves of eduPrise.com writes of the "ultimate goal of facilitating the acquisition of component parts from a range of suppliers in the educational value chain of nonprofit and commercial interests" (Graves 1999). The language here alone speaks volumes.

Media historian David Noble, in his "Digital Diploma Mills" (1999), lays bare the underlying structure and logic of the e-Learning industry, much to the chagrin of its participants and boosters (e.g., White 1999).
Noble points out that the boom of correspondence schools in the 1920s is being reprised today, with similar implications: a blurring of private and public institutions and offerings, a shift toward the commodification of learning materials, and economic implications such as the trend toward increasing enrollment (as opposed to completion and accreditation) as an end in itself.

Popular e-Learning buzzwords reveal aspects of the industrial and commodity-oriented nature of online education: Course Management Systems (CMS); Learning Management Systems (LMS); Managed Learning Environments (MLE). Without even stopping to unpack these Taylor-esque names, we learn that such systems typically do two things: first, they provide an environment in which a 'course author'—sometimes this is a teacher or professor, but not necessarily⁶—can assemble various pieces of 'content' into a curriculum and add various online tools and resources: discussion spaces, shared filespace, quiz 'engines,' and the like (see WebCT, the industry leader). Courseware as such blurs the author/publisher role somewhat, in that it aspires to make a universe of "learning objects" (Henderson 1999) available to a "course author" for orchestration and presentation in a given e-Learning context. The second thing that an LMS commonly does is provide tools for an instructor or administrator—in better cases, an individual learner—to 'manage' individual learning experiences with the courseware: by keeping track of which "learning objects" have been accessed, which tests and quizzes are appropriate when, and what grades are at any given point. A common ideal here is to allow the learner to 'personalize' her learning environment, or at least to customize the visual interface to it. What is seriously at issue, however, is the extent to which the learner is in fact 'managing' her own learning vs. the LMS 'managing' the learner.

6. See Bryson 2004 for a case study of some of the implications for traditional policies of academic freedom in the face of commoditization of learning experiences.

An LMS trades in standardized educational components—the "learning objects"—and clearly, the ideal for LMSes is to be able to participate in the free trade of learning objects from a wide variety of sources (IMS Project); a sort of NAFTA for lesson plans. Apple Computer's initial (late 1990s) foray into this arena was un-ironically named the "Educational Object Economy"—a standards-based clearinghouse for educational Java applets. So, the resulting black boxes of this translation are commoditized educational resources, educational standards, and the attendant level shift: away from any individual's experience to the semiosis of networked components.

How far we have come from the Dynabook! Kay's words indeed seem quaint now:

The particular aim of LRG was to find the equivalent of writing—that is, learning and thinking by doing in a medium—our new "pocket universe." (1996a)

The personal computer has been enlisted as the means to access the managed and packaged world of the learning object economy. The only thing personal about it is the ability to set preferences, change 'skins,' set bookmarks, post comments. Kay's "curriculum of user interface," in which one's interactions with the computer were to follow a personal, exploratory, constructive path, has been reduced to a stock and standardized menu of choices, in which the only exploratory and constructive options concern which link one clicks on next.
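The 'two things' that an LMS typically does, as described above, can be caricatured in a few lines of code. The following is a minimal sketch of my own, not any vendor's actual schema; all of the names in it (LearningObject, Course, LearnerRecord, and so on) are hypothetical, chosen only to make the commodity logic of the model visible.

    # A deliberately reductive sketch of the LMS model described above.
    # All names here are hypothetical illustrations, not any vendor's API.
    from dataclasses import dataclass, field

    @dataclass
    class LearningObject:
        """A standardized, tradable unit of 'content', decontextualized by design."""
        identifier: str   # an IMS-style metadata identifier, say
        supplier: str     # the "educational value chain" reduced to one field
        payload: str      # the content itself, opaque to the system

    @dataclass
    class Course:
        """First function: a 'course author' assembles objects into a curriculum."""
        title: str
        objects: list = field(default_factory=list)

        def assemble(self, obj: LearningObject) -> None:
            self.objects.append(obj)

    @dataclass
    class LearnerRecord:
        """Second function: the system 'manages' the learner by tracking access."""
        learner: str
        accessed: list = field(default_factory=list)
        grades: dict = field(default_factory=dict)

        def record_access(self, obj: LearningObject) -> None:
            self.accessed.append(obj.identifier)

    # Usage: the author assembles; the learner is tracked. Nowhere in this
    # model does the learner author, modify, or exchange objects herself.
    course = Course("Introduction to Everything")
    course.assemble(LearningObject("obj-001", "Vendor, Inc.", "<p>lesson one</p>"))
    record = LearnerRecord("student-42")
    record.record_access(course.objects[0])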
Even from the 'authoring' side, those of us involved in the design and creation of online resources and environments (educational or otherwise) are hemmed in by all-encompassing UI standards and the imperative to make things 'Googleable' by composing appropriate metadata and—most importantly—branding things with unique CamelBackedNeologisms to ensure that they survive the sea of search results. Learning has indeed become "enterprised up" (Haraway 1997, p. 70).

THE DYNABOOK TODAY: HOW FAR HAVE WE COME?

What, then, is personal computing in the early years of the 21st century? What is its relationship to education, defined broadly? I take myself and my own practices as illustrative (if not typical) here.

I sit, working through the daily management of texts and tasks and ideas and responsibilities, in front of a laptop computer. This machine is mine; I do not share it with anyone else; its hard drive is full of the accumulated debris of several years' worth of material—documents⁷ of one kind or another—that I have collected and/or produced. The machine is connected to the Internet most of the time, though I carry it around to various physical locations, and as such it acts as my own personal interface to the global network. It is, in this sense, a typical "personal computer" of this day and age.

My day-to-day practices in this laptop-mediated environment bear the traces of the past three decades of computing. The operating system on my circa-2003 PowerBook is a merging of 1970s Unix and 1980s Macintosh metaphors and systems; its software tools too are a blend of Unix-derived command-line programs; graphical, single-user Mac-style "authoring" programs; and Internet clients (electronic mail, web browser, file-sharing tools). These three realms of software rarely overlap, and to fully use a system such as this is to shift between the cultural traditions these tools represent; I am in significantly different cultural space when I am using e-mail (largely defined circa 1970), writing in a word-processor (circa 1985), manipulating a photograph in Photoshop (circa 1992), or searching the Web via Google (circa 2004).

7. That our current computing landscape is dominated by "documents" is ironic, given Xerox's ("the document company") ambivalent involvement. The docu-centric division between applications and files can be traced to the Unix software culture in ascendancy in the 1970s, which had an impact on the early microcomputer market of the 1980s. It is actually in sharp contrast with Kay's Dynabook vision, in which media objects featured both contents and computing intelligence. The "file" metaphor, when combined with the "authoring" software of the early 1980s—word processors, spreadsheets, paint programs—becomes translated to the arguably friendlier "document" metaphor, still ontologically distinct from "applications."
In terms of day-to-day productivity (defined as much by the shape and practices of the contemporary workplace as by software), Internet-based personal communications and reference-browsing make up the bulk of my actual computing practice. After this comes document creation; in my case, this is almost entirely about writing and the production of papers and reports for various purposes and audiences, for which I employ a small arsenal of writing and publishing tools, perhaps somewhat atypically, since I eschew the ubiquitous Microsoft Word on political grounds (more about that later). The practices which make up this work (drafting, revising, opening, saving, cutting, pasting, printing) are essentially those established twenty-five years ago, when word processing became a black box closely equated with personal computing. The third major aspect of my actual personal computing is one I share with an increasingly large population, and especially those younger than me: listening to music. I have, for a few years now, been listening to recorded music primarily via the computer rather than CD player or cassette-tape player, and my collection of digital music files (which take up nearly half my laptop's hard disk) is with me wherever I am. With "rings on my fingers and bells on my toes," as the nursery rhyme goes, I will have music wherever I go. Digital files—MP3s being the ubiquitous format at this point in history—are such an improvement in convenience over physical formats like discs or cassettes that one might think that this is another "killer app"⁸—one that the recording industry is notoriously having a hard time dealing with. A similar trend to keeping and playing music as digital files is digital photography, and, ever since my kids were born, my laptop has become my photo album as well.

8. As evidenced by the iPod personal music players, this trend is not lost on Apple Computer, though it is yet unclear what the relationship between iPods and personal computing is.

A number of observations on this little portrait of personal computing are in order. First, and notably, there is very little "computation" going on in any of what I have just described; the operative model is much more one of a small set of data formats (e-mail, web pages, written documents, MP3s, photos) and a software toolset for managing and sharing them. Second, the ubiquity of the Internet makes my personal computing an extension of—or perhaps a replacement for—public and private communications systems like the telephone, television, and print publishing. Third, with the exception of writing and taking photos—practices at least theoretically independent of the computer—there is almost no "authoring" or creative expression going on here at all; rather, these tasks are either formally communicative (i.e., the production of highly generic forms like documents) or relatively passive. I am inscribed as either a commentator or a consumer of digital media; despite the powerful tools at my fingertips, very little of my day-to-day computing involves creativity or even exploration.

It is instructive to examine this portrait in the light of the Dynabook vision.
Some of what Kay had in mind in the early 1970s is clearly present: the basic form factor and hardware capabilities of today's laptops are very close to what Kay foresaw: a truly portable device, connected wirelessly to the global information utility, and capable of presenting and integrating a variety of different media (text, image, audio, video). But, seen a slightly different way, the personal computer of 2005 is more akin to a TV set than a computational medium. In 1972, Kay wrote:

What then is a personal computer? One would hope that it would be both a medium for containing and expressing arbitrary symbolic notations, and also a collection of useful tools for manipulating these structures, with ways to add new tools to the repertoire. (p. 3)

I have to admit that the ways in which I "manipulate arbitrary symbolic notations" on my personal computer are few and far between. Word processing is the one notable exception; almost every other operation on digital information is one of presentation or filing (sorting, archiving, searching). Adding "new tools to the repertoire" is also underwhelming; this can only refer to purchasing or downloading new application software.⁹ I have on my hard drive software tools for creating and manipulating images, composing and arranging electronic music, editing video—each application living in its own narrowly constrained domain, poorly (if at all) integrated with other applications, even the dominant ones of e-mail, web browsing, word processing. Further, there is something troubling about the "arbitrariness" of these "symbolic notations"; in practice, they are not very arbitrary at all. Rather, they are established, standardized notations: one for text documents, another for bitmapped graphics, another for electronic music, and so on. In a data-centric world, standards for data representation (file formats, etc.) are essential to make content exchange possible. Note that this is in contrast to Kay's vision of a world of message-passing objects with dynamic and negotiated context and semantics. If our symbolic notations had the kind of "arbitrary" character Kay had in mind, we (users) would perhaps come up with new ones now and then, even on a personal level; we might blend them, modify them, experiment with them. But to even muse on such possibilities in our workaday world puts us in the land of either idle speculation or marginalized geekdom.

If my personal portrait of personal computing is underwhelming, the mainstream reality of educational computing is even more so. Educational computing must at this point be taken as a subset of personal computing—the application or recontextualization of the sorts of application and communication software I have been describing to classroom and curriculum use—and, unfortunately, often with the "personal" qualities removed or limited for administrative reasons: shared computer labs which necessarily prohibit even the sort of messy habitation of a personal computer that I have described here; top-down administration which puts severe restrictions on the use of programs and the storing of personal files; pre-packaged and pre-defined possibilities.

9. Despite my relative comfort with a half-dozen programming languages and even some substantial experience with software development, this does not fall within the realm of my "personal computing." Nearly all the software I have written myself has been for someone else, and within the client-server model of the Web. The one exception is a personal bibliography-management tool I created and still use—an anomaly in the model I have been describing.
Remember that in Kay's vision, personal computing began with the educational context, and out of this would come the day-to-day personal computing of adult professionals. Educational computing would have been the superset, the productivity tools used by adults a specialized subset of the possibilities explored by kids. We have instead got the reverse.

Vendorcentrism

One major factor Kay may not have imagined is the role that key software and hardware companies play in mediating our personal computing experiences. If we believe the common statistic that only about 1% of desktop computers use the open-source Linux operating system, we can surmise that for 99% of us, personal computing is something we do via a toolkit created and sold by a large American corporation, and in the vast majority of cases, it is Microsoft.

It is not my intent here to analyze the dynamics of a market-driven computing landscape; such a study would constitute a separate project, one at least as large as this one. But it is not much of a stretch to suggest that a truly malleable personal computing environment of the sort Kay envisioned might be very difficult for a commercial operation to market; a far safer and more established model is to offer consumers a relatively small palette of choices, and to carefully and proactively manage their expectations. Mergers of IT companies with media and entertainment firms (e.g., AOL/Time-Warner) seem to strengthen this general trend. Even without speculating on the details of corporate motives and strategies, it is clear that personal computing has since its very beginning been dominated by a very small number of companies wielding enormous power. The market dominance enjoyed by Microsoft is in practice a single point of interpretation, or, in Latour's language, an obligatory passage point: a single entity positioned so as to exercise unparalleled control over what personal computing means, what it includes, and what is possible within its horizons. Our practices with respect to IT are necessarily defined by this agency. That there exist other large computer companies does little to offset this: the contributions of the 'second string' of corporations—Apple Computer, Sun Microsystems, IBM, AOL/Time-Warner—are in most cases merely lesser versions of the Microsoft model.

"New Media" vs. "Cyberculture" in the 21st century

The opposition between "new media" and "cyberculture," proposed by Lev Manovich in his introduction to MIT Press' The New Media Reader (2003), is part of Manovich's effort to define "new media" by elaborating a historical perspective (informed largely by art history) on digital technology and its relationship to culture and cultural production. Manovich offers eight partial definitions of new media, but the first one—that it is distinct from what he calls "cyberculture"—is especially food for thought:

In my view, [new media and cyberculture] represent two distinct fields of research.
I would define cyberculture as the study of various social phenomena associated with the Internet and other new forms of network communication. Examples of what falls under cyberculture are online communities, online multi-player gaming, the issue of online identity, the sociology and the ethnography of e-mail usage, cellphone usage in various communities, the issues of gender and ethnicity in Internet usage, and so on. Notice that the emphasis is on the social phenomena; cyberculture does not directly deal with the cultural objects enabled by network communications technologies. The study of these objects is the domain of new media. In addition, new media is concerned with cultural objects and paradigms enabled by all forms of computing and not just by networking. To summarize: cyberculture is focused on the social and on networking; new media is focused on the cultural and computing. (p. 16)

In reflecting on this distinction, it occurs to me that the vast majority of academic and popular/journalistic discourse around computing (personal or otherwise) in the past decade has not been in the "new media" space, but in the sociological realm of Manovich's "cyberculture." Furthermore, it is largely the attitude toward the division of labour between experts and end-users that leads me to this identification; we talk of the "effects" of computerization, of "social impacts," of "user friendliness," of "no programming experience required," or of "making the technology serve pedagogical ends first"—clichés which inscribe, in the first place, a divide between technology and society (cf. Latour 1993), and, in the second place, further reify the division of labour between experts and users. Assumed is a class of quasi-magical designer/engineer types, who are somehow not quite human (yet), who wield enormous power, handing technologies down to the rest of us ('hand-me-downs from the military industrial complex,' according to one popular notion), who then are forced to use these inhuman and dehumanizing tools which never quite meet our predefined aims and goals. Inscribed here too are the circumscribed possibilities of a so-called resistance in which the content of cyberculture is the impotent critique of its own form—impotent because it is blind to and thus further reifies its own political-economic conditions; that is, the market as the inescapable model for all discourse.

This characterization is of course the market's own self-perpetuating reality, generated and sustained because it makes good business sense to do so, as has been proven time and time again in examples such as Microsoft's and Bill Gates' surreal fortunes. Now, this situation would be serviceable if it ended there, and the now almost traditional themes of resistance and culture-jamming and the romantic ideal of the arts could be called upon to prevail over the forces of oppression. The frightening part, however, is that as digital technology becomes more ubiquitous, our collective implication in this divide seems to become deeper. As the World-Wide Web became a daily tool and information source for larger and larger segments of the Western world in the late 1990s, the sense of collective helplessness seemed to be even more entrenched, despite the early talk of democratization that accompanied it (e.g., Landow 1992).
This notion of democratization may have been simplistic, but it was not without merit. I recall, in 1993, after being paid to create web pages for the first time, remarking that no one would ever make any money this way—that it was so simple that trained monkeys would soon be doing it; and yet, by the time the dot-com boom was in full effect in the late 1990s, budgets for websites reached to hundreds of thousands and even millions of dollars, and a new 'professionalism' turned the Internet—potentially the most participatory institution in history—into something beyond the curtain, more and more like television in most people's lives.

To ride Manovich's framing a little longer, it indeed appears that "cyberculture" has been in ascendance for the past decade or so, and "new media" has remained marginal; this parallels the difference apparent between my portrait of contemporary personal computing and what Alan Kay might have had in mind. The implications of such a cultural trend for education are profound: given its status as a spectatorial, market-dominated discursive space, what does "cyberculture" offer education, exactly? How do we ensure that it does not become the IT equivalent of "music appreciation"? This critique is, I believe, a microcosm of the much larger criticism made by Homi Bhabha of the language of "cultural diversity"—that, in the liberal tradition,

although there is entertainment and encouragement of cultural diversity, there is always also a corresponding containment of it. A transparent norm is constituted, a norm given by the host society or dominant culture which says that 'these other cultures are fine, but we must be able to locate them within our own grid' (Bhabha 1990).

In the case of cyberculture, the "transparent norm" is provided by the market logic of the industrialized Internet, which easily accommodates critique, diversity, even radicalism, while maintaining and reproducing the 'means of production': hence, cultural diversity and even resistance are articulated within the familiar bounds of Microsoft Word. This little example is facile, trivial, in comparison to the analysis of discourse and truth effects that Foucault mounted three decades ago. Do we see anyone calling this out, or working on alternative channels? In techie circles, yes; but in academia and education, no.

LESSONS FROM THE OPEN-SOURCE MOVEMENT

Unix, or Linux, is the stone-soup operating system, where everybody has brought their own contribution to the common pot.
– Tim O'Reilly, 2002

There is, thankfully, an alternative to the pervasive corporate-dominated computer industry, a growing force in the past decade. The truly interesting developments in computing in the last decade have almost entirely taken the form of radical innovations from the fringes which are then transformed by some large existing corporation or other mass of capital (more fluid in the dot-com boom of the late 1990s) into marketable form; much of the Internet's translation into a so-called 'public sphere' has been a shift on this level. But that a marginal realm even exists from which innovations can emerge, and where interpretations remain somewhat more fluid, is an encouraging thing.
The relative importance of this margin has been shored up in recent years, as well, in the reification and institutionalization of the Free/Open-Source movement. The Free and Open Source Software (FOSS)¹⁰ movement came to popular attention in 1997, when the rhetorical implications of the words "free" vs. "open" were problematized, bringing a long-standing but increasingly marginalized tradition to a point of encounter with the corporate world. This is a tradition of software developers sharing their creations freely with one another, a practice which, in certain sectors of the IT world, pre-dates any corporate or market involvement, and which became formalized in the mid-1980s by Richard Stallman (1985; 1998), founder of the Free Software Foundation, in response to what he saw as an erosion of the collaborative community of programmers he had come to know in the 1970s and 1980s.

Stallman's articulation of the rationale and ethics of sharing software put a formal face on a practice that had been widespread, though informal, for decades. The ARPA community in the 1960s widely shared the fruits of their efforts, seeing their work as an extension of the scientific/academic tradition of publishing research so that other researchers could build upon it. The underlying technologies of the Internet and most of the computing architecture surrounding it were developed and disseminated according to these ideals. The Unix operating system and the community and tradition surrounding it (since the early 1970s) most clearly embodied this model of development, since it meant that the development community was distributed, consisting of programmers and researchers from a number of different sites and institutions, some corporate and some academic. Unix arguably became the dominant computing platform of the 1980s—and ultimately the key platform for the Internet—because of this development and distribution ethic.¹¹

10. The rather awkward moniker "Free and Open Source Software" attempts to be inclusive of both the ideologically charged "free" term and the business-friendly "open" term.
But what Stallman saw in the early 1980s was a "stark moral choice" (Stallman 1998) presented by corporations increasingly interested in protecting their 'intellectual property'—Stallman believed that he must either take action to counter the trend toward corporatization of software, or find another career. His efforts, as rationalized in the GNU Manifesto (1985), culminated in two important contributions: the beginnings of a GNU operating system—a formally free version of Unix, unencumbered by typical corporate licenses¹²—and the GNU General Public License (GPL), which is written so as to do the opposite of what most software licenses do. Instead of focusing on the owner's control of the software and granting rights of use to licensees, the GPL ensures that the software's source code (that which a programmer writes or modifies) remains open and freely available, no matter who (or what corporate entity) contributes to it. It is a copyright document—as Stallman put it, a copyleft document—that subverts the traditional copyright concept, ensuring the freedom of users rather than restricting use. The GPL's secret weapon is not that it merely says you can do whatever you want with this software, but that it stipulates that anything you create based on GPL-licensed software also falls within the license—that is, the source code must always remain open. This has the effect of preserving and propagating the idea far beyond its original application. The GPL establishes a specially formulated commons; software licensed under the GPL cannot be restricted, in use or distribution; it guarantees that the source code remain free for anyone to access, use, or modify for their own ends.¹³

11. On the Unix tradition, see Eric Raymond's The Art of Unix Programming (2003)—a work of cultural ethnography at least as much as a guide to software development, which proceeds by tracing the virtues and values emergent within this now venerable culture of computing.

12. GNU is a recursive acronym for GNU's Not Unix. The project was to produce 'copyleft'-licensed components making up an entire, usable operating system based on Unix. Unix had roots in academic and sharing-friendly environments, but was actually owned by AT&T and licensed to various competing vendors. Stallman's GNU project made steady progress through the late 1980s, but the real breakthrough came in 1991, when Finnish programmer Linus Torvalds released a free Unix-like system kernel under the GPL license. Torvalds' "Linux" kernel effectively completed Stallman's GNU project, making it possible to download and install a completely free Unix-like operating system for (by then ubiquitous) Intel-based PCs.

13. The GNU General Public License (GPL) can be found at http://www.gnu.org/copyleft/gpl.html

Through the late 1980s, as the Internet was growing in size and profile, the GPL was applied to a number of key Internet infrastructure components, the result being that the Internet was largely constructed out of free software, and the tradition of free software was firmly established within the early Internet community. By 1997, GPL-licensed Internet software—and the GNU/Linux operating system—had risen to sufficient popularity that Microsoft began to comment on the threat that this development movement presented to its market dominance. Eric Raymond's strategic rebranding of the free software movement as the "open source" movement capitalized on this popularity. Raymond felt that Stallman's rhetoric, with its stark moral terms and talk of "free software," might alienate corporate America and therefore undermine the popularity of software written and released under such terms (1999). Raymond's efforts seem to have been a success, at least in terms of raising the public profile of Linux and other "open source" projects.

Historical details aside, the importance of the FOSS movement—which remains largely centered around the Linux operating system—isn't in the details of the license terms, or in the specific vocabulary used when pitching it to corporate audiences. Rather, it is in the strength and coherence of the community which has emerged around it. The FOSS movement now commands enormous "mindshare" if not marketshare (we might alternatively say that it has great discursive influence).
While remaining a very small player in the desktop-based personal computing environment (accounting for somewhere between 0.5% and 3%, if we believe statistics as reported in The Economist, April 15, 2004¹⁴), Linux and FOSS' share of the institutional and web server market is much greater: Linux accounting for between a quarter and a third, according to most sources, with particular open-source applications (e.g., the Apache webserver) boasting even higher marketshare. Moreover, the community surrounding these projects and their deployment has come to self-identify and rally around a few key ideas:

1. the promethean ideal of a viable alternative to Microsoft's near-monopoly;
2. the ideological stance about corporate ownership vs. freedom ("as in speech, not as in beer") as proposed by the GNU Project's Richard Stallman;
3. the notion that "open-source" development strategies lead to higher-quality software, as promoted by Eric Raymond;
4. the do-it-yourself example (and success) of community leaders like Linus Torvalds;
5. the economic fact that FOSS systems of very high quality can be downloaded, installed, and used for 'free.'

The fact that some of these ideas and ideals may be in tension or conflict has been analyzed extensively in FOSS-oriented online fora and in scholarly journals (especially First Monday), but the coherence of the open-source community is readily apparent: in what might be characterized as one of the central organs of the Linux community, the Slashdot news forum, the self-righteousness of the FOSS model is so often proclaimed, and deviations from the party line so roundly denounced, that the overall effect is sometimes something like a political rally. Furthermore, the amount of ongoing attention that FOSS receives in such mainstream magazines as Forbes and The Economist serves to show the impact it has on IT culture.

14. My point here is not that The Economist should be distrusted on this issue, but rather that it is very difficult to gain meaningful statistics on how many copies of a particular free software system are running. How would one count them? There are various strategies, but compared, say, to counting how many Windows licenses are sold, it remains very inexact.

Of Unix and other IT cultures

To speak of IT culture at all is to step out on a limb; there has been little if any ethnography to document the rituals, kinship structures, and worldviews of various computing-centric communities. But if, as postcolonial critic Homi Bhabha suggests (1994, p. 36), culture is knowable only in the experience of difference, then the moment of encounter with something like Linux puts it in sharp relief: something very different is going on here than in the mainstream PC market. Alternatively, if we treat—as I am wont to do—culture as history, then it is fair to say that Linux and the FOSS projects surrounding it are extensions of the much more venerable Unix tradition. Unix was originally developed by hackers at AT&T Bell Labs in the early 1970s and spread almost despite any formal efforts through the 1970s and early 1980s by individual sharing; AT&T had been judged a monopoly by the US Justice Department in the 1950s, and the regulations which followed this ruling effectively prevented AT&T from bringing software to market directly.
The popularity of Unix in the 1980s led to its being controlled by a number of licensees who marketed and protected it aggressively; this in part was the situation Stallman felt he must react to. Unix' role in the development of Internet standards, as well as the emergence of alternative Unix software under the GNU project and then Linux, set the stage for something of a renaissance in the 1990s.¹⁵

15. I am, for simplicity's sake, completely ignoring the role of the Berkeley Standard Distribution (BSD) of Unix, and its three main open-source versions. BSD Unix is technically closer to the pure Unix tradition than Linux, but the more popular Linux has become symbolic of the FOSS movement.

To know Unix (and/or Linux) well is to come to appreciate its history; this is much more important than in any other computer system in widespread use. Though most Unix and Unix-like systems today have familiar-looking windows-and-menus graphical interfaces, the "Unix way" is to type interactively in a command shell (the 1960s time-sharing paradigm lives on); the commands one issues are in many cases only understandable by reference to the historical contexts that produced them (see Raymond 2003 for a full exposition of this theme). When one encounters the Unix command tar—useful when packaging up files for backup or distribution—it is only when one realizes that it was originally shorthand for "tape archive" that it begins to make real sense, despite its continued existence and daily use on systems with no tapes anywhere near them.

Novelist Neal Stephenson wrote that "Unix is not so much a product as it is a painstakingly compiled oral history of the hacker subculture" (Stephenson 1999). A better characterization—and one that avoids the clichéd, pith-helmet anthropologist trope—comes from software philosopher Ward Cunningham, who observed that "Unix is the Latin of computer science," to which some wag replied, "Except it ain't dead yet." (WikiWikiWeb: UnixCulture). Not dead by a long shot. Unix-as-Latin is a pretty good analogy (far more apt than Koschmann's "Logo-as-Latin"). We must bear in mind, though, that this would be Latin circa 1700 or so, not Latin as it exists today, for the simple reason that a large proportion of the 'scholarly' class still speak it (Unix), and one cannot truly enter that class without knowing at least a little of it. It is difficult, obscurantist, and exclusive. It is also brilliant, elegant, and—judging from Linux and FOSS—currently in its renaissance. Its impact is huge and wide-ranging. When one stops to consider that Unix and its direct descendants (like Linux) are the fruits of Licklider's ARPA vision (almost literally kept alive through the dark ages by monks), and that they serve as the foundation stones of the Internet and the Web; that the FOSS movement has made such important waves in the IT industry (even Microsoft talks about it); and that every contemporary operating system inherits something from it (Apple's 'new' OS X is a GUI on top of a Unix foundation layer, and even Microsoft's Windows draws from Unix on several counts); one begins to appreciate its pervasiveness, and—more importantly—its resilience in the face of wave after commercial wave of 'innovation' designed to make it obsolete.
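As a brief aside on the tar example above: the tape-archive etymology survives not only in the shell command but in the programming interfaces built atop it. Here is a minimal sketch, assuming only Python's standard-library tarfile module (the file names are hypothetical placeholders):

    # Python's standard library wraps the venerable Unix tar format directly;
    # the module name itself preserves the "tape archive" etymology.
    from pathlib import Path
    import tarfile

    Path("notes.txt").write_text("a page of notes")  # a hypothetical file to package

    # Packaging files up "for backup or distribution", with no tape in sight.
    with tarfile.open("backup.tar.gz", "w:gz") as archive:
        archive.add("notes.txt")

    # Reading it back: the sequential, member-by-member interface echoes the
    # magnetic-tape medium the format was originally designed for.
    with tarfile.open("backup.tar.gz", "r:gz") as archive:
        for member in archive.getmembers():
            print(member.name, member.size)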
Nikolai Bezroukov, noted for his cutting critiques of Eric Raymond's open-source anthropology in First Monday, noted that "UNIX Renaissance OSS proved to be the most important democratic movement in software development in the 20th century" (Bezroukov 1999).

Latin it may be, but herein lies the problem with the Unix/Linux/OSS tradition as a would-be educational model: the analogy with Latin is a little too apt. The Unix tradition, like that of Latin in the Renaissance, is frankly stubborn, dogmatic, obscurantist, sexist, and exclusive. It rewards those who can, whether by sheer dint of will or by having access to the resources to struggle their way to the inside of the citadel. Unix culture has little patience with those who cannot or have not, chiding them instead with RTFM—read the fucking manual—the scathing words of the Unix culture-hero, the "bastard operator from hell." In the FOSS world, the admonition is the politer-sounding but ultimately more daunting "read the source code." The gathering places of Unix and FOSS culture, like the Slashdot forum ("News for Nerds"), are rife with arrogance, conceit, and condescension; these can be unfriendly places for the uninitiated.¹⁶

That said, there is no doubt that the tradition which comprises Unix and the bulk of the FOSS movement is a curriculum par excellence, that it constitutes a literacy in the richest sense of the word. It is a vast and multi-layered community of practice, precisely as Lave and Wenger (1991) would have it (for this angle, see Tuomi 2000; Hemetsberger & Reinhardt 2004), and it is arguably a more efficient learning environment than many schools provide, judging by the sheer scale and growth of the movement. But it is not, I argue, the kind of learning environment anyone would purposefully design, nor choose, if not for the simple dearth of technological alternatives. What is the lesson in that gap? Despite the growth in popularity of open-source tools as educational environments, and a semi-organized community of "open-source in education" advocates on the Internet, these advocates' approach seems to be practical and incremental rather than visionary.¹⁷ Unix culture is, ironically, a highly effective curriculum built on absolutely perverse pedagogical principles—chaotic at best, and positively hostile in many cases. What lessons can we draw from that?

My intent here is not to pass judgement on Unix or FOSS as they stand today, given my general admiration of them and hope for their possibilities, and given the likelihood that this tradition will continue to flourish and evolve. Rather, I mean to position the Unix/FOSS tradition as a major historical actor (or actor-network), and to point out some of the enormously generative spaces it has opened up in the past decade: a real democratization of software development, a non-commercial development and distribution model, a huge community of practice operating with virtually no formal institutional support, and an evolving ideological frame (exemplified by Stallman's copyleft) that is being energetically adapted to other realms: publishing, file sharing, and so on. And finally, I mean to point out, as Alan Kay has, that there seems to be an interesting 30-year lag between the time a powerful idea is first worked out and when it reaches the mainstream.

16. To be up front about my positionality here, I make no claim to being "uninitiated" in this culture. I have been running Linux- and Unix-based systems at home and at work since about 1997 and have participated in a number of small-scale open-source development projects. That said, the exclusive atmosphere of the community is still readily apparent to me, despite its democratic rhetoric.

17. See "Moodle" (http://moodle.org/), an open-source course management system that is gaining widespread popularity among educational institutions. See also the "Open Source in Education Foundation" (http://www.osef.org/).
To bring us back to the Dynabook theme, recall that Unix's initial incarnation precedes the first Smalltalk by only a few years; these two technological threads are close historical siblings.¹⁸ What if we were presented with an equally venerable model of computing that featured many of the good ideas inherent in the Unix/FOSS culture—openness and sharing, simplicity and modularity of design, collaborative and network-based architecture—but one which was designed with pedagogical ends in mind?

18. John Unsworth (1995) notes the apparent paradox in the 'open,' 'public,' even 'democratic' technologies springing from the "pure research" contexts of American monopoly capital—the ARPA project, AT&T, Xerox. The relative insulation from quarterly results and other "real world" economic pressures that such colossal institutions could uniquely provide gave rise to technologies of unparalleled scope, while enshrining at a deep level the structures and assumptions of liberal capital: individualism, property, labour, and power.

AUTHORING IN THE AGE OF THE WEB

The great, visible, and lasting legacy of Unix culture is, more-or-less, the Internet and the World-Wide Web. Since some time in the mid 1990s, the Web has dominated personal computing to such an extent that it has in some ways become invisible: Tim O'Reilly (1999) noted that the Web itself wasn't the "killer app" that made people want to buy a computer in the 1990s; Amazon.com was. The ubiquity of the Web, both in terms of its physical manifestation and its sociocultural embeddedness, has made it a background field, the invisible infrastructure (Star 1999) that provides the ground upon which our more focused deliberations appear. That bedrock sensibility points to some tightly shut black boxes, and it is here that I want to draw our attention.

Because the Internet, the Web, and the FOSS movement all draw on Unix culture, the conceptual motifs—genres—of the Internet and Web of the 1990s and early 2000s draw heavily on the Unix worldview; the architecture of much of the existing Internet and Web, as a result, is Unix-like. It is not surprising, given this, that Unix has experienced a renaissance in the age of the Web. Conversely, the community dynamics that surround projects like Linux are dependent on the existence of an Internet and on technologies like the Web. Indeed, it is unlikely that projects of the scale of Linux could have flourished without the kind of sociotechnical foundations provided by the Internet and Web: mailing list communities;
networked hypertext documentation and file repositories; Internet-based version-control systems; and the kind of free flow of information that Unix culture has always relied upon.

The flip side of this, which is most relevant to our consideration of the Dynabook, is that the Internet—as a vast network of Unix and Unix-derived systems—is significantly not a vast network of message-passing Smalltalk objects. And though Unix and Smalltalk projects are both products of the early 1970s, and although they share many virtues in common—openness, modularity, simplicity, to name a few—they are distinct 'paradigms.'¹⁹ Where Unix is firmly grounded in the terminals-and-mainframe time-sharing model of the ARPA project, and its programming paradigm (enshrined in the language C, a co-development of the Unix project) is one of algorithms operating on structured data residing in files (this model is so dominant and ubiquitous today that it escapes analysis for the most part—see Lanier 2006), Alan Kay's vision of Smalltalk thirty-five years ago was a reaction to and turning away from this very model, away from the data-vs.-procedures distinction which underlies it. Kay had in mind personal computing objects which interacted in a peer-to-peer fashion; instead of distributed access to centralized computing resources (the ARPA vision, which Unix implements), the Smalltalk vision is one of a radically decentralized network, in which each personal node is as powerful as any other, and where computing can occur in any combination, local or distributed.

19. I use the word 'paradigms' here advisedly, and in Kuhn's (1996) sense; I mean that beyond a superficial level, the virtues of Unix culture seem distant from a strong OOP/Smalltalk perspective, and conversely, that the virtues of object-orientation seem foreign to a 'native' Unix perspective (e.g., Raymond 2003, pp. 126ff).

We have seen fragments of this vision appear in recent years: on the Internet, truly peer-to-peer applications have begun to emerge, often rendered marginal by their questionable legal status (in some circles, peer-to-peer has come to be simplistically equated with illegal sharing of music files). And the very ubiquity of the Web and web-based applications has led to a relative downplaying of operating systems per se, at least in terms of desktop computing environments. By and large, however, the centralized model—in more contemporary parlance expressed as "client-server" computing—is dominant, despite the considerable amount of computing horsepower in the average PC. Though truly peer-to-peer computing is more feasible now than it ever has been, consider the following commonplaces: web pages are stored and served from a centralized host (e.g., an academic department, a commercial hosting provider); "web-mail" stores and manages your 'personal' e-mail on a centralized server (though a look at the corporate politics behind Microsoft's HotMail service or Google's Gmail surely raises the question of just how 'personal' these are); Google's search engine indexes the Web centrally. Even apparently distributed applications like blogs are hosted by massive centralized servers.
There is absolutely no 'technical' reason why all these activities couldn't be carried out on our own 'personal' computers in a peer-to-peer manner, but the way we have been conditioned to think about computing makes the division of labour between clients and servers seem natural. The very ubiquity of the Web as a client-server application with its standardized user interface strongly reinforces this, and in a sense undermines the 'personal' potential of personal computing. In the very simplest rendering, dealing with the world through a web browser 'dumbs down' our interactions with that world, because it casts every activity into a classical asymmetric relationship: the brains of the operation are at the server; the beauty is at the client end. It is not surprising that this model has worked out well for marketing purposes. Heaven forbid we be told that we're the brains of the operation. And, ironically, despite the apparent 'democratization' of Linux and FOSS, these technologies are quite conservative on this architectural axis; they maintain the centrality of the client-server model.

Sidebar: Objects and OOP in the 1990s

I don't mean to suggest that the Unix paradigm is opposed to the object-oriented paradigm. There are in fact many points where these ideas can be seen to intersect. While Kay's Smalltalk represents a distinctly different vision from the "Unix way," OOP itself became merged with Unix culture in the 1990s, despite skepticism about object-orientation amongst Unix devotees (Raymond 2003, p. 127). OOP gained enormous popularity and legitimacy in the early 1990s, as books proliferated and languages like C++ began to be integrated into college curriculum. C++ could be integrated into existing Unix-based contexts (both "social and computer-wise," writes Stroustrup) while offering the advantages of classes and encapsulation. Here was an OOP language which existing Unix and C programmers could easily adapt to. In 1995, the introduction of Java was perhaps an even bigger push for object-oriented programming, and it came from Sun Microsystems, a major Unix vendor. C++'s original developer, Bjarne Stroustrup, writes:

In 1995, Java burst upon the programming scene with an unprecedented style of and amount of marketing. This was the first time that the resources of a large corporation had been thrown squarely into a programming language debate. (Stroustrup 1998, p. 290)

Java's release was wrapped in idealistic, if not revolutionary, aspirations. Java took much from Smalltalk, in terms of its virtual-machine architecture and its branding as the language of the Internet. Java has been a qualified success, especially in encouraging the adoption of OOP concepts in the Unix and Internet world. But Smalltalk still remains apart; its basic model for interaction and filing is notably distinct from the Unix tradition. In a sense, Smalltalk denigrates the very idea of an operating system: "a collection of things that don't fit into a language. There shouldn't be one" (Ingalls 1981, p. 298). In this sense, Smalltalk is better compared not just to languages like C or C++ or Java but to entire operating systems like Unix or Windows.
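To make the contrast concrete: what follows is a toy sketch of my own, in Python rather than Smalltalk, of the message-passing style Kay had in mind, in which objects carry both their contents and their behaviour, and belong to peers any one of which may message any other. The names (MediaObject, Peer, receive) are hypothetical illustrations of the idea, not Smalltalk's actual machinery.

    # A toy illustration of message-passing among peer objects; an idea
    # sketch only, not Smalltalk's semantics or any real network protocol.
    class MediaObject:
        """Contents and computing intelligence travel together (cf. note 7)."""
        def __init__(self, contents):
            self.contents = contents

        def receive(self, message, *args):
            """Dispatch a named message to this object's own behaviour."""
            handler = getattr(self, "msg_" + message, None)
            if handler is None:
                return f"{type(self).__name__} does not understand '{message}'"
            return handler(*args)

        def msg_describe(self):
            return f"a media object holding {self.contents!r}"

    class Peer:
        """Every node is 'as powerful as any other': each can send and receive."""
        def __init__(self, name):
            self.name = name
            self.objects = []

        def ask(self, other, message, *args):
            # No privileged server: any peer may message any other peer's objects.
            return [obj.receive(message, *args) for obj in other.objects]

    alice, bob = Peer("alice"), Peer("bob")
    bob.objects.append(MediaObject("a child's drawing"))
    print(alice.ask(bob, "describe"))  # alice queries bob's objects directly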
Very few people actually use Linux as the client side of their computing world, and even then, Linux as a desktop environment closely emulates the Windows model and metaphors.

Objects and OOP in the 1990s

I don't mean to suggest that the Unix paradigm is opposed to the object-oriented paradigm; there are in fact many points where these ideas can be seen to intersect. While Kay's Smalltalk represents a distinctly different vision from the "Unix way," OOP itself became merged with Unix culture in the 1990s, despite skepticism about object-orientation amongst Unix devotees (Raymond 2003, p. 127). OOP gained enormous popularity and legitimacy in the early 1990s, as books proliferated and languages like C++ began to be integrated into college curricula. C++ could be integrated into existing Unix-based contexts (both "social and computer-wise," writes Stroustrup) while offering the advantages of classes and encapsulation; here was an OOP language which existing Unix and C programmers could easily adapt to. The introduction of Java in 1995 was perhaps an even bigger push for object-oriented programming, and it came from Sun Microsystems, a major Unix vendor. C++'s original developer, Bjarne Stroustrup, writes: "In 1995, Java burst upon the programming scene with an unprecedented style of and amount of marketing. This was the first time that the resources of a large corporation had been thrown squarely into a programming language debate" (Stroustrup 1998, p. 290). Java's release was wrapped in idealistic, if not revolutionary, aspirations. Java took much from Smalltalk, in terms of its virtual-machine architecture and its branding as the language of the Internet, and it has been a qualified success, especially in encouraging the adoption of OOP concepts in the Unix and Internet world. But Smalltalk still remains apart; its basic models for interaction and filing are notably distinct from the Unix tradition. In a sense, Smalltalk denigrates the very idea of an operating system: "a collection of things that don't fit into a language. There shouldn't be one" (Ingalls 1981, p. 298). In this sense, Smalltalk is better compared not to languages like C or C++ or Java but to entire operating systems like Unix or Windows.

Web authoring and computer literacy

In painting this broad-strokes history, though, it is important to avoid the sense that these stories are smooth and unidirectional. In the case of Unix, FOSS, and the Web, the key point to bear in mind is that this is a story which picks up enormous momentum and relative clarity in the early to mid 1990s. Prior to that—that is, prior to the Web—the threads of this storyline were much harder to discern. If we go back to about 1991 or 1992, we find people working in HyperCard on the Macintosh, which, for all the limitations of the existing software and hardware base of the day, was much more conceptually in line with Kay's vision (significantly, though, it lacked a networking component). Here was "authoring" in an open-ended and personal style, on the personal computer. In a nostalgic piece on the state of graphic and interactive design in the age of the Web, compared with the flowering of a multitude of design ideas in the late 1980s and early 1990s, designers Groff & Steele write of a "creative golden age" so different from today's web-oriented perspective:

What strikes us, looking back, is the sheer originality. Work from this period is fascinating because artists and programmers were required to work from a blank slate. The territory was mostly unexplored, so by definition every move required originality. (Groff & Steele 2004)

Similarly, from the standpoint of education and especially IT curriculum, the advent of the Web and "web-page authoring" is in most cases a huge step backwards (or downwards) in comparison with the radical mathematical aspirations of Logo or even the open-ended multimedia authoring of HyperCard. Yet by 1995, both Logo and HyperCard were "history"—despite marginal groups of loyalists, these systems had been all but washed away by the tide of the Web. In the process, the general thrust of "computer literacy" moved away from authoring and media production and toward information access. Now, the dynamics of information and access to it—who has it, who controls it, who provides it—are certainly of critical importance, and it would seem that this is an entirely reasonable direction for educational IT to move in. But before we consider that question answered, let us pause and consider the black box which is being closed here.

The shift away from 'media' and toward 'access' makes an assumption about the underlying infrastructure: that the means and mechanisms are good and sufficient to meet our informational ends, and conversely, that the means and mechanisms are of sufficiently low level and sufficiently uninteresting from a pedagogical perspective that we can safely forget about them, leave them to professionals, and attend to the higher levels of getting to the content. The extent to which we willfully forget Marshall McLuhan's cardinal insight here is noteworthy; but this is, I argue, the deal we struck in the 1990s, when we adopted the combination of web browser, HTML, and client-server architecture as our basic means of encounter with the digital world. What we get in return is a world packaged as web pages.
Now, considering that Google currently claims to index ten billion web pages, this might not seem like a bad arrangement. But let us continue: what we gave up in the bargain is the ability to critically engage with the form and structure of that information, which we have left largely (though not entirely) to packagers from Microsoft to AOL/Time Warner. Black boxes, of course, are all about such bargains; I contend, however, that from the educational perspective at least, this box has been closed too soon. Andy diSessa writes:

To be direct, information is a shockingly limited form of knowledge. Unfortunately, our common culture seems insensitive to that fact. We even call the computer's influence the "information revolution." There is an information revolution, but it is not the basis for a revolution in education. As turbulent and big as the information revolution has been, I think the truly interesting (and maybe bloody) part of the revolution as regards computational literacy has yet to begin, and society is less well prepared for it. (diSessa 2000, p. 219)

Admittedly, there is more to the Web than web pages per se, and beyond that, there is certainly more to the Internet than the Web. And of course the Web in its monotonous Webbiness is also a thing of amazing scope and quality and variety. The point I am trying to make is that the culture of the Internet has its centre of mass in a spectatorial/consumer-oriented milieu, and that most thinking about IT in education has bought into this wholesale. A key example of this is the online "economy" of "learning objects"—this refers to a large-scale educational technology trend, in which a set of interoperability standards (e.g., the IMS Project) have been established which enable and encourage the development and exchange of reusable, modular, multimedia educational content.20 The ideal for such learning objects seems to be interactive Java applets (that is, small, Java-based programs that run within a web page) which simulate some complex system or model and allow learners to explore its behaviour. There is no doubt that many such educational applets have been created, exchanged, and indeed re-used in various contexts. But some professional programmer (or, better but far rarer, a teacher) somewhere "authors" such content, while the learner's engagement with the content is restricted to her viewing or playing with it. This is a traditional one-to-many publishing model; despite the formidable superstructure of technologies, traditions, standards, and frameworks which provide the learner with these opportunities, it amounts to little more than 'interactive textbooks'—with the control remaining with the publishers and the audience left as an undifferentiated mass. The economics of the publishing (or broadcasting) model are too powerful to ignore: one generates an audience in order to sell access, not to the information, but to the audience.

20. Norm Friesen's excellent article, "What are Educational Objects?" (2001), puts in sharp relief the relationship between interoperability and decontextualization in the learning object economy.

One cannot deny that the Web is indeed a distributed authoring environment, in that anyone anywhere can put up a web page on their local server.
Furthermore, no single agency is truly in control of what content appears. As a result, we have a proliferation of what might be termed "artistic" or "expressive" uses of the Web: HTML-based web pages, extensive graphic design work, JavaScript programming (scriptable web pages), and visually intense Flash animations and games. It would be dishonest and unfair for me to dismiss the wealth of this kind of online expression out of hand; I will instead merely point out that the genres of Web-based authoring remain very conservative, after more than a decade of the Web, and that the forces leading toward the one-button publishing model (which basically limits user-level Web authoring to forms like blogging) are strongly in the ascendant. Is this just nostalgia for a golden age? Before standardized platforms, so much more was possible—this is of course a truism. No doubt there is an element of nostalgia in my analysis, but let us remain focused on the black boxes, and on which ones we might benefit from keeping open.

WHY NOT SMALLTALK? WHY NOT THE DYNABOOK?

We are today the heirs of two dominant traditions of computing. First, the Unix tradition is manifest in our Internet technologies and in much of the programming/development culture—especially that of the FOSS movement. The second is a "personal computing" tradition which can be traced academically back to Xerox PARC, but which bears more the stamps of the industrial and marketing forces that shaped the personal computer as a commodity in the 1980s and 1990s. These two traditions are in significant tension with one another. The Unix tradition prizes depth of knowledge and inner workings, and almost delights in its arcana; in this tradition, software is more important than hardware. Its practitioners, especially in the FOSS movement, value a do-it-yourself ethic and the free sharing of the fruits of their efforts, which become the scaffolding for others' work. The personal computing tradition, on the other hand, has become a consumer-oriented tradition; here, (buying) hardware is more important than software. It is in many ways a discourse of anxiety: how to deal with complexity, inundation, threats, spam, viruses? The answer is to buy the next version, the latest offering. These two traditions are at this point entirely intertwined: the Unix tradition relies on the ubiquity of cheap hardware and network access provided by the personal computing market; the latter relies on the former for the network itself, the means of using it, and, in a broader sense, innovation. The web browser is the canonical interface between these two worlds.

There is, as I have attempted to demonstrate, a third tradition—or at least the seeds of one—nascent, latent, possible. This is the tradition of the Dynabook. It differs markedly from the two dominant traditions today, and as such it offers antidotes to the major failings of each. What the Dynabook offers to the Unix tradition—with which it is contemporaneous and shares many key virtues—is a superior vision of the user. Alan Kay's key insight in the late 1960s was that computing would become the practice of millions of people, and that they would engage with computing to perform myriad tasks; the role of software would be to provide a flexible medium with which people could approach those myriad tasks.
Unix, in contrast, has always been a system for systems people;21 despite the democratic rhetoric of the FOSS movement, Unix has no serious potential to become the computing environment of the masses, and it is not difficult to see the cultural/historical dynamics working directly against this. Relatedly, what the Dynabook offers to the "personal computing" tradition is also a superior vision of the user, but in this instance, the difference is that the Dynabook's user is an engaged participant rather than a passive, spectatorial consumer—the Dynabook's user was supposed to be the creator of her own tools, a smarter, more capable user than the market discourse of the personal computing industry seems capable of inscribing—or at least has so far, ever since the construction of the "end-user" as documented by Bardini & Horvath (1995).

21. To be fair, Smalltalk-80 has also been primarily a system for systems people.

This is all simple enough. Why has the Dynabook vision not prevailed? Or, prevailing aside, why is it so very marginalized? The short answer is elaborated in the "sociology of knowledge" literature. It is fatalistic in a historically contingent way: the circumstances surrounding the emergence of these computing traditions, with their associated virtues and vices, led to a particular historical unfolding; once cemented in their extensive networks (marketing, manufacturing, popular discourse, journalistic coverage, and the pedagogical process of initiating new project participants), the 'ecological niche' possibilities for other options diminished. This process is nicely described in the "social shaping of technology" literature (e.g., Bijker & Law 1992), in which the manifold possibilities of a new technology are completed and subsequently reified by social factors. It also works well with the notion of the "irreversibility" of techno-economic networks raised by Michel Callon (1991). Personal computing has thus become "a network whose interfaces have all been standardized," and which therefore "transforms its actors into docile agents and its intermediaries into stimuli which automatically evoke certain kinds of responses" (p. 151).

Is that enough? Perhaps not. To end the story—and the analysis—here is to miss much of the rich and interesting detail; it is to revert to the agnostic stance, in which technocultural phenomena are judged 'behaviourally,' by their empirical historical impact or by the extent of their propagation. It is to miss—or willfully ignore—the questions of what makes a good idea good, a powerful idea powerful. What makes us able to recognize a good or powerful idea, and conversely, what prevents us from that recognition? Why should we want to sidestep those issues? We have to go deeper, into the specifics of the thing.

The technocentric answer to the question of why the Dynabook vision has not prevailed is one of implementation engineering: the Unix (and, by extension, the programming language C) tradition makes a virtue of prying computing efficiencies—and therefore speed—out of language abstraction. C strikes an extremely different bargain with a programmer than Smalltalk does. C (and languages like it: Pascal, C++, etc.) allows a trained programmer to achieve computing efficiencies not too far off writing in assembly language (i.e.
very low level, close to the machine) while giving that programmer a reasonably expressive language. In contrast, Smalltalk (and Lisp before it) are languages which begin with abstraction: they are mathematically derived, and the implementation details follow; more important are the conceptual flexibility and scaffolding opportunities afforded the programmer. The result is that though they allow vastly greater and more flexible abstraction and open-endedness, dynamic languages like Smalltalk and Lisp simply produce slower-running software than languages like C, all other things being equal. This issue, and the debate over whether it is preferable to conserve processor time or programmer time, has existed since the early days of Lisp; but, more practically, where this issue has hit the marketplace, especially in the early days of microcomputers, the need for low-level performance has dominated.22 (The sketch below makes the terms of this bargain concrete.)

22. Recall that while Alan Kay was at Apple Computer, it was all they could do to wring barely usable performance levels out of 1980s-era Macintoshes running Smalltalk. Their experience hardly inspired anyone to adopt such technology, yet they continued to use it—even to the point of developing a specially optimized version (Smalltalk-V) for the Vivarium Project—despite the practical issues of performance.

A more interesting and culturally situated treatment of this divide is explored from within the Lisp community in Richard Gabriel's (1991) article, "The Rise of 'Worse is Better,'" which suggests that the virtues of simplicity, correctness, consistency, and completeness are in different proportion in different systems-design communities, and, by extension, that the definition of these qualities changes somewhat according to their interrelationship. Gabriel identifies the Lisp school of design (the "MIT approach") as the right thing, with a particular formalist rendering of these four qualities, putting correctness and consistency foremost. The "worse-is-better" school of design, however, places a higher priority on the virtue of simplicity, which brings about a refiguration of the other three:

Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the New Jersey approach. I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach. However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach. (Gabriel 1991)

Without getting into the details of Gabriel's analysis, and to say nothing of the generations of rebuttals and counter-rebuttals he and others have written, it is informative to see that the issue can be quite effectively addressed from the inside of a particular cultural context, and that such expositions tell us something quite different from an external analysis.
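The C-vs-Smalltalk bargain noted above is visible in miniature in a few lines of ordinary Squeak-style Smalltalk (the example is mine; the semantics it relies on are standard):

    "Even arithmetic is a late-bound message send: the object 3 receives
     the message #+ with the argument 4, and the method actually run is
     looked up at run time, according to the receiver's class."
    | things total |
    things := OrderedCollection new.
    things add: 3; add: 3.5; add: 2/3.    "an Integer, a Float, a Fraction"
    total := things inject: 0 into: [:sum :each | sum + each].
    "each + binds to Integer>>+, Float>>+, or Fraction>>+ as required;
     no type is ever declared"

Programmer time is conserved, since the summing block never mentions a type; the cost is a dynamic method lookup on every send, which is precisely the efficiency that C declines to give up. Gabriel's two "schools" can be read as two settled attitudes toward that trade.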
That morally charged vocabulary like "right thing" and "correct" can be used unproblematically (or at least constructively) in a consideration of why technological systems tend toward one or another historical trajectory is key: this is a long way from the "social shaping" approach, in that it allows the virtues of the ideas themselves—or at the very least their incarnations in systems—to speak alongside more straightforward social and cultural factors. It speaks to significance and meaning-making that are emergent from technocultural systems and their attendant virtues. That said, there is nothing here to suggest that there is only one "insider" perspective; perspectives are emergent; they rise and fall, as discursive patterns come and go, are reinforced and reiterated or wither and fade away. Gabriel's discussion is not an argument for C and Unix, but rather an assessment of the fate of Lisp in a C/Unix-dominated world. This assessment has a particular character today which is decidedly different from what it looked like in 1975 or 1985. This is an ecological shift, though, as opposed to an argument being won or lost on objective merits. Note that even a technocentric analysis, taken to sufficient depth, necessarily becomes one of ecology.

The Dynabook: existence and essence

The Dynabook—like Smalltalk itself—is no more or less marginal an idea today than it ever was, despite there being better supports—technological and cultural—for it than there have ever been before: computer hardware is inexpensive and ubiquitous; the Internet is widespread and more or less faithful to its original concept; there exists now a set of norms and institutions governing the widespread sharing of software; and, though this way of expressing it has fallen out of favour, "computer literacy" is a matter of practical concern to millions of people, young and old alike. The extension of networks—in both the literal sense and in Latour's sociotechnical sense—is far greater and more powerful. If there was ever a time for the Dynabook to succeed in the world, surely it is now.

But the cultural milieu—the realm of ideas and meanings and the relative significance of things—is far from favourable to the Dynabook today. What is recognizable as a powerful idea has shifted significantly; what counts as a virtue or a vice in this cultural milieu has shifted since the mid 1970s. On some level that is not at all technical—nor even technocultural—the Dynabook seems very far apart from the values and concerns and practicalities of today's world, today's schools, today's students. I can argue here that this is an unfortunate thing, but there is little I can do to bridge the gap conceptually.23

23. DiSessa articulates a similar theme with reference to the difficulty of obtaining funding for computational media projects in a world in which so many of these black boxes are closed: "The self-evident state of the art blinds people to other possibilities" (2000, p. 241).

In his evocative and stylish case study, Aramis: For the Love of Technology, Latour goes so far as to raise the question of whether or not Aramis, the ill-fated rapid transit system, exists:

Chase away the people and I return to an inert state. Bring the people back and I am aroused again, but my life belongs to the engineers who are pushing me, pulling me, repairing me, deciding about me, cursing me, steering me. No, Aramis is not yet among the powers that be. The prototype circulates in bits and pieces between the hands of humans; humans do not circulate between my sides. I am a great human anthill, a huge body in the process of composition and decomposition, depending.
If men stop being interested in me, I don't even talk anymore. The thing lies dismembered, in countless pieces dispersed among laboratories and workshops. Aramis, I, we, hesitate to exist. The thing hasn't become irreversible. The thing doesn't impose itself on anyone. The thing hasn't broken its ties to its creators. (Latour 1996, p. 123)

Surely the ontological status of the Dynabook is no different. Aramis too comprised many powerful ideas, but powerful ideas alone didn't make it real. Despite three decades of almost constant attention by Kay and his circle of devotees and developers, the Dynabook exists only tenuously: implied in the marginal survival of Smalltalk, echoed in our wireless laptops, remembered by a small cadre of developers, but surely in danger of total collapse as soon as guiding hands fall away.

Or is it? Does Latour's parable of the irrevocable, tragic reversibility (pace Callon) of technosocial systems fully apply? At some point, the Dynabook's continued play for existence (if not actual existence) over thirty-five years puts it in a different class than the much shorter-lived Aramis project. Does this make the Dynabook more alive—or just more undead, a better candidate for the B-movie horror genre? And what of the myriad facets of the Dynabook which have been realized and even reified in any number of successful forms, from the various Smalltalks to PowerBooks, Photoshop, and peer-to-peer networks? Bits of the Dynabook surely live on—though mostly far removed from educational concerns. But if these surviving elements are no longer connected to an educational project, let alone an educational vision, then we have to ask whether the Dynabook has been translated into unrecognizable form, into non-existence.

The bigger question which emerges here is perhaps not in what ways the Dynabook experience resembles or does not resemble Latour's account of Aramis. The bigger question, it seems to me, is whether Latour's technosocial ontology has everything it needs to account for a phenomenon like the Dynabook. Latour, for one, seems to have no time for anything resembling a "powerful idea"—such things, when they appear at all in a story like Aramis', seem to come and go, either engineers' fantasies or mere epiphenomena of Latour's concrete, materialist philosophy. Such materialism makes for tidy technosociologies. I am not convinced, however, that it does justice to the experience of technosocial actualities.

Where this story needs to go next is on to the Dynabook's latter-day resurgence, in a project called Squeak, which arose in Kay's final days at Apple Computer in the 1990s, and which has carried his project along ever since. Whether the Dynabook is real—in any meaningful sense—surely has something to do with the reality of Squeak.

Chapter 7: Squeak's Small but Mighty Roar

In the back of our minds, we all felt that we were finally doing what we had failed to do in 1980.
- Dan Ingalls, 1997

The story related so far is unremarkable insofar as it tells of the rise and fall of a cultural object—an idea whose time has come and gone; a technology which has become obsolete; an ideological or philosophical stance no longer relevant in a changed world. If the story were to end there, this finality would be an easy conclusion, and we would be in a position, like Latour in Aramis, to ask—in the past tense—who or what killed the Dynabook?

What makes this story more complicated, and ultimately more important to present-day concerns of educational technology, the political landscape of the digital sphere, and even approaches to the sociology of knowledge, is the significant re-emergence of the Dynabook project from Alan Kay's team in the late 1990s: a new Smalltalk environment called Squeak. Not merely a re-release of "classic" Smalltalk, Squeak represented a re-contextualization of Smalltalk in the age of the Web, multimedia, and the free/open-source software movement. Squeak symbolizes at least the persistence of the Dynabook vision, and it exists as a kind of proof of concept for the applicability of Kay's 1970s-era research in a later age. The Squeak project and the communities surrounding it provide interesting fodder for an actor-network approach to technocultural history: on the one hand, Squeak's trajectory and the challenges of establishing networks of support and currency make for straightforward actor-network exposition; on the other hand, the extent to which Squeak embodies ideas which have been lurking, latent, without supporting networks, for close to two decades presents challenges for this mode of sociology—it is difficult to account for Squeak and its relative success from a purely materialist frame.

SQUEAK: A RENAISSANCE SMALLTALK

By 1995, Alan Kay's momentum had slowed considerably. The Vivarium Project, which had provided a development and research focus through the 1980s, had "run down" by 1993, and the software research that followed it amounted to little. The patronage of Apple CEO John Sculley ended in 1993. After a decade of struggling to regain the energy of the 1970s, the mid 1990s must have been a frustrating time. I saw Kay deliver a keynote speech at the third World-Wide Web conference at Darmstadt in 1995; wielding his videotape of Douglas Engelbart's 1968 demo, in which the software pioneer showed a mouse-driven, networked hypermedia system, he chided delegates not to be so proud of the Web, and made an appeal for a network of message-passing objects instead. After his talk, he was set upon by dozens of young developers dying to know what he thought about Java, Sun Microsystems' brand-new OOP language, positioned very much as a Web technology. Kay remained guarded.

The same year, Dan Ingalls, who had been the chief architect of Smalltalk at Xerox PARC, returned to Kay's team. Ingalls had come to Apple Computer in the mid 1980s to do research work on Smalltalk, but this lasted only a short time; Ingalls left Apple, and the IT industry entirely, for almost a decade. In the early 1990s, he took a job at Interval Research, working on a Smalltalk-based home-media project. In late 1995, Ted Kaehler began coaxing Ingalls to come back to the team at Apple.
Ingalls' work at Interval had required him to come up with a Smalltalk implementation, which he did using the relatively openly licensed Apple Smalltalk and the program listings published in Adele Goldberg and Dave Robson's 1983 book, Smalltalk-80: The Language and its Implementation—the book which had been the culmination of Xerox PARC's release of Smalltalk into the wider world. Musing over a reunion with Kay and Kaehler at Apple, Ingalls felt that he would need to re-create a Smalltalk version again, and, in a flash of insight, he realized that what he had done at Interval could be accomplished mechanically. Ingalls joined Kay's team at Apple in late 1995, and immediately began this work.

"Back to the Future"

In a 1997 conference paper, Ingalls and a few of his long-time associates wrote:

In December of 1995, the authors found themselves wanting a development environment in which to build educational software that could be used—and even programmed—by non-technical people, and by children. We wanted our software to be effective in mass-access media such as PDAs and the Internet, where download times and power considerations make compactness essential, and where hardware is diverse, and operating systems may change or be completely absent. Therefore our ideal system would be a small, portable kernel of simple and uniform design that could be adapted rapidly to new delivery vehicles. We considered using Java but, despite its promise, Java was not yet mature: its libraries were in a state of flux, few commercial implementations were available, and those that were available lacked the hooks required to create the kind of dynamic change that we envisioned.

While Smalltalk met the technical desiderata, none of the available implementations gave us the kind of control we wanted over graphics, sound, and the Smalltalk engine itself, nor the freedom to port and distribute the resulting work, including its host environment, freely over the Internet. Moreover, we felt that we were not alone, that many others in the research community shared our desire for an open, portable, malleable, and yet practical object-oriented programming environment. It became clear that the best way to get what we all wanted was to build a new Smalltalk with these goals and to share it with this wider community. ("Back to the Future," Ingalls et al. 1997)

This "new Smalltalk" was Squeak, released from Apple Computer's research labs in October 1996, as Dan Ingalls made a short announcement on the comp.lang.smalltalk Usenet newsgroup: "Squeak—A Usable Smalltalk written in itself" (Ingalls 1996). The "back to the future" phrase was particularly apt; not only had Alan Kay managed to regain his core Xerox-era technical team in Ingalls and Kaehler, but they were able to pick up where they'd left off, nearly two decades before.1 The Squeak project is notably not a necrophilic dredging of the glory days; its prime motivation was a profound dissatisfaction with the tools available, as well as the knowledge that what they had had in the late 1970s had not been equalled.

1. Ingalls later reflected on the split (between an educational focus and a systems programming focus) that seems to have emerged at Xerox PARC after Smalltalk-76 was created. According to Ingalls, Kay had suggested at the time that the more child-friendly Smalltalk-72 could be implemented within Smalltalk-76, and Ingalls agreed: "But the thing is, I didn't do it." With Squeak, first priority was given to educational and personal media tools like animation, sound, and child-friendly programming.
Kay had spent, by this point, over a decade trying to wrestle Apple's systems into service for his educational projects, using various 1980s-vintage Smalltalk implementations, simulation environments like Agar and Playground, and Apple's own tools (like HyperCard), without much success. In 1995, Sun Microsystems' Java appeared, drawing in large part on Smalltalk's vision: a network-aware, object-oriented system that could serve as an interactive multimedia platform for the Web. But Java failed to capture Kay's imagination; it was far too inelegant for him: "Java is the most distressing thing to hit computing since MS-DOS," he reportedly said at a 1997 conference (Guzdial 1997).

What Kay's team at Apple wanted was clear enough: something as elegant as the Smalltalks they had worked with at Xerox, but brought up to date, with the modern Internet and modern multimedia in mind (colour, sound, video). They needed something like Java, but not the huge, byzantine system that Java was becoming. Java was distressing because it delivered on some of the promise—a network-aware, object-oriented system—but without the tidy conceptual elegance (e.g., the kernel expressible in "half a page of code") which could make it usable (and extensible) by people other than professional software engineers. Of course, Java's popularity soared, and with it its complexity.

Ingalls' aesthetic addressed this, too; he wanted to write a Smalltalk entirely in Smalltalk, to make the system completely self-contained. This would accomplish two very important things. First, it would allow the system to be capable of evolution: if the base implementation needed to change, this could be accomplished 'from within'—Ingalls wrote Squeak in Apple Smalltalk-80, and as a result, future versions can be written within Squeak itself. Second, a self-contained Smalltalk implementation would be vastly more "portable": Squeak could be implemented on a Mac, on a PC, on a Unix workstation, or what have you.2 Ingalls' goal was to create a bootstrappable Smalltalk that would be free of dependency on any other platform—or vendor.

2. Ingalls' paper "Back to the Future" details exactly how this "Smalltalk written in itself" was accomplished. The practical results are impressive: the team at Apple had Squeak working in a mere 16 weeks (Ingalls et al. 1997). When the code was released onto the Internet, ports to Unix and Windows appeared within 3 and 5 weeks, respectively (Ingalls 2001).

The larger context for this development was the fate of Apple Computer, which in 1996 could not have looked darker. After long-time CEO John Sculley's departure from the company in 1993, Apple's corporate health was in crisis. By 1996, frequent senior-management changes, big layoffs, and mismanagement had pushed Apple's stock price lower than it had been in ten years, descending into a trough from which it would not recover until 2000. While it has been fashionable for computer-industry pundits to predict Apple's imminent demise ever since the company's founding, at no time did it look more likely than in the mid 1990s.
Microsoft had just released its enormously popular Windows 95 operating system and was in a period of unprecedented growth. Many people—including Apple insiders—believed that Apple's days were numbered.

[Sidebar: the text here is largely illegible in the source. It concerned the precariousness of Kay's position (whether Apple Computer would collapse or not, Kay's research funding was in jeopardy) and included a recollection from developer John Maloney of working with Kay's team at the time.]

By throwing in their lot with Apple, Alan Kay and his colleagues had in a sense chosen sides in the 'computer wars' of the 1980s; it had begun as an Apple vs. IBM battle, but by the mid 1990s, Apple's adversary was clearly Microsoft. By 1996, this must have looked like a dangerous predicament. The development of Squeak—a portable implementation of Smalltalk—and Kay's work to secure an open-source license agreement for it ensured that they had an escape pod. Though the development work had been done at Apple, and had relied largely on Apple resources, Squeak would be in no way dependent upon Apple; if Apple were to 'go under' and Kay's team to find a new home, at least the fruits of their efforts would escape with them.

As it happened, Kay and his research team did indeed jump ship: in October 1996, just as Squeak was publicly released on the Internet, Kay and his colleagues left Apple to join Walt Disney Imagineering, recruited by the head of R&D there, film and theatre designer Bran Ferren. Kay and the Squeak team would remain at Disney for another five years. Apple Computer, of course, would survive to make blue computers and iPods for a waiting world.

Squeak's release to the Internet was, in a sense, the culmination of a project begun in 1979, when Adele Goldberg and colleagues set about preparing Smalltalk-80 for release to the world beyond Xerox PARC. But the computing industry of 1980 had evolved limited options for the general release of technologies. Goldberg's aspiration was for Smalltalk to be released as widely as possible, and in fact a number of the original licenses were sold for $1 (Goldberg 1998). However, the entanglements between Xerox Corporation, ParcPlace Systems (the company spun off to license Smalltalk-80), and the limited number of interested licensees meant that Smalltalk-80 remained a marginal technology. In 1996, however, with a widely available Internet and a growing movement in free and open-source software, Squeak had the opportunity to have a truly wide reach.

The initial results were encouraging. By the end of 1996, external developers had, sight unseen, picked up Squeak and ported it to Windows and Unix platforms, confirming Ingalls' belief in its platform independence (the October 1996 release had been a Mac application). Today, Squeak runs on more varied computing platforms than almost any open-source technology (over 20, according to viewpointsresearch.org).
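The mechanics behind "written in itself," as the "Back to the Future" paper describes them, account for this portability: the virtual machine (the bytecode interpreter and object memory) is itself a Smalltalk program, written in a restricted subset of the language that can be mechanically translated to C, so it can be developed and debugged inside a running Smalltalk image and then emitted as portable C source from which each platform's VM is built. A deliberately oversimplified sketch of the interpreter's shape (the names here are illustrative, not the actual Squeak VM code):

    "Sketch only: the real interpreter is a large Smalltalk class which is
     translated mechanically to C to produce each platform's virtual machine."
    Interpreter >> interpret
        "The classic virtual-machine loop: fetch a bytecode, dispatch on
         it, repeat."
        [true] whileTrue:
            [self fetchNextBytecode.
             self dispatchOnCurrentBytecode]

Porting Squeak thus reduces, for the most part, to recompiling generated C, while evolving Squeak remains ordinary Smalltalk programming 'from within.'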
Portability and openness were just two of the virtues Squeak embodied; also key was the fact that Internet and multimedia capabilities were almost immediately available within Squeak. In 1997, a system was released that allowed Squeak to run within a web page (by way of a web browser plugin, just like Java, Flash, and other multimedia platforms), so that Squeak-based content could be accessed via the Web. This makes Squeak much easier to access for teachers and others restricted from installing software on lab computers.

From the standpoint of the core developers, Squeak meant that they were once again in control of their software. Georgia Tech's Mark Guzdial notes that "Alan Kay is most proud
For instance, the website at squeak.org and the squeak-dev mailing list primarily serve the devel- opment community; a different website at squeakland.org (and the squeakland mailing list) address the concerns of educators and learners (there are also a large number of other distinct subcommunities online—more about that below). This unresolved tension has existed since the 1970s at Xerox: certainly since Kay's "burn the disk packs" turning point w i t h Smalltalk-76, and probably long before that (Ted Kaehler, personal communication); what is different today, however, is that both the systems focus and the educational focus draw on three decades of prior work. In practical terms, this means that Squeak as an educa- tional platform is certainly not starting from scratch. Etoys: Doing with Images makes Symbols Etoys emerged in the early years of Squeak, and is practically synonymous w i t h Squeak for many people who encounter it. Etoys is a tile-based scripting environment for children that lets them orchestrate sophisticated behaviour for onscreen objects—without having to compose programming code i n the traditional way (i.e., typing syntactically perfect code and then watching what breaks). Etoys, thus, is a latter-day iteration of Kay's evolving ideas about kids as novice programmers, building on a three-decade heritage of research i n this area. This work began i n Smalltalk-72 at Xerox, w i t h the Learning Research G r o u p investi- gating what kinds of scaffolding are required i n order for kids and novices to achieve meaningful results. Adele Goldberg's "Joe Box" work is the hallmark of this research, gener- alized to the idea that a set of well thought-out superclasses as building blocks then allows users to specialize them i n interesting ways. The original Smalltalk-72, drawing on the Logo language, attempted to offer a very simple syntax and keep the referents concrete, but the Chapter 7: Squeak's Small but Mighty Roar 232 http://squeak.org http://squeakland.org mode of user interaction was still primarily textual—one typed i n program code one line at a time. Smalltalk-76 furthered the idea of a system architecture based on a hierarchy of object classes and provided a host of graphical tools for accessing those components, but this was arguably of more use to systems developers than children or novices. Kay wrote a chapter for World Book Encyclopedia's Science Yearbook 1979 entitled "Programming Y o u r O w n Computer." This chapter laid out a schematic for a programming interface re-oriented to novice users (indeed, high-school students were the audience for this book). In the World Book chapter, a "player" object could be inspected via a very simple window displaying the current values of various parameters, and these values could be directly manipulated or scripted. The importance of such a feature is that it brings the "direct manipulation" interface (Laurel & M o u n t f o r d 1990) to the realm of programming; by clicking on an onscreen object, one can reveal properties and behaviours, w h i c h can then be directly modified. Chapter 7: Squeak's Small but Mighty Roar 233 This i m a g e has b e e n r e m o v e d because o f c o p y r i g h t restrictions. Figure 7 . 1 : From "Programming Your Own Computer" (Kay 1 9 7 9 ) Chapter 7: Squeak's Small but Mighty Roar 234 The programming interface i n the World Book chapter was idealized, but Kay's projects i n later years w o u l d realize this ideal. 
The Playground system from the Vivarium project at Apple was based on "player" objects that could be inspected in just this way. "One way to think about Playground is as having a spreadsheet view, a HyperCard view, and a textual programming view, all simultaneously," wrote Ann Marion (1993, ch. 1, p. 1). But Playground, with its "observer" model inspired by the dynamic spreadsheet (as opposed to the Smalltalk message-passing model), proved a bit too rigid. Playground "players" were not message-sending objects but observers, watching other objects' parameters in the same way a formula cell in a spreadsheet watches the values of the cells it depends on. For example, in the Playground environment, a simulated 'shark' object is continually watching the parameter values on a 'prey' object; a message-sending version of the same thing could have the 'shark' object specifically querying the 'prey' object for its co-ordinates or direction, or, alternatively, the 'prey' object sending out various parameter values, which may or may not be received by the 'shark' object. The "observer" model produces an arguably simpler, cleaner programming environment, but provides a single, limited model for inter-object communication. Kay recollected:

Playground was kind of a generalized event-driven system that had objects which were kind of like a collection of generalized spreadsheet cells, completely concurrent, etc. This gave the kids the state of objects to look at, but not call/return. Every value was a thread. I loved to program in it, but I thought that, in the end, it was a little too pristine for 9 year olds—it was a little too much like pure Lisp functional programming in how clever you needed to be. The cleverness was rewarded by beautiful, simple, and powerful programs, however. (Kay 2001)

This image has been removed because of copyright restrictions.
Figure 7.2: Playground II "Scriptor" (from Marion 1993, ch. 1, p. 7)

Etoys in Squeak built upon this player-and-viewer model, merging parts of the "observer" idea with Smalltalk's message-passing model. It also drew substantially from a tradition of iconic programming research at Apple and elsewhere. Iconic programming—that is, composing program-like structures by manipulating graphical elements on screen—dates back to Ivan Sutherland's Sketchpad and the RAND Corporation's early pen-based GRAIL system, and has been sought after ever since. Kay mentions having iconic programming in mind in the first Dynabook concepts (Kay 1996a, p. 552). David Smith's PYGMALION system ("an executable electronic blackboard"), which dates from the mid 1970s at Xerox, is an early and celebrated iconic programming environment (Smith 1993). Smith and Allen Cypher later worked on an iconic programming system called Cocoa while at Apple Computer (Cocoa was commercialized in 1999 as Stagecast Creator). And, during his brief stay at Apple in the mid 1980s, Dan Ingalls (along with Scott Wallace and others who would later create Squeak) created a Smalltalk-based iconic programming environment called Fabrik. An Apple "Advanced Technology Group Research Note" (Chesley et al. 1994) entitled End User Programming: Discussion of Fifteen Ideals illustrates the kind of thinking that was active in the mid 1990s: throughout, direct manipulation and visual concreteness are key virtues.

Kay's touchstone was his reading of Jerome Bruner's model of three distinctive modes of thinking—concrete, iconic, and symbolic (Bruner 1966, p. 11ff; Kay 1990, p. 195ff)—the point being that these are not developmental stages we pass through so much as styles of thinking which we use in different combinations in different contexts. The way forward, according to this thinking, was not simply to replace the symbolic with the iconic—that is, to replace program code with iconic representations—but to recognize and nurture the interplay
Kay's touchstone was his reading of Jerome Bruner's model of three distinctive modes of thinking: concrete, iconic, and symbolic (Bruner 1966, p. l l f f ; Kay 1990, p. 195ff)—that these are not developmental stages we pass through so m u c h as styles of thinking w h i c h we use i n different combinations i n different contexts. The way forward, according to this thinking, was not simply to replace the symbolic w i t h the iconic—that is, to replace program code w i t h iconic representations—but to recognize and nurture the interplay Chapter 7: Squeak's Small but Mighty Roar 236 a m o n g a l l t h r e e . A k e y r e m i n d e r o f t h i s w a s t h e s t r a i g h t f o r w a r d s u c c e s s o f B i l l A t k i n s o n ' s H y p e r C a r d , w h i c h , w h i l e b e i n g m u c h s i m p l e r ( a n d m o r e r e s t r i c t i v e ) c o n c e p t u a l l y t h a n a n y o f t h e S m a l l t a l k s , f a c i l i t a t e d s i m p l e s c r i p t i n g b y a t t a c h i n g m e s s a g e s t o b u t t o n s o r t e x t f i e l d s o n t h e s c r e e n . A u s e r c o u l d u s e g r a p h i c s t o o l s t o c r e a t e a n o n s c r e e n o b j e c t , t h e r e b y m a k i n g s o m e t h i n g c o n c r e t e t o w o r k a n d t h i n k w i t h — a n d then a t t a c h s o m e s c r i p t i n g o r m e s s a g e - s e n d i n g t o i t . T h i s s i m p l e i n s i g h t w a s t a k e n u p b y K a y ' s t e a m f o r E t o y s ( T e d K a e h l e r , p e r s o n a l c o m m u n i c a t i o n , O c t o b e r 2 0 0 4 ) : a l l o w i n g d i r e c t m a n i p u l a t i o n o f o b j e c t s ( t h a t i s , o n s c r e e n w i t h t h e m o u s e ) t o l e a d t o s c a f f o l d e d s c r i p t s , o n e l i n e a t a t i m e , a n d t h e n t o f u r t h e r g e n e r a l i z e d p r o g r a m m i n g a t t h e p o i n t w h e n i t s e e m s e a s i e r t o m o v e t o t h e m o r e a b s t r a c t f o r m . W h a t E t o y s p r o v i d e s i s a v i s u a l p r o g r a m m i n g e n v i r o n m e n t t h a t i s a b l e n d o f t h e d i r e c t - m a n i p u l a t i o n i c o n i c s t y l e a n d t h e t e x t u a l / s y m b o l i c a p p r o a c h . A n y g r a p h i c a l o b j e c t ( E t o y s e n c o u r a g e s o n e t o b e g i n w i t h a g r a p h i c o b j e c t ) c a n b e s c r i p t e d b y o p e n i n g a " V i e w e r " v e r y m u c h l i k e t h e o n e s k e t c h e d o u t i n t h e 1 9 7 9 World Book c h a p t e r ; t h i s p r e s e n t s a n u m b e r o f t h e o b j e c t ' s a t t r i b u t e s ( p o s i t i o n , h e a d i n g , e t c . ) t o b e s h o w n i n a l i s t o f d i r e c t - m a n i p u l a t i o n w i n d o w s . O 0 I v i S k e t c h O Search | Mou-e j f| S k e t c h ' s I s l l n d e r M o u s e true > < 0 * scripts! 1 f| S k e t c h c i r c l e s paused I f| S k e t c h e m p t y S c r l p t r~ < O i * • c t i o f j I g] S k e t c h f o r w a r d b y J i t ! g S k e t c h t u r n by 5 [j S k e t c h ' s « m % 650 0 S k e t c h ' s y £J 1505 @ S k e t c h ' s h e a d i n g [jj 56 I Q S k e t c h a l i g n a f t e r d o t I H S k e t c h b o u n c e | s i l e n c e f g S k e t c h f o l l o w P a t h I f| S k e t c h m o v e t o w a r d d o t g S k e t c h ' s o b t r u d e s false I f| S k e t c h t u r n t o w a r d d o t I f| S k e t c h w r a p ! 
Figure 7.3: Etoys "Viewer" in Squeak 3.8

At the most immediate level, the values of the various parameters can be changed, either by typing or by choosing possible values from a menu. Some of the parameters are animation-oriented: for each "tick" of a system clock, any parameters concerned with changes—position, heading, etc.—will update by the specified amount. The result is that simply by clicking and typing in the Viewer window, the object can be made to change and move immediately.

The second step is to drag a "script" window out from the Viewer. Once this has been done, "tile-based" programming is available. Parameters from the Viewer can be dragged and dropped into the script; here is where the tradition of iconic programming intersects with Etoys. With a script constructed, running the "ticker" system clock causes the script to be executed (looping is the usual behaviour). The script can be modified as it is running, by dragging new items in from the Viewer, or removing them, or by changing the values of object parameters. Conditional "tests" can be dragged in as well, again using the Viewer parameters as components. So a good deal of programming functionality can be accomplished simply by dragging and dropping parameters and script components, and by directly manipulating the parameters in real time (this also provides a real-time update of the various parameters). With just what I have described, most of Logo-style turtle geometry can be accomplished via direct manipulation: no "code" needs to be written as such. However, since Etoys is merely an interface to Squeak Smalltalk, there is a traditional Smalltalk code representation lurking in each Etoys object as well, and it is possible to toggle between the draggable "script" version and a more traditional code editor window.
Squeakland is the public face of Squeak in educational contexts, and Etoys is what it is all about. As we shall see, there are other faces to Squeak, but this one forms the core of the present discussion. The Squeakland.org website offers about two dozen Etoy projects—these are download- able Squeak files which can be accessed either via the Squeak web browser plugin or from a full Squeak installation. A "project" is an encapsulated multimedia artifact, typically consist- Chapter 7: Squeak's Small but Mighty Roar 239 http://land.org http://Squeakland.org ing of an Etoys simulation or tutorial (often as an "active essay"—that is, explanatory text accompanied by a number of active Etoy components). These downloadable projects are organized into rough curricular divisions: elementary school, middle school, high school. Some have been created by Viewpoints staff, but others are showcased works by children or teachers. B. J. A l l e n - C o n n , a teacher at the L A O p e n School, and Viewpoints executive director K i m Rose have published a book of Etoys exercises ( A l l e n - C o n n & Rose 2003) w h i c h perhaps serves as the central text of the Squeak educational community. The book walks though—tutorial style—a series of Etoys exercises w h i c h deal i n a roundabout fashion w i t h the concept of acceleration. The book begins w i t h the usual Etoy starting point: 3 painting a small graphic of a car. The car object is then made to move around the screen by adjusting the parameters i n the Viewer. Next, we are instructed to paint a steering wheel, and param- eters for the steering wheel are connected to those of the car, so that when we rotate the steering wheel object, the car turns left and right. This basic Etoy can be built i n a couple of minutes, and is presented as an introductory Etoys exercise. The "Montessori game" here is for the children to get most of their pay-off playing i n the hand-eye arena, while gradually and subliminally gaining fluency and appreciation of the power of symbols. ( A l l e n - C o n n & Rose 2003, p. 7) 3. The Etoys "car" exercise is featured as well in several of Alan Kay's recent lectures and talks; it serves as the basic Etoys demo. Chapter 7: Squeak's Small but Mighty Roar 240 ! O • [Car] drive paused ^ X Car forward by | 3 )• Car turn by Wheel's heading i- O : f§| v C a r Bp ft basic! I ^ Car make sound | croak I |g Car forward by | 5 ! 1 Car turn b y | * 5 f r j | | Car's xi 1 Car's y g J 457 [ § Car's heading' J[J f.128 *665 mm * pen usej I [|] Car clear all pen trails I Car's dotSize i Car's penColor i Car's penDown | true i Car's penSize I Car's trallStyle ; lines Figure 7 . 5 : T h e hallmark "drive a car" Etoy, with d y n a m i c interaction between two objects, represented by the two square tabs in the u p p e r right; the Viewer panel for the C a r tab is o p e n and visible here. Tiles such as " W h e e l ' s h e a d i n g " can be s i m p l y d r a g g e d a n d d r o p p e d as b u i l d i n g blocks into a script ( A l l e n - C o n n & Rose 2 0 0 3 ) . From this direct-manipulation exercise, we are invited to begin scripting the car to behave semi-autonomously; following an arbitrary track, and speeding up and slowing down. Then, two cars can be raced on the screen, and the concept of acceleration is introduced by distin- guishing between speed (distance travelled per 'tick') and change in speed (one parameter affecting another parameter). 
With a model for visualizing acceleration established, Allen-Conn & Rose then turn to a discussion of gravity; we are invited to go outside and drop different kinds of weights from a height, as with Galileo's classic experiment. Squeak is re-introduced as a multimedia tool at this point, to examine frames of digital video shot of the dropping weights. By measuring the distance travelled by the falling weights from frame to frame, a concrete visualization (an "inscription," in Latour's terminology) of acceleration is re-created; now the earlier simulation model of acceleration can be adapted to a simulation of gravity's pull on falling objects (pp. 69-71).
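The adaptation from the racing model is small, which is much of the pedagogical point. A sketch under the same assumptions as above (hypothetical names throughout, including the getY/setY: accessors; the book's actual tiles differ in detail):

    tick
        "falling-weight simulation: gravity is a constant change
        in vertical speed, applied once per tick"
        speed := speed + gravity.
        self setY: self getY - speed  "move down the screen by the current speed"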
In Allen-Conn & Rose's treatment, Etoys seems aimed at teachers and schools, mostly upper elementary and middle school. Kim Rose (personal communication, 2004) reports that she works with a small (30-odd) but growing number of schools on a regular basis, helping to support their use of Etoys; this represents a good proportion of the traffic on the squeakland e-mail list. But interestingly, when Etoys were first developed, schools were not the target; rather, individual children (and parents) surfing the Internet were. Alan Kay explained it this way:

    I should say a little about the history of Etoys. They were originally not aimed at classrooms but as 10-20 minute projects supplied on the web for parents and their children to do together. I stripped out as many features as I could and tried to come up with a system that could do "100 examples" pretty straightforwardly. The documentation that was intended here was to have been to teach parents how to do the examples so they and their kids could have a good experience. For several reasons, this plan did not work out at Disney. But BJ [Allen-Conn] saw it and wanted to try Etoys in her 5th grade classroom. I was initially against the idea because I thought that Etoys were not complete enough for that venue. But she and Kim Rose decided to do it anyway. Six weeks later they started to show me some really good results, and I realized that it would be worth doing a 3 year experiment to see how well the Etoys—even with some of their lacks—would work out with 10 and 11 year olds. (Kay 2003b)

Squeak in School

Squeak was not originally aimed at schools; indeed, homeschooling is a theme lurking just under the surface of the Squeak philosophy.4 Introducing Squeak to educators was not originally a priority. Even the Etoys user interface, which represents a significant advance in terms of kids having some leverage in a full-featured programming environment, was thought of in its early years as something that curious children would find on the Internet and become interested in. However, once BJ Allen-Conn picked up Squeak for use at the LA Open School, Squeak became a school-based technology. And, one must admit, if one's goal is to reach children, schools are at least a good place to find them (Kim Rose, personal communication, Oct 2004).

4. The Squeakland website lists "unschooling" advocate John Holt as one of its "Deep Influences on Thinking and Learning" (along with Bruner, Piaget, Montessori, and others).

Squeak's open-source license is somewhat telling. By making Squeak an open, non-commercial entity, Kay and team abandoned much of the traditional world of vendor-driven educational technology. Squeak was aimed at children first, and then perhaps teachers. In contrast, 'successful' educational technology projects seem to be aimed at administrators and technology co-ordinators first. As such, Squeak has been spreading via word of mouth rather than via a marketing campaign as such (Kim Rose, personal communication, Oct 2004). It is, however, extremely difficult to say anything definitive about the extent of Squeak's use by individual learners, parents, or even teachers, in the same way that it is difficult to say how many workstations running Linux there are in the world. What we can talk about with some certainty is what particular teachers have been doing with Squeak.

After the Vivarium Project ended at Apple Computer (around 1993), BJ Allen-Conn maintained close contact with Alan Kay and Kim Rose, and continued to be part of their circle for many years. Every summer for many years, Alan Kay has hosted a "Learning Lab" retreat at the Apple Hill Centre for Chamber Music in rural New Hampshire (the attendee lists read like a who's who in educational technology5). At the 1997 retreat, Allen-Conn saw an early version of Squeak. At the time, her technology curriculum at the LA Open School relied on an amazingly wide-ranging suite of tools: Logo; Apple's Cocoa (later Stagecast Creator, a graphical "programming by demonstration" environment); AgentSheets and HyperGami (simulation software and a 3D-modelling environment from the University of Colorado); and Amy Bruckman's MOOSE Crossing (a constructionist MOO environment from the MIT Media Lab)—all these simultaneously.6 But after seeing Squeak at Apple Hill, she told Kay she wanted it for her classroom. Despite Kay's initial reservations about using Squeak in a classroom setting, Kay and Rose and colleagues set about getting a version ready. Allen-Conn began to replace the various software tools she was using, one by one, with Squeak. The factor that made the difference for her, she says, was the amount of time the children spend problem-solving, compared with wrestling with the software (Allen-Conn, personal communication, Nov 2004).

5. For more info about Kay's Apple Hill retreat, see http://minnow.cc.gatech.edU/learninglab/17
6. BJ Allen-Conn's ongoing connection to cutting-edge educational technology research is probably unparalleled among classroom teachers.

Since that time, Allen-Conn has represented the vanguard of educators using Squeak as a curriculum tool, and has been one of the primary developers of Squeak/Etoys-based curriculum resources (such as the book she and Kim Rose published in 2003). Similarly, the LA Open School has remained Kay's educational testbed, although not at the scale it enjoyed while Apple was funding whole-school, integrated curriculum research in the 1980s. Other notable users of Squeak in school settings include two schools in Toronto, where school district trustee Sheine Mankovsky is a vocal advocate (Don Mills Middle School and Don Mills Collegiate). Beyond North America, it would appear, Squeak has more of a foothold. As the old saying goes, Squeak is "big in Japan." In fact, at a 2003 conference in Kyoto, "over 300 teachers, students, researchers and developers gathered...
to spend a day dedicated to talks and sharing about Squeak, curriculum development, best practices and new Squeak media." These numbers must be compared to the 2004 and 2005 Squeakfest conferences, held in Chicago, which attracted 80 and 60 people respectively.7 There is also Squeak activity in Korea and Brazil. The largest numbers may be in Europe; a substantial community of developers is found in Germany. In Spain, the regional government of Extremadura has undertaken a very large-scale project to install open-source software in schools. Over 60,000 computers there are running Squeak on Linux, though information about actual use of Squeak in classrooms is harder to come by.8

7. Information about the Japanese Squeak conference is from http://squeakland.jp/images/news/html/kyotonews.htm — the US-based Squeakfest can be found at http://interactive.colum.edu/partners/squeakfest/part.aspx
8. http://www.small-land.org

Despite Squeak's geographical distribution, though, the number of schools and teachers actually using Squeak is, frankly, small. There seems to be more 'interest' in Squeak as an educational environment than there is actual practice. The squeakland mailing list, which is specifically for educational topics, has very little traffic: 400+ subscribers, but only a handful of postings per week. By comparison, the squeak-dev list, which serves the programmer community, has 1,358 subscriptions as of the fall of 2005, and commonly runs to 20-30 messages per day.

Why isn't Squeak a bigger deal, given its heritage, its flexibility, its goals, its cost? The reasons are not surprising, but they are worth outlining in some detail:

• Technology support in schools: The LA Open School is far from typical in its commitment to technology, for reasons both ideological and financial. BJ Allen-Conn is quite skeptical of the idea that Squeak could spread significantly via 'viral marketing' in a self-sustaining way. "Teachers need support," she insists, in the form of books and guides and one-on-one support; somebody has to teach the teachers, and make it possible—if not easy—for a teacher to actually pick up and use something like Squeak in their classroom. For Squeak use to become more widespread, the supporting resources—books, tutorials, installers, etc.—would need to be much more robust. Allen-Conn sees herself as the 'buck-stops-here' element of Kay's Squeak team: she has to make it meaningful for children: "I can think lofty thoughts, but I also have to make this a viable tool" (personal communication, Nov 2004). So, although Allen-Conn herself remains encouraged, Squeak's presence simply will not grow of its own accord.

• The difficulty of sustaining technology once installed: Allen-Conn points out that during the 1980s, the entire teaching staff at the Open School was trained to maintain and repair Apple computers. In the years since Apple's involvement, the parent-teacher community of the Open Charter School has taken it upon itself to keep up its commitment to the sustainability of its technical investments. How many schools can claim this kind of involvement?
• Technology policies in most schools and school districts are notoriously restrictive: for support and administrative reasons, the typical pattern is that classroom or lab computers are 'locked down' to prevent students from making modifications that may put a burden on the tech-support staff. Beyond the officially approved software (typically Internet Explorer, Microsoft Office, and a small handful of 'educational' applications), neither students nor teachers are at liberty to install anything else—like Squeak. Going beyond these blanket restrictions would likely entail getting approval at an administrative or even board level, something that even companies with substantial marketing budgets struggle to do. Again, this is an instance in which the general autonomy of the Open Charter School provides a very atypical example.

• General awareness and mainstream trends: more generally, Squeak faces an uphill battle simply by virtue of its distance from the mainstream (i.e., commercial software and/or Unix-derived Internet culture). If educational computing is commonly defined as learning to use a particular canonical list of application programs (web browsers for research, 'office' software for document production, domain-specific games or simulations for particular pedagogical ends), what are teachers or co-ordinators to make of an environment that at times presents itself as programming language, operating system, animation tool, or simulation environment, or that bills itself grandly as an "idea processor whose music is ideas"?

• Squeak is probably more complex than it needs to be: allowing a somewhat technocentric criticism here, it is fair to point out that the Squeak environment one faces after first launching the software is not immediately obvious; this undoubtedly presents a practical challenge to Squeak's wider adoption. First, Squeak doesn't look or act quite like a Macintosh or Windows application, drawing as it does on both older (and newer) user-interface traditions; users must grapple with how to interact with the mouse and onscreen objects before moving on to more interesting things. Second, there is substantial clutter in the Squeak environment; much of Squeak's thirty-five years' heritage is actually in there (hundreds of object classes, multiple interface frameworks, components of software no one has actively used in years, and so on). The shrinking of the downloadable Squeak "image" is the subject of ongoing debate on the squeak-dev mailing list.9 And finally, the Morphic user-interface framework itself, with its complete transparency and direct-manipulation access to everything, has the potential to overwhelm and confuse new users.

9. The default downloadable Squeak image, as of version 3.8, is about 13 megabytes. There are various opinions about how small an image could be while still being usable by the average person. One of the more extreme projects, called Spoon, has an image of less than 100 kilobytes. See http://www.netjam.org/spoon/

Despite Squeak's three-decade heritage, Etoys' features, and the educational virtues espoused by Kay and his colleagues, Squeak remains an unknown quantity to mainstream educators, even those who are active in the world of educational technology. Is this likely to change? Not given the current landscape. There are simply no substantial forces driving Squeak toward the educational mainstream, and there are many forces actively resisting it. But we must keep in mind that the educational face of Squeak, of which Etoys is emblematic, is not the whole story.

THE SQUEAK COMMUNITY AND ITS TRAJECTORIES

What to make of Squeak?
To whatever extent it represents an important step forward, or the next iteration of a powerful tradition, it is also in many ways a non-starter, a tiny blip on the vast landscape of (educational) technology. It seems as though one must grapple with the history and heritage of the Dynabook ideal in order to fully appreciate Squeak; without this historical perspective it appears as a bewildering and complex software environment, inordinately difficult to sum up, let alone to treat as an instrument of practical consequence. The hope of many in the Squeak community, presumably, is that Squeak should gradually build its own momentum, and that thereafter we will have a better appreciation of its intellectual heritage.

But what is Squeak, exactly? Any singular answer to this question is partial, simply by virtue of the multi-headedness of the project. I have so far focused mostly on the educational (Etoys) face of Squeak, but this is not manifest destiny. On the contrary: on the Internet, an open-source project becomes what its users and developers make of it; there is no ontological directive apart from the actual practice of the people who choose to make a commitment of time and energy to a project. In this sense, the lessons from Latour's Aramis are most apt: "There is no such thing as the essence of a project. Only finished projects have an essence" (Latour 1996, p. 48). But when is a project ever finished? In Squeak's case, the sheer flexibility of the system means that there are various sub-communities making headway in a number of different directions at once. Squeakland and Etoys represent only one of these.

The Blue Plane and the Pink Plane

    There are two orthogonal forces at work in the Squeak team, with which we have been able to make two kinds of progress. These have most recently been articulated in Alan Kay's allusion to Arthur Koestler's metaphor of progress in two planes: the incremental improvement plane (which Alan calls the "pink" plane) and the paradigm shift (or "blue") plane.
    – Dan Ingalls, 1997

The metaphor of planes of progress is drawn from Arthur Koestler's The Act of Creation (1964), one of Kay's touchstones. Koestler's model bears some resemblance to Thomas Kuhn's dynamics of paradigm shifts, though Koestler was concerned with creativity as a psychological process rather than with the workings of institutional science. In any case, these two 'forces' manifest themselves as two distinct but not wholly separate cultures active in the Squeak community. The distinction can be characterized as follows. Incremental improvement, or "pink plane" development, is the focus of a group of developers who treat Squeak as a "better old thing," as Kay might put it—that is, as a new, portable, open-source Smalltalk-80 implementation.
The efforts of this group go towards making it a better implementation: improving the speed, cleaning up the codebase, putting in infrastructure support for the development community (bug-report databases, versioning systems and collaborative tools, etc.), streamlining and enhancing the user interface, and adding various applications. This group, composed largely of Smalltalk programmers who have been working with the language for years, behaves much like the development communities surrounding other open-source projects, like Linux or Apache. There is no central leader of this subcommunity; rather, there are a small number of key developers who communicate largely via the squeak-dev mailing list, co-ordinating the changes which periodically constitute new version releases of the Squeak software, with incremental updates appearing roughly twice per year.10

10. There is a complex history of leadership within the Squeak community, which I will not go into in detail. In 2005, a group called The Squeak Foundation, composed of several influential developers from around the world, declared itself the (un)official steward of Squeak releases. See http://www.squeak.org

The more revolutionary, "blue plane" development appears to happen in a less open environment. This force is represented more by Kay's development team and colleagues. Prior to the v3.4 release of Squeak in 2003, this "Squeak Central" team—that is, the group at Apple and later Disney—made a number of substantial changes to Squeak (most notably integrating the Morphic graphics framework, which was called a "blue plane" innovation at the time), but since then they have not been particularly active in the online Squeak development community. Rather, they have been working "behind the curtain" on a number of emerging innovations: notably, a new novice programming environment called Tweak (which may also contain the future of Etoys), and an immersive, multiparticipant 3D environment called Croquet. These projects may potentially shift the focus of Squeak development substantially, even to the point of threatening the relevance of some of the "pink plane" development going on among the open-source community. Dan Ingalls notes:

    To best understand the "blue" pulls within the Squeak group, you need to understand what we're after. Our number one commitment is to an exquisite personal computing environment. Imagine a system as immediate and tactile as a sketch pad, in which you can effortlessly mingle writing, drawing, painting, and all of the structured leverage of computer science. Moreover imagine that every aspect of that system is described in itself and equally amenable to examination and composition. Perhaps this system also extends out over the Internet, including and leveraging off the work of others. You get the idea—it's the Holy Grail of computer science. All and everything. So if some new approach comes along that takes us closer to that ideal, but at the cost of a break with ST-80 tradition, we will probably take the new approach. (Ingalls 1997)

To put it in Eric Raymond's terms, the Squeak project is simultaneously the "cathedral" and the "bazaar." Kay, Ingalls, and their cohort work in semi-private, releasing sizable new innovations only now and then, while the evolutionary group works according to the bazaar model.
There is clearly some tension between the two groups, particularly when the "cathedral" releases appear and the evolutionary group must grapple with the changes in the landscape.11

11. This issue comes up periodically on the squeak-dev mailing list. In September 2005, for instance, a discussion of the introduction of a technology called "traits" as an alternative to the nearly 30-year-old hierarchy of object classes spawned much debate.

Squeak Communities Today

After Kay's team left Disney in 2001, there ceased to be a central commercial interest behind Squeak. Kay and Rose established the nonprofit Viewpoints Research Institute to act as the official home for Squeak, but while Viewpoints has been partially funded in recent years by Hewlett-Packard,12 the mainstream of Squeak development is no longer centered there. It is rather spread globally, across a wide variety of different projects. The result, as evidenced by the potpourri of links on the main squeak.org website, is that there are currently several public faces of Squeak:

12. In July 2005, following a year in which Alan Kay received no less than three high-profile awards for his contributions to computer science and engineering, Kay and VPRI colleagues (along with 14,500 other employees) were 'let go' from HP in a round of corporate restructuring.

1. The Viewpoints Research Institute, the non-profit organization which is officially behind Squeak, and which acts as an institutional focus for some of its development. Viewpoints is the interface between the funding that Squeak may receive from a company like HP and the notion of an open-source project owned by and beholden to no one. In practice, Viewpoints acts as the co-ordinating force behind Squeakland.org and the "blue-plane" development emerging from Kay and his immediate colleagues (e.g., the new Tweak authoring system, which purports to replace portions of the Morphic interface).

2. Squeakland.org, hosted by the Viewpoints Research Institute: this is the home (or at least the central meeting place) of Etoys and the web-browser plugin version of the Squeak software. On this website can be found a number of articles by Kay and others elaborating the philosophy of Squeak, Etoys projects and tutorials created by the core team, and links to Etoys projects and tutorials contributed by teachers and students. Squeakland.org can be thought of as the "teacher-friendly" face of Squeak. The accompanying squeakland mailing list is low-volume (a handful of messages per week), and the topics of discussion are generally user-oriented.

3. The squeak-dev mailing list, which is the operating vehicle of Squeak's open-source community. Squeak-dev is a high-traffic list (hundreds of messages per week) with over 1,350 subscribers. The topics of discussion on this list are very technical: this is the forum for discussions of Squeak's core programming, upgrades, bug reports and fixes. The members of this list think of Squeak as first and foremost a Smalltalk implementation. This community also finds expression in a number of web-based forums and blogs such as planetsqueak.org, squeakpeople.org, and Mark Guzdial's SWiki at minnow.cc.gatech.edu.

4. The Croquet Project (croquetproject.org), which has the potential to radically change the public face of Squeak. Croquet is an immersive, multiparticipant 3D environment built on Squeak.
The Croquet Project is maintained by the Viewpoints Research Institute, the University of Wisconsin, and the University of Minnesota. Croquet has an educational profile, but it seems much more relevant to the post-secondary world than to K-12 (see Lombardi 2004). Croquet seems content to brand itself distinctly from Squeak; although Squeak is the underlying technology, a casual browser could peruse the rich and informative croquetproject.org website and miss the reference entirely.

5. Alternative educational applications to Etoys, such as Stephane Ducasse's Bots Inc. (see Ducasse 2005 and http://smallwiki.unibe.ch/botsinc/), and the Scratch project at the MIT Media Lab (see Maloney et al. 2004 and http://weblogs.media.mit.edu/llk/scratch/)—part of the lab's Lifelong Kindergarten group, and aimed at the Media Lab's after-school Computer Clubhouses. Both Bots Inc. and Scratch are combinations of Squeak-based software and curriculum resources.

6. Seaside, a highly sophisticated Web application framework built in Squeak.13 Seaside breaks significantly from much of the Squeak tradition in that it uses Squeak in service of the Unix client-server model; the user interface in this case is provided by your web browser. Seaside may be an atypical Squeak application, but as Web development matures (e.g., the "Web 2.0" movement currently afoot), Seaside's profile is growing quickly outside the Squeak community.

13. Coincidentally, Seaside's lead developer is Avi Bryant, who developed Ricki Goldman's Orion back in 2000 (not in Seaside, though). In 2006, Bryant's company, Smallthought, released a web-based data management application called DabbleDB that leverages the strengths of Smalltalk to seriously push the envelope of web-based applications. For a discussion, see http://www.techcrunch.com/2006/03/11/dabbledb-online-app-building-for-everyone/

7. Beyond these key English-language sites and communities, Squeak is actively used and developed in Europe, Asia, and South America, both as an educational application (with Etoys) and as an open-source Smalltalk. The Squeak Foundation, for instance, is (nominally, at least) based in Germany, as is Impara, a company co-founded by core Squeak developer Andreas Raab. There is also a strong Japanese school-based effort at squeakland.jp. I do not wish to give the impression that Squeak is entirely an American phenomenon.

What ties these varying brands together? Not much, currently. Despite the fact that they all draw from a common ancestor, and all embody in some way some combination of the virtues of the Dynabook vision, there is no clear sense that these different themes are converging or even diverging. A more unified brand would perhaps present a stronger image to the world beyond the initiates, but the proliferation of versions presumably allows Squeak to be picked up in different forms in different community contexts. Originally, Etoys was aimed at children surfing the Internet, or at least their parents—that is, it
had a deliberately subversive trajectory. But, largely due to BJ Allen-Conn's success (and the book she and Kim Rose published in 2003), Etoys has become the mainstream educational face of Squeak; other efforts now take on the subversive, viral-marketing agenda. The immersive, 3D Croquet environment, with its obvious appeal to video-game players (and its lack of an overt 'educational' message), has the potential to take this on, even given its backing by major US universities as a platform.

Squeak in print?

Ironically, the success of Squeak in the wider world depends greatly on the publication of paper-and-ink books. What has made Squeak legitimate in educational contexts is largely the availability, in the early 2000s, of a number of books on Squeak: the first significant one was written by Georgia Tech professor Mark Guzdial (2000); the second was a collection of essays edited by Guzdial and Kim Rose (2003); the third and perhaps most significant is Rose and BJ Allen-Conn's (2003) Powerful Ideas in the Classroom, a book targeted specifically at teachers—"helping the helpers," as the Viewpoints motto goes. Similar books have been published in Japan, in France, and in Spain. Stephane Ducasse's Squeak: Learn Programming with Robots (2005) follows a number of French-language books.

Having books in print adds strength to the project in a number of ways. First, the mere existence of a book lends credibility; the open-source movement has known this for years, and with any new project, the emergence of books from the key technical publishers (especially O'Reilly or one of the Pearson imprints: Prentice-Hall, Addison-Wesley, Peachpit, Que, New Riders14) is a sign of some maturity. Second, a book represents a vector by which newcomers can pick up the software and make sense of it without having to pick through the maze of often incomplete online documentation that accompanies most open-source projects (for that matter, most software, open or not).

14. Note the "obligatory passage point" here in establishing legitimacy even among open-source projects.

The need for printed resources presents a challenge for Squeak, which has no central body pouring resources into it. Neither HP (which until recently employed Kay, Rose, and a number of key developers) nor the Viewpoints Research Institute (which is a nonprofit society, devoted much more to development than to marketing) is really in a position to underwrite the writing and publication of books (or to realize any sort of return on them). In 2005, The Squeak Foundation was established; one of its motivations is to provide an entity through which development funds can flow. Notably, author Stephane Ducasse was a driving force behind the establishment of the Foundation. So far, though, the Squeak community produces books slowly, and though everyone seems to recognize books as hallmarks of maturity, it would appear that no one is in much of a position to address this strategically.

And perhaps it is also evident that this is a good thing.
Were an Apple Computer or a Disney Imagineering to wholly back a project such as Squeak, what would be the cost in terms of control of the project's direction? There is a very good case to be made for keeping Squeak independent of any one corporation's interests, despite the attendant lack of resources.

The variety of foci within the Squeak community is a factor here as well. On one hand, this can be interpreted as a sign of maturity, and could give rise to a stronger sense of confidence in the ongoingness of the development. On the other hand, however, it could be seen as the fragmentation of an already small and marginal project. But suppose a popular book on Croquet were to appear. If Croquet really took off, would Etoys wither? Perhaps not, given its tenacity in the face of its small user base to date, but this does raise the question of the "critical mass" required for a project to begin to generate its own momentum.

Where is that Dynabook, Anyway?

Given the growth of the Squeak project—perhaps it is even a stretch to characterize it as a single project anymore—are we any closer to the Dynabook vision? A recurring theme discussed on the squeak-dev mailing list is the idea of porting Squeak to the latest, smallest handheld or palmtop device on the market.15 The reasoning here is that Squeak, running on a device which looks something like Kay's cardboard mockups (circa 1970), must equal the Dynabook. And while, following the letter of the text and putting checkmarks next to all the required elements in Kay's 1972 manifesto, this might look like the real thing, it is, like all literalist accounts, depressingly ahistorical, ignoring as it does what personal computing has come to mean in the intervening 35 years. The sobering counterfact is that no one I have encountered is able to conduct all of their daily work in Squeak; I visited the Viewpoints Research Institute in 2004 and found the same general clutter of technologies and platforms there as everywhere else—this despite the existence of web, e-mail, and text-processing applications in Squeak. So despite Squeak's ambition to be an "exquisite" personal computing environment, in practical use it is an application like the rest.

15. The technical windfall here is that Squeak has been ported to over 20 different hardware and software platforms.

Does Squeak bring us closer to the Dynabook vision, then? Absolutely, if for no other reason than that it has re-animated the virtues inherent in the original 1970s vision and spread them to a much larger and more varied community of users and developers all across the world. Squeak itself has not commanded the attention of a mass audience—and certainly not within educational circles—but perhaps the various bits and pieces of Squeak have an aggregate, possibly subversive, effect. The question which Squeak begs—is it really "back to the future?" or is it "too little too late?"—cannot quite/yet be answered in its own terms. Only finished projects have an essence, Latour tells us. It would appear, on the whole, that the amount of energy within the Squeak communit(ies) is increasing, not decreasing. Whether or not this ultimately matters is a question for future historians.

Squeak and Croquet at OOPSLA'04

In 2004, Alan Kay's star seemed to be on the rise.
Early in the year, he (along with PARC colleagues Chuck Thacker, Butler Lampson, and Robert Taylor) was awarded the Charles Stark Draper Prize from the US National Academy of Engineering. The Draper award is given to "those who have contributed to the advancement of engineering and to improve public understanding of the importance of engineering and technology" (NAE website)—in this case, the award was in honour of their work on the Xerox Alto, "the first networked personal computer." Later that spring, two other awards were announced, beginning with the 2004 Kyoto Prize from the Japanese Inamori Foundation, in honour of "those who have contributed significantly to the scientific, cultural, and spiritual betterment of mankind" (Inamori website)—this specifically with respect to Kay's educational work. That year's Kyoto laureates (alongside Kay) were biologist Alfred Knudson and philosopher Jürgen Habermas. The third award was the Association for Computing Machinery's Turing Award, given for "contributions of a technical nature made to the computing community... of lasting and major technical importance to the computer field." The Turing Award is a prize of unparalleled prestige within computing circles; it is usually attended by a "Turing Lecture" given by the recipient at an ACM function. Kay chose to deliver his lecture at the ACM's Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) conference in Vancouver, in October 2004. I took the opportunity to attend, and brought my notebook.

Interestingly, the OOPSLA conference was established in the early 1980s by Adele Goldberg, who, after leaving Xerox PARC, served for a time as the president of the ACM. OOPSLA was, in the 1980s, the conference for the Smalltalk community. In later years, Smalltalk became a smaller and smaller part of OOPSLA, with the rise of object-oriented systems based on the languages C++, Java, and, more recently, Microsoft's C# and .NET architecture. The 2004 conference program was dominated by technical papers on professional software-engineering concerns, though with a tiny undercurrent of, if not Smalltalk itself, at least some of the practices that the Smalltalk community had pioneered, such as design patterns and eXtreme Programming.16

16. The design patterns movement is an application of architect Christopher Alexander's "pattern language" concepts (Alexander et al. 1977) to software design. eXtreme Programming is a software development methodology which originated in the early Smalltalk community of the 1980s; see Beck 2000.

Alan Kay gave two talks at OOPSLA'04: his Turing Lecture (Kay 2004c) plus a keynote address to the "educators' symposium" track of the conference. Both talks featured demonstrations of Etoys and the 3D Croquet environment. The Etoys demonstrations served to point out to the audience (professional programmers largely paid to manage and construct systems of enormous complexity) how very simple implementations can achieve great power and flexibility. Croquet served to demonstrate the range of what can be done with a relatively simple system like Squeak.

It is difficult to tell what kind of impact Kay's talks had on their audiences.
Perhaps Kay's impassioned address on behalf of simplicity and elegance in the face of (arguably needless) complexity struck a chord among the listeners. I was aware, sitting in the audience with members of the Squeak illuminati, of a sort of conspiratorial glee over Kay's critique of the mainstream, relished by the Squeak and Smalltalk "in crowd" but just as likely lost upon, or dismissed by, the much larger audience of professional software engineers.

The following day, I attended an open-mic session billed as "breakout ideas in computer science." Speaker after speaker lined up to address the hundred-odd-person crowd on what he or she (there were relatively few women in attendance, though the proportion was stronger in this particular session than in others I attended) felt was wrong or missing in their contemporary software development environments and careers. A surprising number of complaints (the session quickly became an airing of grievances against the field) spoke of the need for innovations which had been present in Smalltalk, and in Kay's and Ingalls' and others' visions, since the 1970s, but of which even this 'mainstream' remained ignorant.

The speakers—computer programmers—complained of inhuman systems for managing complexity, of having to bend their own minds and practices around rigid and inflexible systems. One speaker went so far as to comment that what the world of programming needed was the equivalent of the Macintosh, which had allowed end users to do great things. He wondered, "what would the equivalent be for general-purpose programming?" I hope the reader at this point recognizes the irony of these comments, especially at an event honouring Kay's contributions to these very concerns.

During Kay's Turing Lecture, he presented a rough schema in which the general population is broken down along two axes, as follows:

[Figure 7.6: Kay's schema from the 2004 Turing Lecture, crossing two axes (externally focused vs. internally motivated; instrumental reasoners vs. lovers of ideas) to yield four groups, with the proportions Kay cited:

                            instrumental reasoners    lovers of ideas
    internally motivated            14%                     1%
    externally focused              80%                     4%      ]

According to Kay's model, 95% of the population are driven by instrumental reason—that is, the application of tools and technologies to pre-existing goals—while the remaining 5% are motivated by a love of ideas. On the vertical axis, Kay suggested, 15% of people tend to be internally motivated while 85% are externally focused. Mapping these two against each other, Kay pointed out the 1% overlap: internally motivated, idea-driven people who occupy the "creative, investigative space." Next, there are the 4% of people who are externally focused and idea-driven; these are popular leaders and charismatic figures. On the other hand, the 14% who are internally motivated instrumental thinkers are "a dangerous lot" whom Kay identified with certain corporate executives. Finally, Kay claimed, the remaining 80% of the population are externally focused instrumental reasoners, and these people make up the vast majority. These people don't change their thinking easily—unless they begin to perceive that everyone around them is changing, too.

It is, at face value, a decidedly pessimistic schema, and one that suggests that real reform—in education, or computer science, or politics, or anywhere—is not something that can be achieved, in many cases, within any project or even one person's lifetime.
It is, however, a schema that Kay seems to have made peace with; he explicitly acknowledged that the kinds of reform he wanted to see would not likely happen while he is around to witness them. This does not, apparently, dampen his enthusiasm or energy for the task.

SQUEAK: MOUSE THAT ROARED?

I alluded earlier to the question that Squeak's now eight-year existence begs: is it really, as Ingalls et al. suggested, back to the future, or—given the sheer inertia of the dominant tradition of personal and corporate computing, and the relatively tiny size of Squeak's communities (both developers and educators)—simply too little, too late?

Squeak is many things. It is, undeniably, the resurrection of the Smalltalk project from Xerox PARC, especially given the personnel behind it. Squeak is also, undeniably, a very cool and fun open-source software project. I am tempted to go so far as to call Squeak the most powerful piece of educational software ever created, given its scope and its direct connection to some of the most "powerful ideas" in educational technology. But Squeak is not the Dynabook. It is no more the Dynabook than the Smalltalks of the 1970s were. It is better conceived as the culmination, or the wrapping up, of those projects. I would suggest that the "Squeak Central" team's departure from the core development of Squeak in 2003—in favour of more far-reaching projects like Croquet—is evidence of this wrapping-up process. There is nothing particularly ironic or surprising about this; Kay has maintained all along that Squeak's goal was to be the means of its own obsolescence, the vehicle for getting to the next thing. In searching for the Dynabook, therefore, we are advised not to spend too much time looking at Squeak itself.

That Squeak—on its own in the world, the public property of a distributed, open-source development community—has not emerged as the "next great thing" tells us something important about the dynamics of technocultural systems. To take Squeak as important in itself is to attempt to isolate it, to reify it, to prevent its translation into something else. This is something very different from the case of Latour's Aramis, which struggled to be, to cross some kind of threshold into actual, sustainable existence. To contextualize Latour's admonishment that no technosocial assemblage is truly irreversible—not even the "100-year-old monsters" of the Paris Metro—the objects of his analysis were intended to be stable, irreversible instruments; their ultimate and ongoing success would be defined by their unfailing resistance against being translated, yet again, out of existence. By contrast, Squeak and the Smalltalks which preceded it in a sense succeed by their very mutability. It is not an accident that Kay chose to name the original system "small talk." This is not a technology, nor a technocultural assemblage, that aims to resist being translated into something else. Rather, this is an entity whose ultimate telos is to be the agent of that translation. That is the very point. It is instructive to recall Ingalls' distinction between the vision—which has persisted, and which has attracted and bound individuals to the project over three decades—and the image—that which can be critiqued, challenged, and evolved.
The actual and active relationship between these two entities is not well accounted for in a materialist sociology. Something more is needed to do justice to the cultural history of these systems.

Chapter 8: Drawing Things Together

I would like to offer a summing up of the story so far. What is required at this point, after a brief recap of the ground that has been covered, is a refocusing on the larger issues: on the telos of this framing, and on its relevance to contemporary society and culture and, of course, to education.

WHERE WE'VE BEEN

In Chapter 2, I positioned this study as a kind of computer criticism, after Papert (1987), conducted from a place partly inside and partly outside of the tradition it is considering. I have set this work up as cultural history, with the emphasis on the historically embedded meaning-making that computing (like anything else) comprises. That is, this is not a technical history, tracing the development of technical artifacts like computers or software programs; nor is it a biography or a history of the institutions "behind" particular technologies. Rather, my intent has been to trace the development of the meaning of a technocultural artifact: the Dynabook, itself partly an idealized vision and partly a series of actual technologies. In tracing what the Dynabook or personal computing or technology means in changing historical contexts, I must necessarily begin with what it means to me, and this has required that I reconcile my own history with the history presented here.

In Chapter 3, I outlined a theoretical framing of technology as translation—both symbolic and material—in order to foreground its constructedness, its contingency, and its essentially political nature. In my discussion of the semiotics of texts and machines, I have attempted to better frame the dynamics of layering and abstraction in technocultural assemblages, drawing on the actor-network theorizing of Latour and Callon, but not, I think, confining myself to this framework. Finally, I have introduced the theme of simulation in order to highlight the metaphorical function of information technology and thus position us as actors and interpreters.

In Chapter 4, I sketched the key components of the vision of personal and educational computing developed by Alan Kay over the past three and a half decades. This vision began with an insight, inspired by the early work of Seymour Papert, regarding children's use of computers. Kay's exploration of this area led him to articulate a novel (and still fairly unique) approach to systems design, culminating in the development of a new paradigm of software development: object-oriented programming. Further, Kay and his research team elaborated an application of the developmental psychology of Jerome Bruner in such a way that all three of Bruner's "mentalities"—enactive, iconic, and symbolic—can be employed in human-computer interaction; this work laid the foundations for our contemporary graphical user interfaces and the metaphor of "direct manipulation." Kay's vision of personal computing involves a substantial connection to the history of communications, building on the idea that the printing revolution in early modern Europe saw the rise of a new mode of discourse: structured argumentation.
This mode made it possible to articulate ideas not expressible in traditional narrative forms, and this shift in discursive practices gave rise to modern science. Newton's arguments on the laws of motion, for instance, are not easily representable in narrative, but rather take the form of sequences of logical assertions—Bruner (1996, p. 89) identifies this mode as explanation, distinct from interpretation. Kay's contribution begins with the observation that digital computers provide the means for yet another, newer mode of expression: the simulation and modeling of complex systems. What discursive possibilities does this new modality open up, and for whom? Kay argues that this latter communications revolution should in the first place be in the hands of children. What we are left with is a sketch of a possible new literacy; not "computer literacy" as an alternative to book literacy, but systems literacy—the realm of powerful ideas in a world in which complex systems modelling is possible and indeed commonplace, even among children. Kay's fundamental and sustained admonition is that this literacy is the task and responsibility of education in the 21st century. The Dynabook vision presents a particular conception of what such a literacy would look like—in a liberal, individualist, decentralized, and democratic key.

In Chapter 5, I moved from a theoretical treatment of Kay's vision to a consideration of the actual unfolding of his research work, loosely in line with Latour's method of "following the actors." Here, I traced the development and evolution of the Smalltalk environment Kay designed in the early 1970s as the platform for his Dynabook vision, up to the point where it intersects with (a) industrial software-engineering practices, and (b) the advent and popularization of the microcomputer in the late 1970s and early 1980s. I outlined the various translations of Smalltalk from an instrument for researching children's "personal dynamic media" to its derivations and incarnations in the computer industry. Accompanying the evolution of this particular artifact (or family of artifacts) are a number of related translations—of the role of "users," of the public perception of computing, and of the role of (mass-)market forces.

Kay's work in the 1980s, within the paradigm of mass-market microcomputers, led him to a deeper focus on the specifics of children's interaction with systems modeling tools, via a long-term research programme at a magnet school in Los Angeles. But concurrent with this in-depth research, the historical trajectory of the personal computer led increasingly away from Kay's ideal of "personal dynamic media" and toward pre-packaged, commoditized black boxes. Interestingly, nearly concurrent with Kay's behind-the-scenes research in a single Los Angeles school was Seymour Papert's endeavour to popularize and market the Logo system internationally. Logo's trajectory through these years could easily be the subject of another book-length treatment; suffice it to say that while Papert and the Logo project enjoyed considerable popularity in the early years of the microcomputer revolution, by the 1990s the shift in popular conceptions of computing and its significance meant that the ideas being advanced by both Papert and Kay—and thus the Dynabook—were farther than ever from the mainstream.
In Chapter 6, I examined the computing cultures in ascendance in the 1990s, when the Internet and World-Wide Web were coming to prominence. The "powerful ideas" tradition to which Papert and Kay had been central was in retreat, and in its place was a growing conception of computers and the Internet as a kind of mass medium akin to television or print, in which end users, and by extension "learners," play a mostly spectatorial or consumer role. However, at the same time, an older tradition—descended from the 1970s Unix culture—was also gaining momentum, owing to the importance of open systems and open standards in the infrastructure of the Internet. The tension between these two trajectories has defined much of popular computing culture in the 1990s and early 2000s. "Educational" computing in this period must be evaluated with these currents in mind: both the vendor-centric notion of commodity software and the Unix-derived open-systems culture are in play, often in ironic combinations. But despite the ideological allure and sheer momentum of the Free and Open Source Software (FOSS) movement, I have argued that we should be wary of it as an educational force: for all its overtly democratic rhetoric and anti-corporatist politics, its conceit and stubborn exclusivity (recall the analogy with Latin) make it a questionable ideal, especially when we contrast it with Kay's child-centric ideal of "personal dynamic media." To repeat the question I posed in Chapter 6: what if we were to be presented with an equally venerable computing tradition, sharing most of the virtues of openness, sharing, simplicity, and modularity inherent in Unix/FOSS culture, but one which was actually designed with children and education in mind? Clearly, the mere existence of a supposedly better alternative does nothing to ensure or even advance its popularity; Latour and Callon's network model of sociotechnical systems sheds ample light on the dynamics at work here, and toward the end of Chapter 6 I have included some analysis of how these dynamics seem to play out within computing cultures themselves.

This leaves us with the focus for Chapter 7, in which I have sketched the general features of the contemporary candidate for such a computing artifact: simple, open, shared, modular, based on a rich and venerable cultural tradition, and yet conceived with children and education in mind from the start. This artifact is Squeak, an open-source, Internet-era reincarnation of Smalltalk, born in a curious "back to the future" topos1 in which the core of Kay's development team from the 1970s was seemingly able to re-focus on the agenda set in those early years, but in a world now populated by ubiquitous personal computers, widespread Internet connectivity, and an established culture of shared software development. On this face, Squeak seems to be the best of all possible worlds.

1. I use the word topos here in the sense elaborated by media historian Erkki Huhtamo (1995): cultural-practical assemblages which recur or may be (intentionally or unintentionally) "re-activated" in new technological contexts.
And yet, as the details related in Chapter 7 reveal, Squeak's now nearly decade-long life has been mostly marked by a struggle to define its existence, in ways that interestingly parallel and yet contrast with the ontologically indeterminate Aramis of Latour's technocultural whodunnit. This is not to say that Squeak's existence as a piece of software available on the Internet is in question; rather, the struggle is over Squeak's ambitions. Does Squeak represent the coming of the Dynabook, or is it just an open-source Smalltalk implementation, of interest only to a small cadre of programmers? Is Squeak really an "idea processor for kids of all ages," or is it just another authoring environment? Is Squeak an important and revolutionary contribution to educational computing, or is it, as a recently published taxonomy of novice programming environments (Kelleher & Pausch 2005) presents it, merely one of dozens and dozens of kicks at this particular can? On the face of it, Squeak's claim to the loftier of each of these alternatives is questionable. Bluntly put: Squeak's user communities are small and fragmented; its out-of-the-box user interface is baroque; compared with popular computing environments like Unix and MS Windows, it comes across as somewhat alien and peculiar; it is decidedly not foolproof (as with any serious development environment, Squeak offers more than enough "rope to hang yourself with"); and there is little common understanding of just what Squeak is for (Smalltalk development? multimedia presentation? Etoys simulations? user-interface research? web application development?).

And yet, at the same time, in some very real and overt ways, Squeak is a vastly better computing environment than anything anyone has come up with: in terms of sheer flexibility it is unsurpassed; as an exercise in software portability and device independence it is probably without peer; it enables children and novice users to work with sophisticated systems modelling very quickly and easily; it is a latter-day version of one of the simplest and most elegant programming languages ever developed; and it is completely and unrestrictedly open, from top to bottom.
Squeak, in yet another back-to-the-future move, is seemingly a return to this aesthetic; born as a Smalltalk-80 implementation, its core developers have not shied away from altering nearly every aspect of the software, from its user-interface framework in 1997 to a 2005 movement toward an architecture called "traits," which threatens to replace even the class-and-instance model at the core of Smalltalk.3 Seen in this light, the contemporary trajectory of Squeak, as it spawns new projects like Croquet, is not surprising. With each iteration, new facets of the Dynabook vision are prototyped, realized, perfected, evaluated, and sometimes abandoned. Squeak, as I pointed out in Chapter 7, is not the Dynabook. To get an idea of what the Dynabook really is, we are required to take a longer view.

3. See the squeak-dev mailing list in 2005 for ongoing discussion of this possible development.

DYNABOOK: ARTIFACT OR IDEA?

What are we to make of such a case? Is the Dynabook real, or not? Smalltalk is real, but it isn't the Dynabook. Squeak is real, but it isn't the Dynabook either. Nor is my laptop computer, which bears more than a subtle resemblance to Kay's early mock-ups. Like Xerox' Alto minicomputers circa 1973, these are all "interim dynabooks," or facets thereof. So, when can we drop the "interim" qualifier? Surely at some point, a lightweight, network-connected portable computer running some kind of open and user-modifiable object- (or perhaps message-) oriented software—in the hands of children—must add up to the real thing. Or, alternatively, perhaps this is the tragic story of a failed vision, an artifact which has failed to be, like Aramis.

The trouble with this sort of framing is precisely the technocentric thinking that Papert warned against in his 1987 call for "computer criticism." The point of casting this as cultural history is that we cannot evaluate the Dynabook as one or other assemblage of hardware and software. Now, here is an obvious parallel with Latour's brand of techno-sociology; Latour would tell us that any technological system must be viewed as an assemblage of social, technical, human, nonhuman, physical, and political actants. Latour's treatment of Aramis is precisely the elaboration of the multitude of networks which constitute that ill-fated project. But here I feel I must depart somewhat from Latour's otherwise powerful framework. The fate of Aramis, Latour tells us, lay in the unresolved contestation among the many actors:

The war of interpretations continues for Aramis; there are only perspectives, but these are not brought to bear on anything stable, since no perspective has been able to stabilize the state of things to its own profit. (Latour 1996, p. 79)

Latour warns us that "There is no such thing as the essence of a project. Only finished projects have an essence" (p. 48). But perhaps we must make a distinction between technological paradigms.4 The 'object' of Latour's analysis is a large-scale, immensely expensive, technosocial assemblage, composed of vast quantities of steel, rubber, concrete, workers, passengers, governmental agencies, funding bodies, magnetic couplers, navigational computers, and so on.
It is all Aramis can do to struggle to come into existence; indeed, the sheer inertia Aramis must overcome is enormous and multifaceted: physical, technical, political, budgetary. The chief requirement is co-ordination, the establishment of a common frame of reference; and indeed, Latour's account is one of the dynamics of the work of establishing (by means of alignment, displacement, translation, and the rest of Latour's conceptual repertoire) a common trajectory defining a single technosocial artifact. The Dynabook, in contrast, almost begins its life with a common frame of reference: a vision shared by many contributors and conceptually linking several instantiations. Where Aramis' essence follows (or fails to follow) its existence in implementation—and where the details of implementation (in terms of design, engineering, funding, political will, etc.) are the primary site of struggle in Latour's account—the various implementation(s) of the Dynabook are almost emergent from the interplay of ideas and technocultural contexts. The implementation itself is, as Kay's repeated appeal for Smalltalk's self-transcendence suggests, relatively unimportant—as it was with McCarthy's Lisp, which first saw embodiment on paper, and which later—perhaps less profoundly—became a family of software programs.

But if, as this suggests, the actual implementation is not important, what are we left with? Without the concretion of black boxes which may be closed, objectified, reified, what are techno-social networks to be composed of? If we take away the necessity of actual technical artifacts, are we not left with human beings, with pure society? Do we not find ourselves back with social constructivism—à la Bijker, Hughes, & Pinch's Social Construction of Technological Systems (1987)—a stance from which Latour spent the better part of two decades attempting to distance himself? I am not prepared to return to a naive social constructivist position—far from it. The sticking point is, I think, the stubborn materialism inherent in actor-network theory.5 I refer here not to "materialism" in the sense of belief or nonbelief in an "external reality," but rather the materialism which is shot through contemporary sociology, inherited originally from Marx and more recently from Garfinkel, and which underlies most of contemporary science and technology studies. I am most certainly not prepared here to write an idealist critique of materialism in social science, but I do want to pose, as a specific challenge, the concept of a "powerful idea" and how we can account for it in a materialist epistemology. In most sociological accounts, ideas must take the form either of embodied representations, like inscriptions (see Latour & Woolgar 1986) or documents (see Brown & Duguid 1996), or, alternatively, of "discourse effects," as with Foucauldian method (within STS, we would say "network effects")—that is, somehow emergent from the materialist base.

4. I use the term "paradigm" here in its mundane, pre-Kuhnian sense of a defining example.
5. John Law (1992) even goes so far as to coin the term "relational materialism" in the context of actor-network theory.
Latour's notion of a "parliament of things" and his emphasis on including the nonhumans w i t h the humans is huge improvement over the social construction of technology, but its materialist underpinnings mean it is still a blunt instrument for considering the broader cultural dynamics of ideas themselves. Something like Gadamer's phenomenological hermeneutics, w i t h its emphasis on the experience of interpretation and meaning-making, is required to do this work. I do not mean to suggest here some non-mediated essence for ideas; my starting point i n this discussion has been the foundational mediatedness of all human experience. W h a t I am calling for here, then, is a broadening of what counts as mediation. A n idea like the Dynabook—like more powerful ideas such as the theory of evolution; general relativity; democracy; or complex systems modelling—exists by way of its currency in "discursive communities" or "communities of practice"—its mediation is by way of artifactual embodi- ment, textual and documentary inscription, individual utterances, and probably some (as yet) elusive cognitive representation as well. M y point here is that this laundry list of media does not do a very good job of shedding light on what "currency" might mean w i t h respect to ideas.6 Let me wrap up this methodological aporia w i t h the following suggestion: theories of technoscience have so far been successful i n explaining the concretion of theories, Chapter 8: Drawing Things Together 269 constructs, and systems—the accounts made i n STS research of the laboratory practice, systems of classification, microbe theories, bicycles, jet planes, and rapid transit systems. But these forms are not the whole of technoscience. They may well be the tip of the iceberg; the visible, empirically studiable 10% of what actually comprises actual practice. A n d , i n a moment that recalls the fish's difficulty w i t h the concept of water, it maybe that the cultural aspect of technoscience is still woefully underrepresented—the aesthetic dimension; the virtues, values, and vices; the drives and forces that actually move technoscience along; these are mostly absent from most STS accounts. Cultural history is not Biography Having now done uncertain violence to accepted sociological method, cultural materialism, and, for that matter, media studies, I must address a further possible question, one w h i c h w i l l , at this point have no doubt been raised i n my readers' minds: that the Dynabook is an idea which exists i n the head of an individual man, that of A l a n Kay, and that its social or cultural existence is dependent upon his continual—stubborn, even—reiteration of it. This possibility is also uncomfortably linked w i t h the notion that what I have been writing here is nothing but a "great man" theory of history. Admittedly, there is m u c h i n Kay's story that lends itself to this k i n d of treatment. I must try to defend against this charge, however. Despite Kay and his philosophy being the primary narrative vehicle for my account, and despite his inescapable centrality to the Dynabook, this story is not his. In the very first place, the Dynabook traces itself to Seymour Papert, whose early (1968) and continued relationship w i t h Kay and his work is absolutely key to this story. Second, the role of Kay's colleagues—Dan Ingalls, T e d Kaehler, Adele Goldberg, Diana M e r r y — i s enormous. It is perhaps a pitfall of this and other accounts that Ingalls may appear as the implementor of Kay's ideas. 
In fact, Ingalls is no less a conceptual innovator, and the ongoing tension between Kay's and Ingalls' visions is one of the central dynamics here. Ingalls' 1981 article, "Design Principles Behind Smalltalk," is every bit a manifesto on the order of Kay's and Goldberg's writings; here is a text which has as much or more to say to the question of why we should care about digital technology as it does to the particular architecture of object-oriented systems.

But beyond merely pointing out that the Dynabook story involves a larger group of people than Kay, the truly compelling point of the story is how this vision is shared by various participants, in the absence of any extant, concrete instantiation. Of the team at Xerox PARC, Kay recalled:

I needed a group because I had finally realized that I did not have all of the temperaments required to completely finish an idea. I called it the Learning Research Group (LRG) to be as vague as possible about our charter. I only hired people that got stars in their eyes when they heard about the notebook computer idea. I didn't like meetings: didn't believe brainstorming could substitute for cool sustained thought. When anyone asked me what to do, and I didn't have a strong idea, I would point at the notebook model and say, "Advance that." LRG members developed a very close relationship with each other—as Dan Ingalls was to say later: "... the rest has enfolded through the love and energy of the whole Learning Research Group." A lot of daytime was spent outside of PARC, playing tennis, bike riding, drinking beer, eating Chinese food, and constantly talking about the Dynabook and its potential to amplify human reach and bring new ways of thinking to a faltering civilization that desperately needed it (that kind of goal was common in California in the aftermath of the sixties). (Kay 1996a, p. 527)

This is a story of a cultural phenomenon, not a personal agenda. It crystallized in Kay's mind, or perhaps in his lab, in the late 1960s and early 1970s (and, as I have noted, this makes the job of the historian much easier), but since that time it has been the site of the practice and participation of scores of people. Interestingly, despite the myriad translations—some of which I have traced in the preceding chapters—the core vision remains today quite recognizable, still a centre of gravity for those who share it. This is, again, an aspect poorly accounted for in actor-network theory or the sociology of translation, which states that for a technosocial system to be successful and achieve currency, it must necessarily undergo various translations which move it from an isolated, perhaps "pure" conception to a strongly reinforced network of aligned forces. This kind of account serves well to explain the trajectory of Smalltalk over three decades, and even the relative frailty of Squeak, but it speaks nothing to the cohesion of the idea, of the vision, over thirty-plus years.
Back from the Future

Yet another accounting I must address here is a much bleaker version of the story: simply that the Dynabook does not exist, nor is it likely to. In this version, I need not take pains to account for the coherence of the vision over time, nor for the respective translations and survivals of facets of the Dynabook, because there aren't any. In this version, Kay's early vision was quickly turned into other things—object-oriented programming, the mass-market PC—and any latter-day reminiscence on the romantic vision driving the story is but the nostalgia of those whose day is, frankly, past. It is not difficult to find commentaries which take this basic stance (e.g. Dugan 1994; Bardini & Horvath 1995). I do not, however, find these analyses comprehensive or convincing, mostly because I reject the assumed 'irreversibility' and complacency in such accounts.

Let me engage in a brief thought experiment: suppose the year is 2016; by this point, in Canada, what used to be called the Internet is now taken for granted to the point of being almost invisible; similarly, computing devices have been miniaturized and proliferated to the point where people do not even talk about "computers" anymore. Within such a fine-grained, digitally mediated environment, software programs have ceased to be distinct things unto themselves; rather, software is a vast conversation of distributed message-passing objects; application software has given way to network-hosted services. Schoolkids commonly use computing technology in their classes—usually in the construction and modification of virtual-reality models of processes and systems, some taken from the natural world (as in life sciences) but many more from the digital sphere. In this future, an episode like the one in Kay's 1972 paper—children sitting on a grassy lawn collaboratively playing with the workings of a video game—is commonplace; the difference is that instead of each child holding a personal computer, children tote around small notepad-like devices which serve as user-interface gadgets, connecting them to a global computing grid.

This scenario, as far as I can see, and barring some kind of global catastrophe, is an entirely reasonable one given current trends in digital technology.7 If such a scenario were indeed to come to pass, historians would be in a position to trace the roots of such technosocial assemblages. And in doing so, some of that historiography would no doubt intersect with much of what I have written in the present document. The very real possibility of such histories being written in the future makes any present claim that the Dynabook is simply a failed vision (as of 2006) untenable. If, in 2016, we witness a very different kind of digital world, we will then perhaps be able to say that the Dynabook was a failed vision which never existed. But for now, in 2006, such a scenario is still as likely as not, and as such, it would be neither legitimate nor even reasonable to foreclose on its historical potential.

This raises an interesting question—or nest of questions—about the future of the Dynabook vision.

7. While not "Utopian," I would say this scenario is wholeheartedly optimistic. I can easily come up with equally feasible versions featuring nasty corporate domination—more on that later.
The simplest question is whether, as time goes on, and the partial and contingent implementations of various facets of the Dynabook multiply, the Dynabook vision becomes diluted; whether, by extension, in ten years' time this historical thread will even be identifiable in the face of its sheer plurality. A second simple question—a less interesting one, I believe—is whether the Dynabook 'project' will simply run out of steam or be substantially translated in ways far more serious than what we have seen to date. What if Alan Kay himself—already in his 60s—simply retires to his pipe organ and chamber music, and leaves a younger generation to maintain the vision? Is there enough coherence in the Dynabook idea (and its manifestations) that it would survive the absenting of the original cast—Kay, Ingalls, Kaehler, for instance? It is difficult to mount answers to these questions that go beyond mere speculation. A few comments about future directions are warranted, however:

• Core Squeak developers, led by Andreas Raab, are currently working on a successor user-interface framework called Tweak. Tweak is aimed at what Kay has called the "omniuser"—that is, novice and nonprofessional users in general, as opposed to just children. However, the entire Etoys system appears to be being re-implemented in Tweak. This may have the effect of expanding the immediate applicability and user base of Squeak beyond the Etoys-oriented school community and the programmer-oriented Smalltalk community to include a "middle" tier.

• The Croquet environment, a networked, multiparticipant, 3D environment, is picking up momentum well beyond Kay's own team, at a number of US universities interested in it as a higher-educational platform. Its appeal to a generation raised on first-person 3D video games is clear as well. As an Internet application, Croquet is forward-looking to the extent that it not only substantially leapfrogs Squeak's existing desktop-client environment but perhaps the World-Wide Web itself. Indeed, the Croquet project received attention in the blogosphere in November 2005, billed as a "Web 3.0" application—a tongue-in-cheek reference to current hype about an emerging "Web 2.0" paradigm.

• Mainstream computing—and, by extension, the kinds of systems ultimately found in schools and other educational contexts—shows evidence of a gradual, if fragmented, trend toward more Smalltalk-like systems: newer programming languages like Python8 and Ruby have established better compromises between the Unix tradition and the Smalltalk aesthetic, and in recent years there is substantial discussion of the merits of "dynamic languages" over Java (Tate 2005). It may be that elements of the Dynabook vision will gain currency regardless of Squeak's or Smalltalk's relative success.9
• A recent development I find myself at a loss to quite/yet evaluate, pro or con: in November 2005, at the World Summit on the Information Society in Tunisia, UN Secretary-General Kofi Annan presided over the unveiling of the prototype for MIT's "One Laptop Per Child" project, an endeavour of astonishing ambitions led by MIT Media Lab director Nicholas Negroponte. The plan is to produce a $100, handcrank-powered notebook computer in the tens of millions and distribute them to schoolchildren in developing nations. The laptops run open-source software (based on Linux) and feature wireless connectivity and software for creating (decentralized) mesh networks. The laptops will reportedly have Squeak installed; Alan Kay and Seymour Papert are among the 14 principals behind the project.10

Whatever we make of these individual items, it does seem clear that there is plenty of energy and even momentum at work here, despite the fact that it is hard to tell exactly in what direction it is going.

8. The Python community has a dedicated educational thread within it, embodied on the edu-sig@python.org mailing list. In spring 2006, a "summit" of sorts was hosted by the South African Shuttleworth Foundation, bringing together representatives from the Python community as well as Alan Kay, Kim Rose, and other Squeak people. The possibility of a Python-based Etoys-like environment was discussed on mailing lists on both sides of this cultural divide. See http://wiki.tsf.org.za/shuttleworthfoundationwiki/Day_one
9. As of early 2006, Dan Ingalls is employed at Sun Microsystems and has been working on a Java-based implementation of the Squeak virtual machine.
10. A press release dated Dec 13, 2005 states, "The One Laptop per Child (OLPC) board of directors today announced that Quanta Computer Inc. of Taiwan was chosen as the original design manufacturer (ODM) for the $100 laptop project. [...] In announcing the selection of Quanta, OLPC Chairman Nicholas Negroponte said, 'Any previous doubt that a very-low-cost laptop could be made for education in the developing world has just gone away.' Quanta has agreed to devote significant engineering resources from the Quanta Research Institute (QRI) in Q1 and Q2 2006, with a target of bringing the product to market in Q4. The launch of 5-15 million units will be both in large-scale pilot projects in seven culturally diverse countries (China, India, Brazil, Argentina, Egypt, Nigeria, and Thailand), with one million units in each of these countries." See http://laptop.media.mit.edu/news.html

WHO CARES ABOUT THE DYNABOOK?

Theoretical issues and speculation aside, there remains the question of the relevance of this story to education and educators. I have, I believe, outlined a story of a particular innovation's gradual movement away from the mainstream; however brilliant Kay's ideas may have been in the 1970s, there is no denying that in terms of actual practice, they must appear marginal and perhaps even quaint in the face of large-scale institutional e-learning, distance education, virtual schooling, computer-mediated communications, standardized assessment, classroom podcasting, and what-have-you. Here, as elsewhere in this analysis, the contemporary meaning of educational computing is wrapped up tightly with that of personal computing. What the latter has come to represent has everything to do with the former. Kay's three decades of work remain in ambiguous relation to both.

Kay's relevance to education

I will begin with a frank observation: almost no one I have talked to in the educational establishment—teachers, administrators, professors, graduate students—has any clue who Alan Kay is.
Most have heard of Seymour Papert, and can even give a basic outline of Papert's project. But Kay and his work remain almost entirely unknown. This we must contrast with Kay's profile within computer science, where he is most certainly known—in 2004 Kay was the recipient of the Turing Award, the Association for Computing Machinery's highest honour—and his influence (and critique) widely felt. Of course this is not surprising; Kay's professional career has been spent among computer scientists, and his practical association with schools and education has been limited to a handful of schools. Papert, in contrast, was something of a public celebrity in the 1980s, and the author of an enormously popular book (Papert 1980). Nevertheless, and without wanting to diminish Papert's contributions at all, it must be pointed out that Kay's actual influence on personal and educational computing is vastly greater. The dominant desktop paradigm and our contemporary understanding of "authoring" software had their origins in Kay's lab. If we consider the small but significant impact of the object-oriented paradigm in information technology and computer science curriculum, one could say that Kay's influence is felt in a third area as well. The first point that the present study seeks to establish is that all these now-commonplace motifs had their origins in the pursuit of an educational platform, one seriously informed by the educational psychology of Piaget, Bruner, and Montessori. Desktop computing and multimedia authoring were not conceived as tools for office workers or even media professionals—they were in the first place tools for children and elements of a vision of literacy for the 21st century.

That this emphasis is lost on us now is perhaps easily attributed to historical distance; the world is a different place than it was in 1972—and even the world of 1972 was not well represented by the San Francisco Bay Area of the day.11 This has not been a study of the emergence of the personal computer; that topic is much larger than any of the several books already published on it, which generally give short shrift to the role played by Kay's group at Xerox. Rather, I have tried here to stay focused on the educational vision and, for simplicity's sake, to treat the emergence of personal computing as a function of that. Complicating this is the serious divergence and disconnect between the discourse of computer science and that of education, even educational technology. Those who are capable of speaking fluently in both realms are few and far between, a fact easily and often underestimated by both the readers and authors of educational technology texts. To underscore this, consider that in the 1970s, a number of the leading thinkers in the field of computer science (at places like Xerox PARC, MIT, and Stanford) had turned their attentions to the needs and concerns of education. How different this is today! That the 2004 Turing Lecture was delivered by someone whom I would in all seriousness consider—and I hope I have made this apparent in the preceding pages—an educational theorist is a complete anomaly.

Why should we care about this? Why should it be important that computer scientists have their minds—god forbid their hands—on education? There are several possible answers to this question.

11. John Markoff's (2005) What the Dormouse Said: The Untold Story of How the Sixties Counterculture Shaped the Personal Computer Industry sheds much light on the cultural Zeitgeist at labs like Xerox PARC in the early 1970s.
One possible answer is that the education system—in the United States, for instance—is seen to be badly in need of reform, and the introduction of computers is seen as a possible vector for reform. Papert has suggested as much in several of his writings (though I would not go so far as to say this has been his primary motivation), as has Kay (1972, p. 1). A second possible answer has to do with improving the administrative and logistical efficiency of educational systems: this is clearly the aim of learning management systems, computer-driven assessment systems, and a host of drives toward standardization. But little of this area can be of serious interest to computer science as an intellectual discipline.

A third, and more interesting, answer has to do with the now half-century-old tradition of cognitive science. This is the realm in which the early researchers in artificial intelligence were operating (including John McCarthy and Seymour Papert). In this framing, research into the modeling of the mind and research into how learning takes place are seen as facets of a single, larger pursuit. A simple example of this is the now well-known notion of "multiple intelligences" advanced by Howard Gardner (1993), and its application to the design of multimedia curriculum resources.

But there is another possible answer to the question of why computer science is important to education, and that is the one having to do with "powerful ideas"—this is clearly the primary driver for both Papert's and Kay's projects. In this framing, computer science itself is not the important thing, but what it makes possible: the study of and engagement with complex or dynamic systems—and it is this latter issue which is of key importance to education. Indeed, Kay has aggressively questioned the very existence of a "computer science" as a discipline unto itself (attacking the notion of "software engineering" in the same breath—see Kay 2000b). But a "systems science," of which computing is legitimately a part, in the same way that print literacy is an integral part of modern science, is surely of real importance to education. Note that this is not the same knee-jerk, means-ends argument that has been repeated over and over again in critiques of computers in education; that is, this is not the same as saying that educational ends must take precedence over technological means. Kay has read his Marshall McLuhan, and we would do well to recall him here, and not suppose that the "message" is somehow independent of the medium. But neither does it mean that the medium is the important thing in and of itself. Kay says it best:

The reason, therefore, that many of us want children to understand computing deeply and fluently is that like literature, mathematics, science, music, and art, it carries special ways of thinking about situations that in contrast with other knowledge and other ways of thinking critically boost our ability to understand our world. (Kay 1996a, p. 548)
This is the point of the Dynabook, and the kind of literacy it suggests: like the printed book before it, the computer has the possibility to "carry special ways of thinking" that provide access to the most powerful ideas of our age. The underlying political question here is: who will have access to those ideas and their application?

Education and Powerful Ideas

What are "powerful ideas"? B.J. Allen-Conn and Kim Rose, following Papert, say they are "intellectual tools" (2003, p. v). What makes an idea powerful? Its connectedness to other ideas, and how far they can take you; using Latour's vocabulary, we would say that "powerful" ideas are those with extensive networks. Kay's repeated critique of education and schooling is that most powerful ideas have been either deliberately or inadvertently evacuated from curriculum.

Much of "school math" is not mathematics at all but attempts to train children in various kinds of calculation using patterns and recipes. Mathematics is actually about representing and thinking clearly about ideas. Science goes further: to try to come up with plausible ideas about the universe that are worthwhile thinking clearly about. (Kay 2004b)

It is not, in this day and age, terribly difficult to come up with examples of how the school system is failing, and it is not my agenda to do so here. Rather, I want to underscore a simple idea or principle: that powerful ideas of the kind described here—whether they be the geometry of triangles, the theory of evolution, the idea of liberal democracy, or the dynamic modelling of population dynamics—are precisely the responsibility of education. Powerful ideas so defined imply particular ideas of literacy. To be fluent with trigonometry, one must be literate with respect to diagrams, lines, points, angles, algebra, and so on—to read them and to construct them oneself. To fully grasp the idea of liberal democracy, one must be able to read, analyse, and critique a number of document forms, and be able to construct such arguments oneself. The kinds of powerful ideas emergent in the digital age—simulations, systems dynamics, multimedia, collaborative works, and so on—mean one must become conversant with the genres and methods of elaborating such constructs, and be able to create and manipulate them oneself (see Lemke 2001). Are these not the special province of education? Is not the ideal of public education, mass education, that these ideas be accessible to all, and that the very structure and health of democratic society depends upon their reasonably widespread fluency (Dewey 1916; Kellner 2006)? And furthermore, as my emphasis on the ability of the literate to themselves construct and actively participate in each of these forms hopefully suggests, is it not the responsibility of education to nurture not the givenness of these ideas, but our own role in their ongoing construction and definition? Is this not what democratic citizenship—as framed by Dewey—is all about?

The Politics of Software Revisited

Alan Kay's conception of powerful ideas is focused primarily on science and math. But it is important not to limit the scope of the argument to particular curricular areas. Indeed, Kay has argued that education in the United States has not done justice to much of the print culture of the past three or four centuries.
Mathematics and science are but emblematic of this larger interest in what literacy really means.

Most of the important ideas in our civilization are not on the Internet yet, but they are already available in free public libraries. The haves and have-nots during these coming years will not be the people who do or do not have access to the Internet. The real gap will be between those who are discerning about the information they access and those who are not—between those who use technology to help them understand powerful ideas and those who do not. (Kay 1997, p. 19)

I wish at this point to move this discussion beyond Kay's own framing, and to take a page from Richard Stallman, who in the 1980s founded the Free Software Foundation and who arguably remains the most important thinker of the Free and Open Source Software (FOSS) movement.12 In an interview given in 2001, Stallman commented:

I've dedicated 17 years of my life to working on free software and allied issues. I didn't do this because I think it's the most important political issue in the world. I did it because it was the area where I saw I had to use my skills to do a lot of good. But what's happened is that the general issues of politics have evolved, and the biggest political issue in the world today is resisting the tendency to give business power over the public and governments. I see free software and the allied questions for other kinds of information that I've been discussing today as one part of that major issue. (Stallman 2001)

12. Stallman would balk at being identified with "open source" software, since he has remained vehement that the issue is one of "freedom." However, I use the more inclusive term here to refer to the cultural movement as a whole.

Stallman's focus here is very different from Kay's. He is not talking about education, and he is not concerned with the status of mathematics or science per se. What Stallman is alluding to is the issue of the ownership of knowledge, and from where he stands, this issue is not an academic or philosophical one, but a very real, concrete problem. As our daily work and lives are increasingly conducted and mediated via digital media—and I don't think anyone who has not been asleep for the past decade would downplay this—software is more and more the 'material' substrate of our culture. Stallman led a charge, beginning in a very marginal way in the 1980s and growing to enormous importance in the late 1990s, about the relationship between software (both its development and its distribution) and market capitalism. Stallman was appalled at the extent to which his work as a programmer came to be at the behest of and within the strictures of corporate capitalism. His response to what he defined as a "stark moral choice" was to found the Free Software Foundation and establish the GNU project in order to construct a computing platform on terms not governed by the increasingly mainstream terms of copyright exploitation and restrictive licensing. At the time, the only people who cared about this were Unix programmers. But, with the popular rise of the Internet and World-Wide Web in the 1990s—and the translation of Stallman's GNU project into the GNU/Linux project and the wider FOSS movement—Stallman's concerns and his response have intersected with mainstream thinking about information technology.
But, to date, most thinking about Free and Open Source Software has focused on the mechanical aspects: that it is possible to develop and distribute software without demanding payment for it; that software developed under such conditions is in some cases of higher quality, due to the operation of something like peer review in the development community; that the existence of a commonwealth of software code increases the benefit for all. What has so far been downplayed—indeed, the emergence of the "Open Source" branding is precisely such a move—is the political element of Stallman's argument: that the important thing is not the price of software, nor the formula for quality, nor the efficiency of the development community. What is truly important, Stallman argues, is the freedom we enjoy in being able to develop, use, share, distribute, and build upon the fruits of one another's efforts. Stallman's is a political argument first, not an engineering or a marketing argument.

How do we as a society ensure freedom? The mechanical conception of FOSS places the emphasis on a particular application of copyright law; the GNU General Public License, among others, seeks to ensure that software released under its terms remains free in all the above-mentioned ways. But the license is not the end of the argument; the license—and the laws it draws upon—are expressions of a particular culture. So the deeper and more difficult question is how do we as a culture ensure freedom? Legal scholar Lawrence Lessig has perhaps tackled this aspect of the question more comprehensively than most. In perhaps his best-known work, a lecture entitled "Free Culture" (2002), Lessig characterizes the current fight over copyright laws—that which Stallman calls "the biggest political issue in the world today"—as the struggle for who owns and controls creative expression in our society. In his lecture, Lessig repeats this "refrain":

• Creativity and innovation always builds on the past.
• The past always tries to control the creativity that builds upon it.
• Free societies enable the future by limiting this power of the past.
• Ours is less and less a free society. (Lessig 2002a)

How does a society ensure freedom? By valuing it, in the first place—and in the second place, by knowing what it means to be free. In a technologically mediated society—and all societies are technologically mediated—this means attending to the dynamics of expression, and the modes of mediation. In the era of print, liberal modernity enshrined the idea of a "free press," and the extent to which our nations and societies have taken this idea seriously (vs. paying lip service to it) has been an ongoing debate of crucial importance. In the era of print, we are fortunate that we have at least been aware of this debate, even when we are most cynical about the reality of a free press. It is at the very least an open political question, and thus a cultural touchstone or reference point to which most "educated" people can allude.

But we are today rapidly altering our media landscape. And as we do so, as we move into unknown waters, we cling to our existing constructs and ideals as though they were lifejackets.
We know deep down, however, that the political constructs of a prior age are not likely to be quite apt in a new era, in which our culture operates by different means, in which our thoughts and expressions and interactions are mediated by different systems. The current controversy over copyright law and copyright reform should at least highlight this issue. If, as I suggested above, in this new digital age software is more and more the substrate of our culture, the material of its expression, and thus the site of political articulation and struggle, then software freedom is indeed the issue of our time, as both Stallman and Lessig would say. To this I must add: it is not going to be sufficient to ensure freedom in software merely by releasing it under a particular license, written according to the terms of copyright law.13 What is going to be required is a change in culture: in how we understand software, and in how we understand freedom.

13. As Lessig and numerous others point out, copyright law is an artifact of the advent of print, aimed specifically at circumscribing the political power of those who controlled the presses.

In Chapter 6, I made a case for why I believe that the FOSS movement itself is inadequate for bringing about such changes at a society-wide level. The FOSS movement has succeeded in changing the cultural context of software for those already in the know, or those who for other reasons are prepared to take the time and effort to understand the issues it addresses. But, by virtue of the intellectual and cultural heritage FOSS draws from, it is unlikely to make headway more generally in our society. To the contrary: in its exclusivity, it actively works against its own mainstreaming. And, at the end of that chapter, I suggested that the tradition represented by Kay's work was perhaps a better candidate, on the grounds that here was a paradigm of software development equally free and open, yet much more pedagogically sound, having its roots and first principles firmly planted in a vision of personal computing, articulated in a peer-to-peer, do-it-yourself, learner-centric, less-is-more aesthetic. Note that these latter are not 'features' of a software product; they are virtues of a computing culture.

Software is political. This is obvious in the mundane sense, as anyone who has ever had to warp their personal practices and expressions into a pre-conceived digital framework can attest. It is also obvious at the level of high-level political-economic analyses of the computing industry or the military-industrial complex (e.g. Edwards 1996). But it is also true, more importantly, at a more profound and subtle level: at the level of culture—of the construction and co-construction of meaning and significance. What should we as individuals expect from digital media? Should we learn to expect it to be malleable, manipulable, combinable, and understandable? Or will we be content to accept pre-packaged, shrinkwrapped capabilities, defined by corporations and marketers? And where will we learn such things? In school, will our children learn the logic and dynamics of digital media, in the sense I outlined in Chapter 3? Will our children learn that they are born into a mediated world more malleable, recombinable, and translatable than any other age has even dreamt of?
Or will they learn in school that sharing is a crime, that Microsoft Office defines the set of possibilities for document communication, and that the limits of cultural expression are basically those set forth by Hollywood? The fact that our education system is woefully underequipped to even begin to answer these questions is nothing new; a generation of critical educators have already elaborated school's impotence and even complicity with structured dominance (e.g. McLaren 1993; Kincheloe & Steinberg 1995; Torres 1998; Apple 2004). I am not about to suggest that the adoption of a technological platform will do anything to address such a large-scale systemic issue, nor to throw my lot in with those who have seen the computer as the "trojan horse" for school reform (Greenberg 2001). I rather appeal to the notion that education is reflective of the culture it serves, and as such I see more promise in the evolution of computing and media cultures online. Even the FOSS movement, for all that I have said in critique of it, at least carries a certain weight, and the political messages emanating from it at least point in a constructive direction.

As for the furtherance of the Dynabook ideal, I hope that by now I have made a case for this being a rather diffuse process, rather than a singular trajectory. I will end with a statement of optimism: that there is enough constructive chaos in the digital world today, enough energetic searching for wisdom and meaning on the part of millions of disorganized (at least by conventional definitions of organization) people, and enough potential for the discovery—or better, the construction—of powerful ideas, that the virtues embodied in the computing culture of the Dynabook vision are as likely to prevail as the opposite, corporatist alternative, which also boasts formidable resources and momentum. Our challenge, then, as educators, practitioners, and philosophers, is to attend to the business of recognizing, appreciating, criticizing, and articulating the good. My sincere hope is that the preceding work is a contribution to that effort.

Bibliography

Aarseth, E. (1997). Cybertext: Perspectives on Ergodic Literature. Baltimore: The Johns Hopkins University Press.
Aarseth, E. (2004). Genre Trouble. Electronic Book Review, 3.
Abelson, H. & Sussman, G. J. (1996). Structure and Interpretation of Computer Programs (Second ed.). Cambridge: MIT Press.
Agalianos, A., Noss, R., & Whitty, G. (2001). Logo in Mainstream Schools: The Struggle over the Soul of an Educational Innovation. British Journal of Sociology of Education, 22(4).
Alexander, C. et al. (1977). A Pattern Language: Towns, Buildings, Construction. New York: Oxford University Press.
Allen-Conn, B. J. & Rose, K. (2003). Powerful Ideas in the Classroom: Using Squeak to Enhance Math and Science Learning. Glendale: Viewpoints Research Institute.
Ambron, S. & Hooper, K. (Eds.). (1990). Learning with Interactive Multimedia: Developing and Using Multimedia Tools in Education. Redmond: Microsoft Press.
Apple, M. W. (2004). Ideology and Curriculum (Third ed.). New York: RoutledgeFalmer.
Apple Computer Inc. (1995). Teaching, Learning and Technology: A Report on 10 Years of ACOT Research. Apple Computer Inc. Retrieved October 5, 2006, from http://images.apple.com/education/k12/leadership/acot/pdf/10yr.pdf
Bardini, T. & Horvath, A. T. (1995). The Social Construction of the Personal Computer User.
Journal of Communication, 45(3).
Baudrillard, J. (1995). Simulacra and Simulation (S. F. Glaser, Trans.). Ann Arbor: University of Michigan Press.
Bazerman, C. (1998). Emerging Perspectives on the Many Dimensions of Scientific Discourse. In J. Martin & R. Veel (Eds.), Reading Science: Critical and Functional Perspectives on Discourses of Science (pp. 15-28). London: Routledge.
Beck, K. (2000). Extreme Programming Explained: Embrace Change. New York: Addison-Wesley.
Berners-Lee, T., Hendler, J., & Lassila, O. (2001, May 17). The Semantic Web. Scientific American. Retrieved October 5, 2005, from http://www.sciam.com/print_version.cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21
Bernstein, R. J. (1983). Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis. Philadelphia: University of Pennsylvania Press.
Bezroukov, N. (1999). Open Source Software Development as a Special Type of Academic Research: Critique of Vulgar Raymondism. First Monday, 4(10).
Bhabha, H. (1994). The Commitment to Theory. In The Location of Culture. London: Routledge.
Bijker, W. E. & Law, J. (Eds.). (1992). Shaping Technology/Building Society: Studies in Sociotechnical Change. Cambridge: MIT Press.
Bijker, W. E., Hughes, T. P., & Pinch, T. J. (Eds.). (1987). The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge: MIT Press.
Bloor, D. (1999). Anti-Latour. Studies in the History and Philosophy of Science, 30(1), 81-112.
Booch, G. (1991). Object Oriented Design with Applications. Redwood City: Benjamin/Cummings.
Boshier, R. & Wilson, M. (1998). Panoptic Variations: Surveillance and Discipline in Web Courses. Paper presented to the Adult Education Research Conference 1998, San Antonio, TX.
Bowers, C. A. (2000). Let Them Eat Data: How Computers Affect Education, Cultural Diversity, and the Prospects of Ecological Sustainability. Athens: University of Georgia Press.
Bowker, G. & Star, S. L. (1999). Sorting Things Out: Classification and its Consequences. Cambridge: MIT Press.
Brand, S. (1972, December). Spacewar: Fanatic Life and Symbolic Death Among the Computer Bums. Rolling Stone.
Brand, S. (1987). The Media Lab: Inventing the Future at MIT. New York: Viking.
Bricmont, J. & Sokal, A. (2001). Remarks on Methodological Relativism and "Antiscience." In J. A. Labinger & H. Collins (Eds.), The One Culture: A Conversation about Science (pp. 179-183). Chicago: University of Chicago Press.
Bromley, H. & Apple, M. (Eds.). (1998). Education/Technology/Power: Educational Computing as a Social Practice. Albany: State University of New York Press.
Brown, J. S. & Duguid, P. (1996). The Social Life of Documents. First Monday, 1(1).
Bruckman, A. S. (1994). Programming for Fun: MUDs as a Context for Collaborative Learning. Paper presented at the National Educational Computing Conference, Boston, MA.
Bruckman, A. S. (1997). MOOSE Crossing. Unpublished doctoral dissertation, Massachusetts Institute of Technology.
Bruner, J. S. (1966). Toward a Theory of Instruction. Cambridge: Harvard University Press.
Bruner, J. S. (1990). Acts of Meaning. Cambridge: Harvard University Press.
Bruner, J. S. (1991). The Narrative Construction of Reality. Critical Inquiry, 18(1), 1-21.
Bruner, J. (1996). The Culture of Education. Cambridge: Harvard University Press.
Bryson, M. & de Castell, S. (1994).
Telling Tales Out of School: Modernist, Critical, and "True Stories" about Educational Computing. Journal of Educational Computing Research, 10(3), 199-221.
Bryson, M. & de Castell, S. (1998). New Technologies and the Cultural Ecology of Primary Schooling: Imagining Teachers as Luddites In/Deed. Educational Policy, 12(5), 542-567.
Burk, J. (1998). The Play's the Thing: Theatricality and the MOO Environment. In C. Haynes & J. R. Holmevik (Eds.), High Wired: On the Design, Use, and Theory of Educational MOOs. Ann Arbor: University of Michigan Press.
Callon, M. (1981). Struggles and Negotiations to Define What is Problematic and What is Not: The Socio-logic of Translation. In K. D. Knorr, R. Krohn, & R. Whitley (Eds.), The Social Process of Scientific Investigation (pp. 197-219). Dordrecht: D. Reidel.
Callon, M. (1986). Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St. Brieuc Bay. In J. Law (Ed.), Power, Action and Belief: A New Sociology of Knowledge? (pp. 196-233). London: Routledge & Kegan Paul.
Callon, M. (1991). Techno-economic Networks and Irreversibility. In J. Law (Ed.), A Sociology of Monsters: Essays on Power, Technology, and Domination. London: Routledge.
Callon, M. & Latour, B. (1981). Unscrewing the Big Leviathan: How Actors Macro-structure Reality and How Sociologists Help Them to Do So. In K. Knorr-Cetina & A. V. Cicourel (Eds.), Advances in Social Theory and Methodology: Toward an Integration of Micro- and Macro-sociologies. Boston: Routledge & Kegan Paul.
Canadian Internet Project. (2005). Canada Online. Retrieved February 5, 2006, from http://www.cipic.ca
Carr, D. (1998). Narrative and the Real World: An Argument for Continuity. In B. Fay, P. Pomper, & R. T. Vann (Eds.), History and Theory: Contemporary Readings (pp. 137-152). Oxford: Blackwell Publishers.
de Castell, S., Bryson, M., & Jenson, J. (2002). Object Lessons: Towards an Educational Theory of Technology. First Monday, 7(1). Retrieved October 5, 2005, from http://www.firstmonday.org/issues/issue7_1/castell
Chakraborty, A., Graebner, R., & Stocky, T. (1999). Logo: A Project History. Unpublished paper for MIT's 6.933 course.
Chesley, H. et al. (1994). End User Programming: Discussion of Fifteen Ideals. RN-94-13. Apple Computer, Inc.
Cohen, R. (1986). History and Genre. New Literary History, 17, 203-218.
Cole, M. (1996). Cultural Psychology: A Once and Future Discipline. Cambridge: Harvard University Press.
Collins, H. & Yearley, S. (1992). Epistemological Chicken. In A. Pickering (Ed.), Science as Practice and Culture (pp. 301-326). Chicago: University of Chicago Press.
Cringely, R. X. (1996). Triumph of the Nerds, Part 3. Television broadcast, PBS.
Cuban, L. (1986). Teachers and Machines: The Classroom Use of Technology Since 1920. New York: Teachers College Press.
Cuban, L. (1993). Computers Meet Classroom: Classroom Wins. Teachers College Record, 95(2), 185-210.
Cuban, L. (2001). Oversold and Underused: Computers in the Classroom. Cambridge: Harvard University Press.
Curtis, P. (1992). Mudding: Social Phenomena in Text-Based Virtual Realities. Paper presented at the Directions and Implications of Advanced Computing conference. Retrieved October 7, 1996, from ftp://parcftp.xerox.com/pub/MOO/papers/DIAC92.txt
Davy, J. (1985). Mindstorms in the Lamplight. In D. Sloan (Ed.), The Computer in Education: A Critical Perspective. New York: Teachers College Press.
Dewey, J.
(1916). Democracy and Education: An Introduction to the Philosophy of Education. New York: The Macmillan Company.
diSessa, A. A. (2000). Changing Minds: Computers, Learning and Literacy. Cambridge: MIT Press.
Dijkstra, E. W. (1989, December). On the Cruelty of Really Teaching Computer Science. Communications of the ACM, 32(12).
Doyle, M. (1995). An Informatics Curriculum. InTegrate, 19. Retrieved October 5, 2005, from http://acitt.digitalbrain.com/acitt/web/resources/pubs/Integrate%2019/informatics%20curric.htm
Drucker, J. (1995). The Alphabetic Labyrinth: The Letters in History and Imagination. London: Thames & Hudson.
Ducasse, S. (2005). Squeak: Learn Programming with Robots. Berkeley: APress.
Dugan, B. (1994). Simula and Smalltalk: A Social and Political History. Retrieved October 5, 2005, from http://www.cs.washington.edu/homes/dugan/history.html
Edwards, P. N. (1996). The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge: MIT Press.
Ehn, P. (1988). Work-Oriented Design of Computer Artifacts. Stockholm: Arbetslivscentrum.
Engelbart, D. & English, W. (2004). A Research Center for Augmenting Human Intellect. In N. Wardrip-Fruin & N. Montfort (Eds.), The New Media Reader (pp. 93-105). Cambridge: MIT Press. (Original work published 1968)
Feenberg, A. (1999). Questioning Technology. New York: Routledge.
Feenberg, A. (2004). Democratic Rationalization: Technology, Power, and Freedom. In D. M. Kaplan (Ed.), Readings in the Philosophy of Technology (pp. 209-226). Lanham, MD: Rowman and Littlefield.
Fenton, J. & Beck, K. (1989). Playground: An Object Oriented Simulation System with Agent Rules for Children of All Ages. Paper presented at OOPSLA '89.
Fish, S. (1980). Is There a Text in This Class? Cambridge: Harvard University Press.
Franklin, U. M. (1999). The Real World of Technology (Revised ed.). Toronto: Anansi.
Friesen, N. (2001). What are Educational Objects? Interactive Learning Environments, 9(3), 219-230.
Fullan, M. & Hargreaves, A. (1996). What's Worth Fighting for in Your School (Second ed.). New York: Teachers College Press.
Gabriel, R. (1991). The Rise of 'Worse is Better'. In Lisp: Good News, Bad News, How to Win Big. Lucid, Inc. Retrieved October 5, 2005, from http://www.dreamsongs.com/WIB.html
Gadamer, H.-G. (1999). Truth and Method (Second, revised ed.; J. Weinsheimer & D. G. Marshall, Trans.). New York: Continuum. (Originally published in English 1975; in German 1960)
Galas, C. (2001, October). School Squeaking. SqueakNews, 1(4).
Gardner, H. (1985). The Mind's New Science: A History of the Cognitive Revolution. New York: Basic Books.
Gardner, H. (1993). Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books.
Gardner, M. (1970, October). The Fantastic Combinations of John Conway's New Solitaire Game "Life." Scientific American, 223, 120-123.
Garfinkel, H. (1967). Studies in Ethnomethodology. Englewood Cliffs: Prentice-Hall.
Geertz, C. (1973). The Interpretation of Cultures. New York: Basic Books.
Gillespie, T. (2002, Fall). Hard Fun... Squeak! Technos Quarterly, 11(3).
Goldberg, A. (1979). Educational Uses of a Dynabook. Computers & Education, 3, 247-266.
Goldberg, A. (1984). Smalltalk-80: The Interactive Programming Environment. Reading, MA: Addison-Wesley.
Goldberg, A. (Ed.). (1988).
Goldberg, A. (1998). The Community of Smalltalk. In P. H. Salus (Ed.), Handbook of Programming Languages, Volume 1: Object-Oriented Programming Languages (pp. 51-94). New York: MacMillan Technical Publishing.
Goldberg, A. & Kay, A. C. (Eds.). (1976). Smalltalk-72 Instruction Manual. Xerox Palo Alto Research Center.
Goldberg, A. & Robson, D. (1983). Smalltalk-80: The Language and its Implementation. Reading, MA: Addison-Wesley.
Goldberg, A. & Ross, J. (1981, August). Is the Smalltalk-80 System for Children? BYTE, 6(8), 348-368.
Goldberg, A., Abell, S. T., & Leibs, D. (1997). The Learning Works Development and Delivery Frameworks. Communications of the ACM, 40(10), 78-81.
Goldfarb, C. & Rubinsky, Y. (1991). The SGML Handbook. Oxford: Oxford University Press.
Goldman-Segall, R. (1995). Configurational Validity: A Proposal for Analyzing Ethnographic Multimedia Narratives. Journal of Educational Multimedia and Hypermedia, 4(2/3), 163-182.
Goldman-Segall, R. (1998). Points of Viewing Children's Thinking. Hillsdale, NJ: Lawrence Erlbaum and Associates.
Goldman-Segall, R. & Maxwell, J. W. (2003). Computers, the Internet, and New Media for Learning. In W. M. Reynolds & G. E. Miller (Eds.), Handbook of Psychology, vol. 7: Educational Psychology (pp. 393-427). New York: John Wiley & Sons.
Graham, P. (1993). On Lisp: Advanced Techniques for Common Lisp. New York: Prentice Hall.
Graham, P. (2001). The Roots of Lisp. Retrieved October 5, 2005, from http://www.paulgraham.com/rootsoflisp.html
Graves, W. I. (1995). Ideologies of Computerization. In M. A. Shields (Ed.), Work and Technology in Higher Education: The Social Construction of Academic Computing (pp. 65-87). Hillsdale, NJ: Lawrence Erlbaum Associates.
Graves, W. H. (1999). The Instructional Management Systems Cooperative: Converting Random Acts of Progress into Global Progress. Educom Review, 34(6).
Greenberg, D. (2001). The Trojan Horse of Education. Technos Quarterly, 10(1).
Grint, K. & Woolgar, S. (1997). The Machine at Work: Technology, Work, and Organization. Cambridge: Polity Press.
Grumet, M. (1988). Bitter Milk: Women and Teaching. Amherst: University of Massachusetts Press.
Groff, D. & Steele, K. (2004). A Biased History of Interactive Design. Smackerel. Retrieved January 5, 2006, from http://www.smackerel.net/smackerel_home.html
Guzdial, M. (1997). OOPSLA 97 Notes. Retrieved October 5, 2005, from http://www-static.cc.gatech.edu/fac/mark.guzdial/squeak/oopsla.html
Guzdial, M. (2000). Squeak: Object Oriented Design with Multimedia Applications. New York: Prentice Hall.
Guzdial, M. & Rose, K. M. (2001). Squeak: Open Personal Computing and Multimedia. New York: Prentice Hall.
Habermas, J. (1984). The Theory of Communicative Action, Volume 1: Reason and the Rationalization of Society (T. McCarthy, Trans.). Boston: Beacon Press.
Hadamard, J. (1954). An Essay on the Psychology of Invention in the Mathematical Field. New York: Dover.
Hamilton, A., Madison, J., & Jay, J. (1987). The Federalist Papers (I. Kramnick, Ed.). New York: Penguin.
Harasim, L. (1990). Online Education: Perspectives on a New Environment. New York: Praeger.
Haraway, D. J. (1991). Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge.
Haraway, D. J. (1997). Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™. New York: Routledge.
Harel, I. & Papert, S. (Eds.). (1991). Constructionism. Norwood, NJ: Ablex Publishing.
Harvey, B. (1991). Symbolic Programming vs. the A.P. Curriculum. Cambridge, MA: The Logo Foundation. Retrieved October 5, 2005, from http://el.media.mit.edu/Logo-foundation/pubs/papers/symbolic_vs_ap.html
Havelock, E. A. (1980). The Coming of Literate Communication to Western Culture. Journal of Communication, 30(1), 90-95.
Hayles, N. K. (2005). My Mother Was a Computer: Digital Subjects and Literary Texts. Chicago and London: University of Chicago Press.
Haynes, C. & Holmevik, J. R. (Eds.). (1998). High Wired: On the Design, Use, and Theory of Educational MOOs. Ann Arbor: University of Michigan Press.
Heidegger, M. (1993). The Question Concerning Technology (W. Lovitt, Trans.). In D. F. Krell (Ed.), Basic Writings (Revised and Expanded ed.). San Francisco: HarperCollins. (Originally published 1953)
Hemetsberger, A. & Reinhardt, C. (2004). Sharing and Creating Knowledge in Open-Source Communities: The Case of KDE. Paper presented at the Fifth European Conference on Organizational Knowledge, Learning, and Capabilities, Innsbruck 2004.
Hiltz, S. R. & Turoff, M. (2000). Breaking Down the Walls of the Classroom: A Tutorial on Distance Learning. Paper presented at the AIWoRC '00 Conference & Expo on Virtual Organizations.
Hiltzik, M. A. (1999). Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age. New York: HarperBusiness.
Hobart, M. E. & Schiffman, Z. S. (1998). Information Ages: Literacy, Numeracy, and the Computer Revolution. Baltimore: Johns Hopkins Press.
Horn, B. (n.d.). Joining the Mac Group. Retrieved February 5, 2006, from http://folklore.org/StoryView.py?project=Macintosh&story=Joining_the_Mac_Group.txt
Huhtamo, E. (1995). From Kaleidoscomaniac to Cybernerd: Towards an Archeology of the Media. Retrieved January 5, 2006, from http://www.debalie.nl/artikel.jsp?articleid=10104
IMS Project. (2002). Instructional Management System Global Learning Consortium. Project website. Retrieved October 7, 2004, from http://www.imsproject.org/
Ingalls, D. H. H. (1978). The Smalltalk-76 Programming System Design and Implementation. Paper presented to the ACM Symposium on Principles of Programming Languages.
Ingalls, D. H. H. (1981, August). Design Principles Behind Smalltalk. BYTE, 6(8), 286-298.
Ingalls, D. H. H. (1983). The Evolution of the Smalltalk Virtual Machine. In G. Krasner (Ed.), Smalltalk-80: Bits of History, Words of Advice (pp. 9-28). Reading, MA: Addison-Wesley.
Ingalls, D. H. H. (1996). The Birth of Squeak. Message posted to the Usenet group comp.lang.smalltalk. Archived at http://minnow.cc.gatech.edu/squeak/1985
Ingalls, D. H. H. (1997). Where Squeak Was Headed. Retrieved October 5, 2005, from http://www.squeak.org/about/headed-99-98.html
Ingalls, D. H. H. (2001, September). Squeak's 5th Birthday: An E-View with Dan Ingalls. SqueakNews, 1(3).
Ingalls, D. H. H., Kaehler, T., Maloney, J., Wallace, S., & Kay, A. (1997). Back to the Future: The Story of Squeak, A Practical Smalltalk Written in Itself. Paper presented at OOPSLA '97. Retrieved October 5, 2005, from http://users.ipa.net/~dwighth/squeak/oopsla_squeak.html
Jauss, H. R. (1982). Toward an Aesthetic of Reception (T. Bahti, Trans.). Minneapolis: University of Minnesota Press.
Kay, A. C. (1968). The Reactive Engine. Unpublished doctoral dissertation, University of Utah.
Kay, A. C. (1972). A Personal Computer for Children of All Ages. Paper presented at the ACM National Conference, Boston.
Kay, A. C. (1977, September). Microelectronics and the Personal Computer. Scientific American, 237, 230-244.
Kay, A. C. (1978, March). Ideas for Novice Programming in a Personal Computing System. Paper presented to the Infotech State-of-the-Art Conference on User-friendly Systems, London.
Kay, A. C. (1979). Programming Your Own Computer. In Science Year 1979 (pp. 183-195). World Book Encyclopedia.
Kay, A. C. (1984, September). Computer Software. Scientific American, 251(3), 53-59.
Kay, A. C. (1987). Doing with Images Makes Symbols. Video presentation. University Video Communications.
Kay, A. C. (1990). User Interface: A Personal View. In B. Laurel & S. J. Mountford (Eds.), The Art of Human-Computer Interface Design (pp. 191-207). Reading, MA: Addison-Wesley.
Kay, A. C. (1991, September). Computers, Networks and Education. Scientific American, 265, 138-148.
Kay, A. C. (1994, September/October). Which Way to the Future? Earthwatch: The Journal of Earthwatch Institute, 13(5), 14-18.
Kay, A. C. (1995, October 12). Powerful Ideas Need Love Too! Written remarks to the Joint Hearing on Educational Technology in the 21st Century, Science Committee and the Economic and Educational Opportunities Committee, US House of Representatives, Washington, DC.
Kay, A. C. (1996a). The Early History of Smalltalk. In T. J. Bergin & R. G. Gibson (Eds.), History of Programming Languages II (pp. 511-578). New York: ACM Press.
Kay, A. C. (1996b, Fall). Alan Kay Lecture. Multimedia Pioneers Series. San Francisco State University.
Kay, A. C. (1996c, July/August). Revealing the Elephant: The Use and Misuse of Computers in Education. Educom Review, 31(4).
Kay, A. C. (1997, July). Technology and Powerful Ideas: The real value of computers is in helping us understand the powerful ideas that force us to change our ways of thinking. American School Board Journal, 97, 16-19.
Kay, A. C. (1998a, December 11). Configuring a Dynabook. Message posted to the Squeak-dev mailing list.
Kay, A. C. (1998b, October 10). Re: Prototypes vs. Classes. Message posted to the Squeak-dev mailing list. Archived at http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html
Kay, A. C. (1999, March/April). Software Design, the Future of Programming and the Art of Learning. Educom Review, 34(2).
Kay, A. C. (2000a). Dynabooks: Past, Present, and Future. Library Quarterly, 70(3), 385-395.
Kay, A. C. (2000b). Software: Art, Engineering, Mathematics, or Science? In M. Guzdial (Ed.), Squeak: Object-oriented Design with Multimedia Applications (pp. xi-xiii). Upper Saddle River: Prentice Hall.
Kay, A. C. (2001, October 18). Lots of concurrency. Message posted to the squeak-dev mailing list. Archived at http://lists.squeakfoundation.org/pipermail/squeak-dev/2001-October/029823.html
Kay, A. C. (2002a, March). The Computer Revolution Hasn't Happened Yet. Lecture given to the Digital Cultures Project: Interfacing Knowledge: New Paradigms for Computing in the Humanities, Arts and Social Sciences. UC Santa Barbara.
Kay, A. C. (2002b). The Dynabook Revisited: A Conversation with Alan Kay. The Book and the Computer. Retrieved April 5, 2005, from http://www.honco.net/os/kay.html
Kay, A. C. (2003a). Daddy, Are We There Yet? Presentation to the O'Reilly & Associates Emerging Technologies Conference. Retrieved October 5, 2005, from http://www.lisarein.com/videos/oreilly/etech2003/alankay/tour.html
Kay, A. C. (2003b, March). Etoys history. Message posted to the squeak-dev mailing list.
Kay, A. C. (2004a). The Power of the Context. Remarks upon being awarded the Charles Stark Draper Prize of the National Academy of Engineering, February 24, 2004.
Kay, A. C. (2004b). The Last Time You Thought Clearly Was. Retrieved January 5, 2006, from http://squeakland.org/school/HTML/essays/habit_of_mind.htm
Kay, A. C. (2004c, October). ACM Turing Lecture: Introductions to Computing Should Be Child's Play. Lecture given at OOPSLA '04, Vancouver.
Kay, A. C. & Feldman, S. (2004, December/January). A Conversation with Alan Kay. ACM Queue, 2(9).
Kay, A. C. & Goldberg, A. (1976). Personal Dynamic Media. Xerox Corporation.
Kelleher, C. & Pausch, R. (2005, June). Lowering the Barriers to Programming: A Taxonomy of Programming Environments and Languages for Novice Programmers. ACM Computing Surveys, 37(2), 83-137.
Kellner, D. (2006). Technological Transformation, Multiple Literacies, and the Re-visioning of Education. In J. Weiss, J. Nolan, J. Hunsinger, & P. Trifonas (Eds.), The International Handbook of Virtual Learning Environments (pp. 241-268). Dordrecht: Springer.
Kimber, E. (1998). Practical Hypermedia: An Introduction to HyTime (Review ed.). Retrieved October 7, 1999, from http://www.drmacro.com/bookrev/practhyt/practicalhypermedia.html
Kincheloe, J. & Steinberg, S. R. (Eds.). (1992). Thirteen Questions: Reframing Education's Conversation. New York: Peter Lang.
Kittler, F. (1995, October). There is No Software. CTheory.net. Retrieved January 5, 2006, from http://www.ctheory.net/articles.aspx?id=74
Klassen, P., Maxwell, J. W., & Norman, S. (1999). Structured Information and Course Development: An SGML/XML Framework for Open Learning. Paper presented at ED-MEDIA: World Conference on Educational Multimedia, Hypermedia and Telecommunications, Seattle, WA.
Koestler, A. (1964). The Act of Creation. New York: Macmillan.
Koschmann, T. (1996). Paradigm Shifts and Instructional Technology: An Introduction. In T. Koschmann (Ed.), CSCL: Theory and Practice of an Emerging Paradigm. Mahwah, NJ: Lawrence Erlbaum Associates.
Krasner, G. (Ed.). (1983). Smalltalk-80: Bits of History, Words of Advice. Reading, MA: Addison-Wesley.
Kuhn, T. S. (1996). The Structure of Scientific Revolutions (3rd ed.). Chicago: University of Chicago Press.
Landow, G. P. (1992). Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins University Press.
Lanier, J. (2006). The Gory Antigora: Illusions of Capitalism and Computers. CATO Unbound, January 2006. Retrieved August 5, 2006, from http://www.cato-unbound.org/2006/01/09/jaron-lanier/the-gory-antigora/
Laszlo, E. (1972). The Systems View of the World. New York: George Braziller.
Lather, P. (1991). Issues of Validity in Openly Ideological Research: Between a Rock and a Soft Place. Interchange, 17(4), 63-84.
Latour, B. (1987). Science in Action. Cambridge: Harvard University Press.
Latour, B. (1992). Where are the Missing Masses? The Sociology of a Few Mundane Artifacts. In W. E. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociotechnical Change. Cambridge: MIT Press.
Latour, B. (1993). We Have Never Been Modern (C. Porter, Trans.). Cambridge: Harvard University Press.
Latour, B. (1996). Aramis, or the Love of Technology. Cambridge: Harvard University Press.
Latour, B. (1999). Pandora's Hope: Essays on the Reality of Science Studies. Cambridge: Harvard University Press.
Latour, B. (2003). The Promises of Constructivism. In D. Ihde & E. Selinger (Eds.), Chasing Technoscience: Matrix for Materiality. Bloomington: Indiana University Press.
Latour, B. & Woolgar, S. (1986). Laboratory Life: The Construction of Scientific Facts (Second ed.). Princeton: Princeton University Press. (Original edition 1979)
Laurel, B. & Mountford, S. J. (Eds.). (1990). The Art of Human-Computer Interface Design. Reading, MA: Addison-Wesley.
Laurel, B. (1993). Computers as Theatre. Reading, MA: Addison-Wesley.
Lave, J. (1988). Cognition in Practice: Mind, Mathematics, and Culture in Everyday Life. Cambridge: Cambridge University Press.
Lave, J. & Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.
Law, J. (1992). Notes on the Theory of the Actor Network: Ordering, Strategy and Heterogeneity. Centre for Science Studies, Lancaster University. Retrieved October 5, 2005, from http://www.comp.lancs.ac.uk/sociology/papers/Law-Notes-on-ANT.pdf
Lemke, J. (2001). Towards a Theory of Traversals. Retrieved January 30, 2006, from http://academic.brooklyn.cuny.edu/education/jlemke/papers/traversals/traversal-theory.htm
Lessig, L. (1999). Code: And Other Laws of Cyberspace. New York: Basic Books.
Lessig, L. (2002a). Free Culture. Retrieved February 5, 2006, from http://randomfoo.net/oscon/2002/lessig/
Lessig, L. (2002b). The Future of Ideas: The Fate of the Commons in a Connected World. New York: Vintage Books.
Licklider, J. C. R. (1960, March). Man-Computer Symbiosis. IRE Transactions, HFE-1, 4-11.
Lombardi, M. M. (2004, April). Croquet Anyone? Designing a More Responsive Online Learning Environment. Teaching With Technology Today, 10(5).
Lowry, M. (1979). The World of Aldus Manutius: Business and Scholarship in Renaissance Venice. Ithaca: Cornell University Press.
Lyman, P. (1995). Is Using a Computer Like Driving a Car, Reading a Book, or Solving a Problem? The Computer as Machine, Text, and Culture. In M. A. Shields (Ed.), Work and Technology in Higher Education: The Social Construction of Academic Computing (pp. 19-36). Hillsdale, NJ: Lawrence Erlbaum Associates.
MacIntyre, A. (1984). After Virtue (Second ed.). Notre Dame: University of Notre Dame Press.
Makkreel, R. A. (2004). An Ethically Responsive Hermeneutics of History. In D. Carr, T. R. Flynn, & R. A. Makkreel (Eds.), The Ethics of History. Evanston: Northwestern University Press.
Maloney, J. (2001, August). Interview with John Maloney Part II: The Apple Days. SqueakNews, 1(2).
Maloney, J., Burd, L., Kafai, Y., Rusk, N., Silverman, B., & Resnick, M. (2004). Scratch: A Sneak Preview. Paper presented at the Second International Conference on Creating, Connecting, and Collaborating through Computing, Kyoto, Japan. Retrieved February 5, 2006, from http://llk.media.mit.edu/projects/scratch/ScratchSneakPreview.pdf
Manovich, L. (2003). New Media from Borges to HTML. In N. Wardrip-Fruin & N. Montfort (Eds.), The New Media Reader (pp. 13-25). Cambridge: MIT Press.
Margolis, J. & Fisher, A. (2002). Unlocking the Clubhouse: Women in Computing. Cambridge: MIT Press.
Marion, A. (1993). Playground Paper: Reviewing an experiment in designing a computer programming environment for children within an elementary school. Unpublished report: Apple Computer's Advanced Technology Group.
Markoff, J. (2005). What the Dormouse Said: How the 60s Counterculture Shaped the Personal Computer Industry. New York: Viking.
Maxwell, J. W. (1996). House of Words: Designing Text and Community in Multi-User Object-Oriented Environments. Unpublished masters thesis, Simon Fraser University.
McCarthy, J. (1960, April). Recursive Functions of Symbolic Expressions and Their Computation by Machine. Communications of the ACM, 3(4).
McCarthy, J. (1981). The History of LISP. In R. L. Wexelblat (Ed.), History of Programming Languages. New York: Academic Press.
McCullough, M. (1998). Abstracting Craft: The Practiced Digital Hand. Cambridge: MIT Press.
McLaren, P. (1993). Schooling as a Ritual Performance: Towards a Political Economy of Educational Symbols and Gestures (Second ed.). New York: Routledge.
McLuhan, M. (1962). The Gutenberg Galaxy: The Making of Typographic Man. New York: Routledge & Kegan Paul.
McLuhan, M. (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.
Meeker, M. (2005). Internet Trends. Retrieved February 5, 2006, from http://www.morganstanley.com/institutional/techresearch/pdfs/GSB112005.pdf
Menzies, H. (1989). Fast Forward and Out of Control: How Technology is Changing Your Life. Toronto: Macmillan.
Menzies, H. (1999). Digital Networks: The Medium of Globalization, and the Message. Canadian Journal of Communication, 24(4).
Miller, C. R. (1994a). Genre as Social Action. In A. Freedman & P. Medway (Eds.), Genre and the New Rhetoric (pp. 23-42). London: Taylor and Francis. (Originally published 1984)
Miller, C. R. (1994b). Rhetorical Community: The Cultural Basis of Genre. In A. Freedman & P. Medway (Eds.), Genre and the New Rhetoric (pp. 67-78). London: Taylor and Francis.
Miller, J. A. (2004). Promoting Computer Literacy Through Programming Python. Unpublished doctoral dissertation, University of Michigan.
Montessori, M. (1972). The Discovery of the Child (M. J. Costelloe, Trans.). New York: Ballantine Books.
Moglen, E. (2000, Winter). The Encryption Wars: An Interview with Jay Worthington. Cabinet, 1. Retrieved June 6, 2005, from http://www.cabinetmagazine.org/issues/1/i_moglen_1.php
Moglen, E. (2003, June). Freeing the Mind: Free Software and the Death of Proprietary Culture. Paper presented at the University of Maine Law School's Fourth Annual Technology and Law Conference.
Muller-Prove, M. (2002). Vision and Reality of Hypertext and Graphical User Interfaces. Unpublished masters thesis, Universität Hamburg.
National Research Council. (1999). Funding a Revolution: Government Support for Computing Research. Washington, DC: National Academy Press.
New London Group. (1996). A Pedagogy of Multiliteracies: Designing Social Futures. Harvard Educational Review, 66(1), 60-92.
Noble, D. (1999). Digital Diploma Mills Part IV: Rehearsal for the Revolution. Retrieved October 7, 2003, from http://communication.ucsd.edu/dl/ddm4.html
Norman, A. P. (1998). Telling It Like It Was: Historical Narratives On Their Own Terms. In B. Fay, P. Pomper, & R. T. Vann (Eds.), History and Theory: Contemporary Readings (pp. 119-135). Oxford: Blackwell Publishers.
Noss, R. & Hoyles, C. (1996). Windows on Mathematical Meanings. Kluwer.
Ong, W. J. (1995). Orality and Literacy: The Technologizing of the Word. London and New York: Routledge.
Oppenheimer, T. (1997, July 1). The Computer Delusion. The Atlantic Monthly, 280, 45-62.
O'Reilly, T. (1999). Hardware, Software, and Infoware. In C. DiBona, S. Ockman, & M. Stone (Eds.), Open Sources: Voices from the Open Source Revolution (First ed.). Sebastopol: O'Reilly & Associates.
Pacey, A. (1983). The Culture of Technology. Oxford: Basil Blackwell.
Papert, S. (1980a). Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.
Papert, S. (1980b). Teaching Children Thinking. In R. Taylor (Ed.), The Computer in the School: Tutor, Tool, Tutee (pp. 160-176). New York: Teachers College Press. (Originally published 1972)
Papert, S. (1987). Information Technology and Education: Computer Criticism vs. Technocentric Thinking. Educational Researcher, 16(1), 22-30.
Papert, S. (1991). Situating Constructionism. In I. Harel & S. Papert (Eds.), Constructionism. Norwood, NJ: Ablex Publishing.
Papert, S. (1992). The Children's Machine. New York: BasicBooks.
Papert, S. (1999, March 29). Papert on Piaget. Time, p. 105. Retrieved January 5, 2006, from http://www.papert.org/articles/Papertonpiaget.html
Pea, R. & Sheingold, K. (Eds.). (1987). Mirrors of Minds. Norwood, NJ: Ablex.
Phillips, M. S. (2000). Society and Sentiment: Genres of Historical Writing in Britain, 1740-1820. Princeton: Princeton University Press.
Postman, N. (1970). The Reformed English Curriculum. In A. Eurich (Ed.), High School 1980: The Shape of the Future in American Secondary Education. New York: Pitman.
Postman, N. (1986). Amusing Ourselves to Death: Public Discourse in the Age of Show Business. New York: Penguin Books.
Prigogine, I. & Stengers, I. (1985). Order out of Chaos: Man's New Dialogue with Nature. London: Flamingo.
Raymond, E. S. (1999). The Cathedral and the Bazaar, version 1.46. Retrieved February 7, 2004, from http://tuxedo.org/~esr/writings/cathedral-bazaar/
Raymond, E. S. (2003). The Art of Unix Programming. New York: Addison-Wesley.
Rheingold, H. (1985). Tools for Thought: The People and Ideas Behind the Next Computer Revolution. Englewood Cliffs, NJ: Prentice-Hall.
Ricoeur, P. (1991a). The Model of the Text: Meaningful Action Considered as Text. In J. B. Thompson & K. Blamey (Trans.), From Text to Action: Essays in Hermeneutics, II. Evanston, IL: Northwestern University Press. (Originally published 1971)
Ricoeur, P. (1991b). The Hermeneutical Function of Distanciation. In J. B. Thompson & K. Blamey (Trans.), From Text to Action: Essays in Hermeneutics, II. Evanston, IL: Northwestern University Press.
Rose, K. (2001, October). An In-Depth Interview with Kim Rose. SqueakNews, 1(4).
Rose, K. (2001, November). An In-Depth Interview with Kim Rose, Pt 2. SqueakNews, 1(5).
Rose, E. (2003). User Error: Resisting Computer Culture. Toronto: Between the Lines.
SRI International. (1995). The Los Angeles Open Charter School. U.S. Department of Education. Retrieved March 8, 2005, from http://www.ed.gov/pubs/EdReformStudies/EdTech/opencharter.html
Saltzer, J. H., Reed, D. P., & Clark, D. D. (1984). End-to-end Arguments in System Design. ACM Transactions on Computer Systems, 2(4), 277-288.
Salus, P. H. (1994). A Quarter Century of Unix. Reading, MA: Addison-Wesley.
Schoch, J. F. (1979). An Overview of the Programming Language Smalltalk-72. ACM SIGPLAN Notices, 14(9).
Scollon, R. (2001). Mediated Discourse: The Nexus of Practice. London: Routledge.
Shields, M. A. (Ed.). (1995). Work and Technology in Higher Education: The Social Construction of Academic Computing. Hillsdale, NJ: Lawrence Erlbaum Associates.
Sloan, D. (Ed.). (1985). The Computer in Education: A Critical Perspective. New York: Teachers College Press.
Smith, D. C. (1993). Pygmalion: An Executable Electronic Blackboard. In A. Cypher (Ed.), Watch What I Do: Programming by Demonstration. Cambridge: MIT Press.
Smith, D. K. & Alexander, R. C. (1988). Fumbling the Future: How Xerox Invented, then Ignored, the First Personal Computer. New York: Morrow.
Solomon, C. (1986). Computer Environments for Children: A Reflection on Theories of Learning and Education. Cambridge: MIT Press.
Stallman, R. (1985). The GNU Manifesto. Dr. Dobb's Journal, 10(3), 30-35.
Stallman, R. M. (1998). The GNU Operating System and the Free Software Movement. In C. DiBona, S. Ockman, & M. Stone (Eds.), Open Sources: Voices from the Open Source Revolution. Sebastopol: O'Reilly and Associates.
Stallman, R. M. (2001). Copyright and Globalization in the Age of Computer Networks. MIT Communications Forum. Retrieved February 5, 2006, from http://web.mit.edu/m-i-t/forums/copyright/transcript.html
Stallman, R. M. (2003). Software Patents: Obstacles to Software Development. Talk presented at the University of Cambridge Computer Laboratory. Retrieved October 5, 2005, from http://www.cl.cam.ac.uk/~mgk25/stallman-patents.html
Star, S. L. (1999). The Ethnography of Infrastructure. American Behavioral Scientist, 43(3), 377-391.
Statistics Canada. (2004). Connectivity and Learning in Canada's Schools. Summarized report. Retrieved February 5, 2006, from http://www.pwgsc.gc.ca/onlineconsultation/text/statistics-e.html#Schools
Steele, G. & Gabriel, R. (1993, March). The Evolution of Lisp. ACM SIGPLAN Notices, 28(3), 231-270.
Stephenson, N. (1999). In the Beginning was the Command Line. New York: Avon.
Stone, A. R. (1995). The War of Desire and Technology at the End of the Mechanical Age. Cambridge: MIT Press.
Stoyan, H. (1984). Early LISP History. Paper presented at the ACM Symposium on LISP and Functional Programming.
Stroustrup, B. (1998). A History of C++. In P. H. Salus (Ed.), Handbook of Programming Languages, Volume I: Object-Oriented Programming Languages (pp. 197-303). New York: MacMillan Technical Publishing.
Sutherland, I. (1963). Sketchpad: A Man-machine Graphical Communication System. Unpublished doctoral dissertation, Massachusetts Institute of Technology.
Swales, J. M. (1990). Genre Analysis: English in Academic and Research Settings. Cambridge: Cambridge University Press.
Tapscott, D. (2000). The Digital Divide. In The Jossey-Bass Reader on Technology and Learning. San Francisco: Jossey-Bass.
Tate, B. (2005). Beyond Java. Sebastopol: O'Reilly Associates.
Thomas, D. (n.d.). Travels with Smalltalk. Retrieved October 5, 2005, from http://www.mojowire.com/TravelsWithSmalltalk
Torres, C. A. (Ed.). (1998). Education, Power, and Personal Biography: Dialogues with Critical Educators. New York: Routledge.
Travers, M. (1988). Agar: An Animal Construction Kit. Unpublished M.S. thesis, Massachusetts Institute of Technology.
Traweek, S. (1988). Beamtimes and Lifetimes: The World of High Energy Physics. Cambridge: Harvard University Press.
Tuomi, I. (2000). Internet, Innovation, and Open Source: Actors in the Network. First Monday, 6(1).
Turkle, S. (1984). The Second Self: Computers and the Human Spirit. New York: Simon and Schuster.
Turkle, S. (1995). Paradoxical Reactions and Powerful Ideas: Educational Computing in a Department of Physics. In M. A. Shields (Ed.), Work and Technology in Higher Education: The Social Construction of Academic Computing (pp. 37-64). Hillsdale, NJ: Lawrence Erlbaum Associates.
Turkle, S. & Papert, S. (1991). Epistemological Pluralism and the Revaluation of the Concrete. In I. Harel & S. Papert (Eds.), Constructionism. Norwood, NJ: Ablex Publishing.
Tyler, S. A. (1986). Post-Modern Ethnography: From Document of the Occult to Occult Document. In J. Clifford & G. E. Marcus (Eds.), Writing Culture. Berkeley and Los Angeles: University of California Press.
Unsworth, J. (1995). Living Inside the (Operating) System: Community in Virtual Reality (Draft). Retrieved February 7, 2006, from http://www3.iath.virginia.edu/pmc/Virtual.Community.html
Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge: Harvard University Press.
Waldrop, M. M. (2001). The Dream Machine: J. C. R. Licklider and the Revolution that Made Computing Personal. New York: Viking.
Wardrip-Fruin, N. & Montfort, N. (Eds.). (2004). The New Media Reader. Cambridge: MIT Press.
Wall, L. (1999, August). Perl, the First Postmodern Programming Language. Paper presented to the Linux World Conference, San Jose, CA.
Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. New York: W. H. Freeman and Company.
White, H. (1973). Metahistory: The Historical Imagination in Nineteenth-Century Europe. Baltimore and London: The Johns Hopkins University Press.
White, F. (1999). Digital Diploma Mills: A Dissenting Voice. First Monday, 4(7).
Whitehead, A. N. (1960). Process and Reality: An Essay in Cosmology. New York: Harper.
WikiWikiWeb. (n.d.). Cunningham & Cunningham. Available at http://c2.com/cgi/wiki
Willinsky, J. (1998). Learning to Divide the World: Education at Empire's End. Minneapolis: University of Minnesota Press.
Winograd, T. & Flores, F. (1986). Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ: Ablex.
Wolfram, S. (2002). A New Kind of Science. Champaign, IL: Wolfram Media.
Woolley, D. R. (1994). PLATO: The Emergence of Online Community. Computer-Mediated Communication Magazine, 2(3). Retrieved March 8, 2005, from http://www.december.com/cmc/mag/1994/jul/
Yaeger, L. (1989). Vivarium History. Retrieved October 5, 2005, from http://homepage.mac.com/larryy/larryy/VivHist.html
Appendix A: UBC Research Ethics Certificate of Approval