Fulbright Project Statement:

       Bridging the Culture / Science Divide

Phoebe Sengers

Relations between the sciences and cultural studies in the United States are in a state of crisis.  The cultural studies of science -- also called cultural critique of science or science studies -- study how cultural metaphors inform scientific work and uncover deeply held but unstated assumptions that underlie it.  These insights can contribute great value to science's self-understanding [4].  However, productive exchanges between cultural critics and scientists interested in the roots of their work are hampered by the disciplinary divide between them [12].  This divide blocks cultural critics from a complete understanding of the process and experience of doing science, which can degrade the quality of their analyses and may lead them to misinterpret scientific practices.  At the same time, scientists have difficulty understanding the context and mindset of critiques of their work [26], making them unlikely to consider such critiques seriously or realize their value for their work, and potentially even leading them to dismiss all humanistic critiques of science as fundamentally misguided [10].

The unfortunate result of this situation is a growing polarization of the two sides.  In the so-called ``Science Wars'' [13], pockets of fascinating interdisciplinary exchange and intellectually illuminating debate are sadly overwhelmed by an overall lack of mutual understanding and an accompanying decline of goodwill.  While most participants on both sides of the divide are fundamentally reasonable, communication between them is impaired when both sides feel misunderstood and under attack.  This siege mentality not only undermines the possibility of productive cooperation; with unfortunate frequency, it escalates into cross-fire accusations of intellectual bankruptcy in the academic and popular press and nasty political battles over tenure.  These unpleasant incidents not only help no one but also obscure the fact that both the academic sciences and the humanities are facing crises of funding in an economy that values quick profit and immediate reward over long-term investment in knowledge.

The premise of my project is that things could be different.  I believe that, rather than being inherently antagonistic, science and humanistic studies of science-in-culture could benefit greatly from each other's strengths; in fact, such interdisciplinary forays have already begun [16, 18, 19, 20, 21, 22, 23].  My technique for integrating science and critique uses detailed cultural critique of specific scientific work to develop new science and technology, thereby aiding the goals of both science and the humanities.  With this technique, the insights of cultural studies into the foundations of science can be deepened by intimate knowledge of science; these insights, in turn, can be used to make improvements to science and technology that both critics and scientists can be enthusiastic about, without putting either discipline in the position of preaching to the other.  For the last seven years I have been working on methodologies that draw on the strengths of cultural theory to analyze and understand my technical field, Artificial Intelligence (AI); I then use these results to develop new technologies in AI as well as deeper theories of science in cultural theory.  The goal of my proposed work is to exchange strategies for negotiating the science / culture divide with work premised on the fundamentally different European split between the sciences and the humanities; this work will be accomplished by focusing on a concrete scientific project involving both sociocultural and technical components at an interdisciplinary institution, the Zentrum fuer Kunst und Medientechnologie.

Proposed Project: The Avatar Interface

My proposed project uses the science-studies technique of metaphorical analysis to analyze and find solutions for a technical problem in AI.  I will first give the technical background of the problem, then explain how metaphorical analysis can help solve it; the resulting system will unite the concerns of both technical researchers and cultural critics.

An important topic in current computational research is the creation of virtual environments, richly detailed simulations designed to give the user a feeling of being immersed in an alternative world.  These simulations may contain artificial creatures, or ``agents,'' who interact with the user.  In these systems, the user becomes a part of the environment by controlling an ``avatar,'' a computational agent which represents the user and acts on behalf of him or her in the world.  For example, in an educational simulation of a beaver dam, the user will learn about being a beaver by controlling a virtual beaver avatar; other users of the system will `see' the user as a beaver, and the user's view of the environment may be rendered as a beaver would see it.

In most existing virtual environments, the avatar is considered a direct embodiment of the user in the virtual world.  The user therefore directly controls the avatar, either by giving simple commands (``go north;'' ``pick up the box'') or, in more physically immersive environments, by using hardware that monitors the user's body movements and converts them into movements of the avatar's body.  However, as the complexity of virtual environments increases and, with it, the scope and complexity of possible avatar behavior, it becomes more difficult for the user to control the avatar with simple low-level commands.  In response, avatars have been built that allow the user to specify behavior at various levels -- from ``go north'' to ``find me an appropriate article'' to ``negotiate the release of hostages'' -- while the avatar uses its own intelligence to fill in the details [31, 32, 34].
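The contrast between direct control and goal-level direction can be sketched in a few lines of code.  This is an illustrative toy, not the design of any cited system; the class and method names are my own assumptions.

```python
# A toy avatar that accepts commands at two levels of abstraction:
# low-level steps specified by the user, and high-level goals for
# which the avatar fills in the details itself.

class Avatar:
    def __init__(self):
        self.position = (0, 0)
        self.log = []  # record of every low-level step taken

    # Low level: the user specifies the exact action ("go north").
    def go(self, direction):
        dx, dy = {"north": (0, 1), "south": (0, -1),
                  "east": (1, 0), "west": (-1, 0)}[direction]
        x, y = self.position
        self.position = (x + dx, y + dy)
        self.log.append(f"go {direction}")

    # High level: the user states a goal; the avatar plans the steps
    # with its own (here deliberately trivial) intelligence.
    def go_to(self, target):
        while self.position != target:
            x, y = self.position
            tx, ty = target
            if x < tx:
                self.go("east")
            elif x > tx:
                self.go("west")
            elif y < ty:
                self.go("north")
            else:
                self.go("south")

avatar = Avatar()
avatar.go("north")      # direct, low-level control: one explicit step
avatar.go_to((2, 3))    # goal-level control: the avatar chooses the steps
print(avatar.position)  # (2, 3)
```

The interface problem the proposal addresses lives precisely in the gap between these two levels: once the avatar plans its own steps, the user's intent and the avatar's behavior can diverge.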

As avatars become more independent, the idea that the avatar is simply a representation of the user becomes problematic.  Technical problems arise, since the avatar must engage in autonomous action while still accurately reflecting the desires of the user.  These problems are hard to solve when the metaphor of avatar-as-user keeps researchers themselves from fully recognizing the difficulties in avatar-user coordination [30].

Critics of science have problems with this metaphor as well.  The idea of avatar as simple extension of the user has worried several critics  [37, 38, 40], because, as J. MacGregor Wise points out [42], the invisible interface makes it difficult for both researchers and users to develop a critical understanding of the possibilities and constraints imposed by the interface.  For example, while promising the user full engagement, avatars are frequently only able to do a small part of what the user wants, and, for more complex avatars such as information-gathering programs on the Web, may confound the user by acting on idiosyncratic, unstated interpretations of the user's commands.

Both technical and critical considerations imply that the currently dominant metaphor of ``user $=$ avatar'' is not up to the task of describing or innovatively solving this interface problem.  The avatar interface project lays the groundwork for solutions to the user-avatar-system interface by (1) analyzing the presuppositions of current avatar interfaces, (2) exploring the space of alternative metaphors by which the agent-user relationship could be understood, (3) understanding the technical challenges and philosophical commitments involved in implementing systems based on these metaphors, and (4) building a concrete technical implementation of a novel avatar interface that incorporates results of this exploration.  The project is intended to advance the state of the art while developing critical understanding of avatar interfaces among both researchers and users. The proposed interfaces will be designed to have two properties: (1) mostly technical: provide users with an intuitive way to interact with a complex agent; (2) mostly cultural: allow users an intuitive understanding of the limitations and constraints the system places on them.

Specifically, my proposed project has the following components:

1. Analysis of Previous Work: I will review and critically analyze the state of the art in avatar interfaces in textual and graphical virtual environments and on the internet.  Here, by ``avatar interface'' I do not mean the hardware or software by which a person is physically connected to a computer, but the AI problem of finding conceptual ways to align an intelligent agent's behavior with a user's desires.  This study will lay the groundwork for improving current systems by addressing the following questions: How is the relationship between the user and the avatar understood?  How is the user's feel for the constraints and possibilities in the system shaped by the avatar interface?  How is the full engagement of the user in the system limited by the avatar interface?  What are the technical limitations of the interface approach?  What philosophical presuppositions are made in the construction of the interface?

2. Generation of Alternatives: Starting from the metaphors underlying current systems, I will explore possible alternative metaphors for the avatar interface.  Possible dimensions of this exploration include interfaces that are less invisible to the user, that do not have to be directly controlled by the user, or that subjectively manipulate presentation of the virtual environment (as explored in [33]).  In general, this study will provide a basis for evaluating the unstated assumptions of current interfaces by understanding what interfaces based on different assumptions would be like.

3. Detailed Technical Designs: Based on these new metaphors, I will sketch the design of several novel interfaces (on the order of 5).  The design sketches will include the philosophical commitments and implications of these systems, as well as their technical difficulties and proposed solutions to the newly introduced technical problems.

4. Concrete Implementation: I will anchor my cultural analysis of the state of the art by simultaneously building a prototype of an avatar interface based on a novel metaphor of avatar-user interaction.  The avatar in this interface is an independent being who is not directly commanded but indirectly influenced by the user.  An interesting metaphor from psychology that reflects this viewpoint is the ``Influencing Machine,'' a paranoid delusion, first described by Victor Tausk [44], that a machine is indirectly controlling one's thoughts and actions by projecting hallucinations, producing and removing thoughts, feelings, and physical sensations, and altering one's bodily constitution.  I will build an interface where the human user is an `influencing machine' for the avatar, who responds to the user's attempted manipulations according to the avatar's own personality and goals.

The interface to the agent is presented to the user as a menu of indirect controls appropriate to the current situation.  Previous work in indirect control has studied user manipulation of the agent's environment [35]; here, the user will be able to affect the avatar's internal state.  The initial set of indirect controls is based directly on the metaphor of the Influencing Machine: it allows the user to project hallucinations into the environment, to change the avatar's emotional levels, and to cause the avatar to have particular thoughts.  This set will subsequently be expanded based on the results of the analysis of other avatar interfaces.  The Influencing Machine interface is intended to fulfill the dual goals of the project by making clear that the avatar is not a direct reflection of the user, while still allowing the user a rich variety of interaction with the avatar and, through the avatar, with the virtual environment.  The interface will be evaluated by a qualitative study of users' styles of interaction with the various indirect controls.
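The division of labor the Influencing Machine metaphor implies -- the user perturbs the avatar's internal state, but the avatar acts on its own personality and goals -- can be sketched as follows.  The control names and the avatar's decision rule here are illustrative assumptions, not the proposal's actual design.

```python
# Sketch of an Influencing-Machine-style indirect interface: the user
# never commands the avatar directly, only nudges its internal state.

class InfluencedAvatar:
    """An avatar the user influences indirectly rather than commands."""

    def __init__(self, personality):
        self.personality = personality   # e.g. {"timidity": 0.9}
        self.emotions = {"fear": 0.0, "joy": 0.0}
        self.thoughts = []
        self.hallucinations = []

    # --- Indirect controls offered to the user as a menu ---
    def project_hallucination(self, percept):
        self.hallucinations.append(percept)

    def adjust_emotion(self, emotion, delta):
        level = self.emotions.get(emotion, 0.0) + delta
        self.emotions[emotion] = max(0.0, min(1.0, level))  # clamp to [0, 1]

    def insert_thought(self, thought):
        self.thoughts.append(thought)

    # --- The avatar acts according to its OWN personality and goals ---
    def act(self):
        # A timid avatar flees a hallucinated threat; a bold one inspects it.
        if self.hallucinations and self.emotions["fear"] > 0.5:
            if self.personality.get("timidity", 0.5) > 0.5:
                return "flee"
            return "investigate"
        if self.thoughts:
            return f"muse on '{self.thoughts[-1]}'"
        return "wander"

avatar = InfluencedAvatar(personality={"timidity": 0.9})
avatar.project_hallucination("a looming shadow")
avatar.adjust_emotion("fear", 0.7)
print(avatar.act())   # the user influenced; the avatar decided: "flee"
```

Note the design point the sketch makes concrete: identical manipulations produce different behavior for avatars with different personalities, so the user cannot mistake the avatar for a direct reflection of themselves.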

I believe that I can accomplish this work in 10 months because it builds on the intellectual and technical accomplishments of my thesis, in which I build agents in a virtual environment that combine many different behaviors while remaining coherent and understandable to a user.  In this work, I have created and applied design concepts that challenge and transform current paradigms in AI.  In particular, I focus on `socially situated AI': understanding the agent not, as in much previous work, as an autonomous being without reference to its sociocultural situation, but rather as a kind of communication between a builder who is creating it and an audience of users whom the builder is trying to reach.  The work proposed here is an extension of this work, intensifying its focus on the agent-in-social-context by examining more closely the relationship between user and agent.  It also builds on my thesis in a practical sense, since my interface prototype can be built on top of my implemented thesis system, a graphical virtual environment containing two agents with a variety of behaviors.

Proposed Location: Bildmedien Institut, Zentrum fuer Kunst und Medientechnologie, Karlsruhe, Germany

This work should be done at ZKM for a number of reasons.  On the most general level, I want to learn from the differences in disciplinary configurations in Europe vs. the United States.  Because European institutions have a different disciplinary history than American ones, and because the European sciences and humanities have often had different focuses from American traditions, it seems likely that disciplinary boundaries are drawn differently.  This implies that strategies for negotiating these boundaries have developed in different directions, opening the door for a meaningful exchange of such strategies between myself and European researchers.

In addition, there is currently a push by the European Union to develop technology that can integrate social and cultural factors.  i3, the ESPRIT project on integrated intelligent information systems, is a multi-institution project building information systems where technical, social, and human factors are integrated and developing ``new forms of interaction that will place the human as an active participant in rather than a passive recipient of information.''  The i3 subproject ``Inhabited Information Spaces,'' of which ZKM is a member, focuses directly on building virtual environments; the development of avatar interfaces for them that are practical to use and simultaneously give the user a feeling of the constraints of the system will be an important contribution to fulfilling the goal of having an informed user as full participant.

The ZKM itself is a center of media art and technology.  The goal of the Bildmedien Institut is to research and develop media technology, including virtual environments, while promoting the use of this technology in artwork.  As part of its mission, prominent media artists from around the world are invited to spend 2-3 year sabbaticals developing media technologies and artworks based on them in collaboration with technical staff.  While media research is not unique to ZKM, the Bildmedien Institut has integrated art and artists with the development of technology to an unusual degree.  The approach of interdisciplinary collaboration can inform my proposed work, and can in turn be enriched by my interdisciplinary experience.  The avatar interface technology integrates well with ZKM's work on i3 and virtual environments, giving me opportunities to collaborate.  In addition, my experience in transforming critical analysis into detailed technological designs could be a useful resource when artists are transforming their visions into technology.

ZKM itself is also well-situated for interactions with other institutions.  It is near and has links with the University of Karlsruhe, which has strong computer science and robotics programs.  I can interact with these programs' technical researchers and share the technical expertise I have gained on believable agents, action-selection, and agent architectures while a graduate student at Carnegie Mellon.  By being a part of the i3 project, ZKM shares the goal of virtual environments that integrate social and technical factors with a dozen or more other important European computer science research institutes, including the Gesellschaft fuer Mathematik und Datenverarbeitung, the Deutsches Forschungsinstitut fuer Kuenstliche Intelligenz, and the Vrije Universiteit Brussel.  Regular meetings between the groups involved encourage the interchange of ideas and strategies.  Being in Europe would also give me an opportunity to engage in closer collaboration with Pit Schultz and Geert Lovink, organizers of the Nettime movement for interdisciplinary debate about technology and new media.  While much European academic discussion is in English, my command of German and Dutch is sufficient to allow for the easy interchange of ideas in casual conversation as well as technical written exchanges.


General Science Studies

[1] Donna Haraway. Simians, Cyborgs, and Women: The Re-Invention of Nature. London: Free Association, 1990.

[2] Sandra Harding. Whose Science?  Whose Knowledge? : Thinking From Women's Lives. Ithaca, NY: Cornell University Press, 1991.

[3] N. Katherine Hayles.  Chaos Bound: Orderly Disorder in Contemporary Literature and Science. Ithaca, NY: Cornell University Press, 1990.

[4] Evelyn Fox Keller.  Reflections on Gender and Science.  New Haven: Yale University Press, 1985.

[5] Thomas Kuhn. The Structure of Scientific Revolutions, 2nd. ed.  Chicago: University of Chicago Press, 1970.

[6] Bruno Latour and Steve Woolgar. Laboratory Life: The Construction of Scientific Facts. Princeton, NJ: Princeton University Press, 1979.

[7] Steven Shapin and Simon Schaffer. Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life.  Princeton, NJ: Princeton University Press, 1985.

[8] Sharon Traweek. Beamtimes and Lifetimes: The World of High Energy Physicists. Cambridge, MA: Harvard University Press, 1988.

Science Wars

[9] Stanley Aronowitz.  ``Alan Sokal's `Transgression'.'' Dissent, Winter 1997, pp. 107-110.

[10] Paul R. Gross and Norman Levitt.  Higher Superstition: The Academic Left and Its Quarrels With Science.  Baltimore: Johns Hopkins University Press, 1994.

[11] Arkady Plotnitsky.  ```But It Is Above All Not True': Derrida, Relativity, and the `Science Wars'.'' Postmodern Culture, 7(2), 1997.

[12] C.P. Snow.  The Two Cultures and the Scientific Revolution.  London: Cambridge University Press, 1969.

[13] Social Text.  Special Issue on the Science Wars.  Ed. Andrew Ross.  14(1-2), April 1996.

[14] Alan Sokal.  ``A Physicist Experiments with Cultural Studies.''  Lingua Franca, May/June  1996.

Scientists Culturally Critiquing Science

[15] Philip E. Agre.  ``The Soul Gained and Lost: Artificial Intelligence as a Philosophical Project.'' Stanford Humanities Review 4(2):1-19, 1995.

[16] Martha L. Crouch.  ``Debating the Responsibilities of Plant Scientists in the Decade of the Environment.'' The Plant Cell, 2:275-277, April 1990.

[17] Richard C. Lewontin. Biology as Ideology: the Doctrine of DNA.  New York: Harper Collins, 1991.

Interdisciplinary Work in AI and Humanities/Art

[18] Philip E. Agre.  The Dynamic Structure of Everyday Life.  PhD Thesis, MIT Artificial Intelligence Laboratory, 1988.

[19] Joseph Bates. ``Virtual Reality, Art, and Entertainment.''  PRESENCE: Teleoperators and Virtual Environments, 1(1):133-138, 1992.

[20] Brenda Laurel.  Computers as Theatre.  Reading, MA: Addison-Wesley, 1991.

[21] Lucy Suchman.  Plans and Situated Actions: the Problems of Human-Machine Communication.  Cambridge, UK: Cambridge University Press, 1987.

[22] Terry Winograd and Fernando Flores. Understanding Computers and Cognition: A New Foundation for Design.  Reading, MA: Addison-Wesley, 1986.

[23] Francisco J. Varela, Evan Thompson, and Eleanor Rosch.  The Embodied Mind: Cognitive Science and Human Experience.  Cambridge, MA: MIT Press, 1991.

Cultural Studies of Computer Science and AI

[24] J. David Bolter.  Turing's Man: Western Culture in the Computer Age.  Chapel Hill: University of North Carolina Press, 1984.

[25] Hubert L. Dreyfus.  What Computers Can't Do: A Critique of Artificial Reason.  New York: Harper and Row, 1972.

[26] P. J. Hayes, K. M. Ford, and N. Agnew.  ``On Babies and Bathwater: A Cautionary Tale.''  AI Magazine 15(4): 15-26, 1994.

[27] Stefan Helmreich.  ``Artificial Intelligence, Artificial Life, and Alternatives to Computationalism and Objectivism.''  Stanford Humanities Review, 4(2), 1995.

[28] Nettime.  Netzkritik: Materialien zur Internet-Debatte.  Ed. Pit Schultz and Geert Lovink.  Berlin: Edition ID-Archiv, 1997.

[29] Simon Penny, ed. Critical Issues in Electronic Media.  Albany: SUNY Press, 1995.

Avatar Interfaces - Technical

[30] Meredith Bricken.  ``Virtual Worlds: No Interface to Design.''  Technical Report R-90-2.  University of Washington Human Interface Technology Laboratory, 1990.

[31] Bruce Blumberg and Tinsley A. Galyean.  ``Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments.''  In Proceedings of SIGGraph, 1995.

[32] Barbara Hayes-Roth and Robert van Gent.  ``Story-Making with Improvisational Puppets.'' Proceedings of the First International Conference on Autonomous Agents.  Ed. W. Lewis Johnson. New York: ACM Press, 1997.  1-7.

[33] Michael Mateas.  ``Computational Subjectivity in Virtual World Avatars.''  AAAI-97 Fall Symposium on Socially Intelligent Agents.  Ed. Kerstin Dautenhahn.  To appear, Fall 1997.

[34] Ken Perlin and Athomas Goldberg. ``Improv: A system for scripting interactive actors in virtual worlds.'' Computer Graphics, 29(3).

[35] Iris Tabak, Brian K. Smith, William A. Sandoval, and Brian J. Reiser.  ``Combining General and Domain-Specific Strategic Support for Biological Inquiry.'' Proceedings of the 3rd International Conference on Intelligent Tutoring Systems.  NY: Springer-Verlag, 1996.

Avatar Interfaces - Critical

[36] Chris Chesher.  ``Colonizing Virtual Reality: Construction of the Discourse of Virtual Reality, 1984-1992.''  Cultronix 1(1), 1994.

[37] Richard Doyle.  ``Mobius Bodies.'' Society for Literature and Science, 1995.

[38] Paul Edwards. The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press, forthcoming.

[39] N. Katherine Hayles.  ``The Materiality of Informatics.''  Configurations 1(1): 147-170, 1993.

[40] Jaron Lanier.  ``Agents of Alienation.'' Consider This....  Voyager Web Project. http://www.voyagerco.com/consider/agents/jaron.html.

[41] Simon Penny.  ``Virtual Reality as the End of the Enlightenment Project.''  Pub. simultaneously in Cultures on the Brink: The Ideologies of Technology, eds. Bender and Druckrey, Bay Press, 1994, and The Virtual Reality Casebook, eds. Anderson and Loeffler, Van Nostrand 1994.

[42] J. MacGregor Wise.  ``Intelligent Agency.''  Society for Literature and Science, 1996.

Influencing Machines

[43] Bruno Bettelheim.  The Empty Fortress: Infantile Autism and the Birth of the Self.  New York: The Free Press, 1967.

[44] Victor Tausk.  ``The Influencing Machine.'' Incorporations.  Ed. Jonathan Crary and Sanford Kwinter.  NY: Zone, 1992.  542-569.