Monday, April 16, 2012

Nick Bostrom, the Transhumanist Declaration, and Governance of Emerging Technologies

Engaging transhumanism is becoming a preoccupation of many academics in departments including law, religion, literature, and anthropology. Humanities departments are getting generous grants from places like the Templeton Foundation to deal constructively with contested claims over mankind's future. This seems part of a growing trend in academia toward considering the ethical, social, and legal implications of various emerging and converging technologies.

As yet, transhumanist topics haven't impacted American political party platforms. If the hype over tech development keeps building, however, these will clearly become defining issues of tomorrow's politics. In a recent conference talk at Arizona State University, James Hughes laid out polling figures he compiled as head of H+ mapping the transhumanist demographic onto the American political spectrum: in 2007, 47% of respondents to H+ questionnaires self-identified as left-leaning, up from 36% in 2004 and 39% in 2005. There are, of course, right-wing transhumanists, such as the libertarian Glenn Harlan Reynolds, who thinks leftist transhumanists will likely endorse totalitarianism. A spectrum of right-wing bioconservative groups bunches together to oppose or counter transhumanist trends, while liberal/progressive groups such as the IEET and the Global Bioethics Initiative wish to promote ethical responsiveness to incremental advances toward transhumanism. Responsible bioconservatism and ethical progressivist transhumanism often agree on many details, such as the need for oversight of the incremental advance of enabling technologies. American politics has not yet been forced to deal with transhumanism in a highly politicized fashion because the technologies are only beginning to emerge.

But what about the sites of scientific and technological production where incremental advances are in progress? Since the Human Genome Project began in the 1990s, the US government has written authorizations for research into the ethical, social, legal, and environmental implications of emerging technologies into its controversial R&D enterprises. The National Nanotechnology Initiative, for example, called for societal implications research, which resulted in a network of research centers pursuing this function, including the Center for Nanotechnology in Society at Arizona State University (CNS-ASU). Research institutions around the globe are funding programs of this type, from the Netherlands to Australia.

There are many ways to analyze efforts to influence the development of enabling technologies by directly engaging their sites of knowledge production. One useful framework is the stream metaphor: upstream, mid-stream, and downstream. Upstream engagement efforts focus on technologies in early conceptual or R&D stages that haven't yet diffused throughout society in the form of finished products and productive entrepreneurship. The benefit of engaging many segments of society in discussion of social implications and ethical research at this phase is that the specifics of the future haven't been locked into a single development trajectory. The need to seize this opportunity to shape the development of a technology is often explained in terms of the Collingridge Dilemma: "When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming."

Mid-stream modulation entails engaging scientists and engineers as they are researching and developing such emerging technologies. There are many programs underway in this category, including the SynBERC and STIR projects.

If the funding comes through, I will soon be involved in efforts at mid-stream modulation at a "transhumanist laboratory" in Tsukuba, Japan through the Socio-Technical Integration Research (STIR) project at Arizona State University. Specifically, I will be working at a molecular computation laboratory where efforts at integrating molecular computers with human neurons are underway.

Advocates of both upstream and mid-stream intervention in the development of emerging technologies justify this research in terms of ethical reflexivity, capacity-building, steering, and other such language.

What strikes me from reading the publications of avowed transhumanists, however, is that they too are concerned with responsible innovation, ethical leadership, technology assessment, and the like. Terms like "technological determinism" often combine in the popular imagination with scenarios like Kurzweil's Heaven of superintelligence and the end of suffering and Bill Joy's Hell of grey goo to produce a stereotype of the transhumanist as ethically reckless. The more I read, the further this stereotype drifts from the truth.

Consider Nick Bostrom's Transhumanist Declaration:
Research effort needs to be invested into understanding these prospects. We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented.
In light of Bostrom's declaration, it becomes clear that just about everyone wants to use the methods of upstream engagement and mid-stream modulation to do just about everything in contemporary society. Bioconservatives use these techniques to alert people to the dangers and lobby Congress for a moratorium. Transhumanists use the same techniques to encourage responsible progress toward ethically-acceptable transhuman outcomes. Academic researchers with little concern for transhumanist discourse use the techniques to make their careers as social scientists concerned with emerging technology. Policymakers clamor for these techniques to ensure economic competitiveness, prevent cultural backlash, and distribute risk.

The tools we use for thinking critically about transhumanism are boundary objects. In other words, upstream engagement and mid-stream modulation exercises can be utilized to simultaneously address the concerns of a massive number of conflicting perspectives on transhumanism. The very same laboratory engagement study at a "transhumanist laboratory" could convince bioconservatives that progress is being made toward a bioconservative agenda, progressive transhumanists that their agenda is on track, academic researchers that quality research is being conducted, policymakers that a broad capacity for making responsible decisions in the public interest is being built, and various publics that someone is looking out for their children's children. Even the scientists themselves can be convinced that they are thinking reflexively about the social implications of their research.

But the technological determinist perspectives loom. Can all the upstream and mid-stream intervention in the world prevent what's coming down the pipe?

Are we not collectively in a situation of knowing that we have a power in theory that we cannot really exercise in practice? Are we attempting to build capacities that cannot be developed sufficiently to actually accomplish the sort of steering that we recommend for the species? Are we fooling ourselves?

Such is the view of the technological Stoic. On this view, wisdom comes from intense inquiry into separating what is under your control from what is beyond your control. Successful inquiry leads to a peace of mind that ensures fulfillment, The Good Life. Failure in this quest is sure to produce suffering. Failure can take many forms, including the belief that what is not in fact under your control really is under your control. This belief causes the individual to pursue control anyway, exhausting attention and wasting time. Better to inquire deeply and clearly, discover the impossibility of control, and accept the difficult task of controlling what can actually be controlled.

Is this our situation in light of transhumanism?

For some, the answer is yes. For these folks, perhaps a libertarian view of minimal regulation and personal choice takes over. Others pursuing this view could advocate community cohesion, developing interpersonal relationships that will endure whatever comes to pass. Using the same logic, one could argue for worshiping a personal god who takes over responsibility for the development of technology by subtly controlling the humans who cannot control technology by their own lights.

In the absence of real capacity for steering technology, the question of whether or not anyone is in control becomes a boundary object in itself. That is, the question itself organizes all views, allowing a forum for competing responses. Academic and public engagement venues can be constructed for asking the question, answering the question in a thousand ways, developing solution options for coping with the uncertainty of mankind's role in its own Overcoming.

If the answer to the question of real capacity for social control of emerging technologies and transhumanism is ultimately No, social actors of all kinds can still proceed as if it were possible to shape outcomes and determine the character of whatever future transpires. If they do succeed in shaping outcomes, does this mean that they had control?

Perhaps control is a matter of degree and stylistic preference. It depends how you wish to talk about technology and human agency in the presence of the bigness of the earth and the diversity of the human condition.

In truth, I think the situation is characterized by such uncertainty that all questions and answers related to social control of emerging technology are open. In the words of Henry Miller, "It takes all kinds to make a world." We are doomed to struggle as always to shape our lives and the lives of others. There are incommensurable disagreements and no clear ways of adjudicating disputes except for those we or our ancestors might have devised. We can espouse a variety of ethical convictions and pursue these convictions to the hilt. The future of technology and its social impacts will be shaped by these efforts, but I feel the earth is much bigger than we want to admit. The great North-South inequality question is ever important, for example. Perhaps anticipations of Kurzweil's post-human future, where human suffering is comprehensively overcome, North and South, are more comforting than the realization of earth's bigness.

I'll conclude with words from a masterful presentation by Dan Sarewitz:

...If we were to imagine a better world where humans and humanists are better, it would be a world with more justice, more equality, more peace, more freedom, more tolerance and friendship, more beauty, more opportunity. Such conditions, and the social and political changes that can encourage them, are not internalizable in the technologies of human enhancement. Even less can they be designed to emerge from the aggregate efforts of enhanced individual traits in many humans. Transhumanism and the technological program of human enhancement turn out to be the mirror of, not the cure for, the modern human condition.
