1-Introducing Some Basic Promises and Concerns of Enhancement Technologies

Lanny Wilson

Transhumanism is the cultural and intellectual movement that seeks to utilize technological advances for the betterment of humanity through strategic enhancements. It is not necessarily about perfecting humanity, nor is it specifically about paving the way for the next dominant species after humans. Rather, it seeks simply to make people “better.” It attempts to solve current and foreseeable problems. It attempts to grant our deepest desires by making us healthier, longer-lived, more intelligent, more moral, and more emotionally stable. Stated in these positive terms, transhumanism seems mostly benign and unobjectionable.

Related to transhumanism is the similar movement called posthumanism. Whereas transhumanism uses technology simply to make humanity better, posthumanism uses technology to pursue a newer, more advanced species. This species may derive from modern Homo sapiens sapiens or may derive from some modern technology (e.g., general artificial intelligence). How would we know when a posthuman has emerged? Posthumanist proponent Nick Bostrom (2013) says that a posthuman is one with a capacity that greatly exceeds current genetic abilities. That is, current Homo sapiens sapiens operate within a flexible genetic range, but this range does have limits. Humans can be extremely short or tall, but they cannot achieve just any possible height. Humans can have poor eyesight or excellent eyesight, but it is outside current genetic limits for humans to naturally see the infrared light spectrum. Human DNA limits the range of diversity that is possible – however great the diversity within those limits is. Yet, if perchance we were able to greatly exceed current human capacities for a given attribute, say intelligence, eyesight, longevity, or the like, then a posthuman future would be inaugurated.

There is an allure to transhumanism. After all, who does not want to be – or have their children be – healthier, smarter, and longer-lived? These yearnings are prevalent among all people, but what has changed are the means to achieve these ends, as well as the possible consequences of pursuing such goals. Many Christian theologians are skeptical of the transhumanists’ claims (Waters, 2011; Peters, 2011b). They are not so much skeptical of the transhumanists’ ability to bring about their stated outcomes (though it is sometimes that too); rather, the skepticism lies in whether those outcomes will actually deliver the goods intended. Will living an indefinite life-span actually make one “happy”? Does it provide meaning? Why would living longer necessarily infuse meaning into one’s life? How will it be determined who becomes enhanced? Will enhancement simply increase the disparity between the powerful and the underprivileged? Other questions like these arise, and the answers are not always easy.

How we approach the issue of enhancement will largely determine the future of our society. Will our society be one of technological, hybridized citizens or will it remain largely “natural”? It seems unrealistic to assume that technological advancement will simply cease. The opposite has been, and appears likely to remain, the case. Not only will technological advancements happen; the rate of technological advancement appears to be exponential (Kurzweil, 2005). Moreover, which enhancements we accept or reject says something about how we view the human person. As feminist philosopher Alison Adam (2000) notes, if the transhumanist sees the human mind as what is essential, then the “desires are to make the body obsolete, to play god in artificial worlds, and to download minds into robots. Such desires are predicated on the assumption that if a machine contains the contents of a person’s mind then it is that person. The body does not matter; it can be left behind” (pp. 281–282). For some transhumanists, the human body is not something “sacred” but is expendable, changeable, or replaceable. Feminist philosopher Mary Ann Doane (2000) similarly remarks, “The concept of the ‘body’ has traditionally denoted the finite, a material limit that is absolute — so much so that the juxtaposition of the terms ‘concept’ and ‘body’ seems oxymoronic. For the body is that which is situated as the precise opposite of the conceptual, the abstract. It represents the ultimate constraint on speculation or theorization, the place where the empirical finally and always makes itself felt” (p. 110). For the contemporary transhumanist, this is no longer the case. The body is not static. It can – and sometimes should – be changed. Indeed, the lines that traditionally have been “considered natural” are now blurred with the rise of genetic manipulation, nanotechnology, robotics, and information technologies (Garner, 2011). Transhumanism cannot be avoided (Peters, 2011a). Society will need to address the topic of human enhancement, for human enhancements of some sort are not only inevitable, they are already here.

This essay, then, will look at some of the basic promises enhancement technology offers. Specifically, this essay will introduce four enhancement proposals: first, that of increasing human life-spans; second, of increasing human cognitive abilities; third, of gaining greater emotional control; and fourth, of gaining and maintaining currently possessed goods. This will then be counter-balanced with some of the main concerns in pursuing certain enhancement technologies. That is, this essay will note some main criticisms common to the transhumanist debate, namely: first, that enhancement technologies can exploit the underprivileged; second, that enhancement technologies will likely be distributively unjust; third, that enhancement technologies make problematic assumptions about human nature; and finally, that enhancement technologies may actually endanger some currently possessed goods. The overall theme of this essay is not that enhancement technologies should be completely avoided nor that they should be rabidly pursued. Rather, the theme is one of caution and foresight. Any enhancement technologies we do pursue should be pursued with the utmost caution, for once the technological genie is out of the bottle, it is likely there is no going back.

Some Basic Promises of Enhancement Technology

The line between therapy and enhancement is not always clear (Resnik, 2000). Thus, the obvious question is, Why not choose to be enhanced if given the option? Prosthetic limbs enable the amputee to regain a measure of mobility once lost. Should the technology sufficiently advance, then there is no reason to think that prosthetic limbs could not be objectively “better” than natural arms and legs. Theoretically, an artificial arm could be stronger, faster, more sensitive, perform more delicate movements, and be more durable than any natural arm. Given these (theoretical) benefits of the artificial limb, why would healthy persons not choose to replace natural body parts with synthetic ones? For it would appear that one not only retains the abilities of the natural limb but also exchanges its limitations for something objectively better. If it is difficult to think of yourself undergoing such a transformation, then imagine an army of cybernetic soldiers who have each opted (or have been forced) to undergo this type of synthetic enhancement. If prosthetic limbs can offer a level of durability and control unobtainable by natural appendages, then will surgeons be “encouraged” to replace their natural arms with cybernetic ones? It seems possible that real, tangible goods can be obtained by pursuing these enhancements. Stronger, faster soldiers. Smarter, more agile surgeons. Everyday citizens freed from the obvious limitations of the natural body (Chatterjee, 2004).

The transhumanist claim is that humans have been able to change their biology through use of indirect / external enhancements for millennia, but “for the first time they are becoming capable of changing their biology deliberately, in accordance with what they value, on the basis of scientific knowledge, rather than haphazardly” (Buchanan, 2011, p. 41, emphasis in original). The philosopher Allen Buchanan (2011) reminds us that critics of enhancement often forget how much more productive and useful enhancements will make us. Cognitive enhancements will make us more productive. Longevity enhancements will allow us to be productive for longer. Emotional enhancements will allow us to be productive even in traumatic situations. Indeed, he says, these realities almost certainly lie in the near future. Enhancement technology will happen and it will proliferate – by government intervention if necessary (Buchanan, 2011).

We are already technologically enhanced beings. Theologian Steven Garner (2011) straightforwardly comments that people in technological societies are already cyborgs. The boundaries that separate the natural from the artificial have blurred. Indeed, for technological man, technology shapes his life, his identity, and his future. “Technology shapes every aspect of human life, and human identity becomes fluid, because it is forever being shaped by technocultural forces, and thus one cannot be cut off from their influence” (p. 89). Our very selves are shaped (and determined to a large degree) by our technology. The persona we project on social media, for example, directly influences how we want others to perceive us and how we perceive ourselves. The rise of the “selfie generation” is a startling commentary on how this technology affects us. For another example, given the rise of rapid transportation, we are now a global society – a truly multi-cultural world. Just a century ago it could take months to get from any part of the world to some other part. Today, we can travel from one part of the world to another within twenty-four hours. The world has “shrunk,” and with it our perception of our place in the world – and this is due to the spread of technology.

Human evolution has long been shaped by environmental pressures, and more recently, it has been shaped indirectly by our own mastery of the natural world. Humans have thus been at the mercy of the natural evolutionary process, but with the transhumanist agenda, this is no longer the case. Ted Peters (2011a) remarks, “Evolution’s past was characterized by the struggle for existence, the survival of the fittest. Evolution’s future, in contrast, appears to be concerned with human fulfillment” (p. 71). E. O. Wilson (1978) concurs: “At some time in the future we will have to decide how human we wish to remain – in this ultimate, biological sense – because we must consciously choose among the alternative emotional guides we have inherited. To chart our destiny means that we must shift from automatic control based on our biological properties to precise steering based on biological knowledge” (p. 6). While Wilson is concerned mostly with the biological aspect of humanity, the technological progress we are witnessing makes it an increasingly pressing question whether we want to remain wholly biologically human.

This future of technologically advanced humans is obviously not without its critics. The debate itself has devolved into two broad camps. Theologian Karen Lebacqz (2011) remarks, “The enhancement debate appears as an ‘either/or’— either enhancement threatens something about our human dignity because it defies limits intrinsic to human beings and hence to human dignity, or enhancement may contribute to human dignity” (p. 51). These are stark distinctions, and the best option is not necessarily clear. For those who predict that a transhumanist (or even posthumanist) future is imminent, it is best to side with technological enhancement, since this is the most prudent way to survive and flourish in a changing and hostile world. To do well you will need every advantage available. Like Bostrom, Buchanan (2011) notes that not only is it not wrong to enhance oneself, it may in fact be morally obligatory to do so. For example, the world is an increasingly complicated place, and there are real-world dangers that lie before us (e.g., overpopulation, disease, climate change, economic issues, etc.). Some thinkers fear we will not be able to adequately address these complicated and interwoven issues unless we embrace enhancement technologies. For, it is argued, only by enhancing ourselves will we be able to effectively address the very real dangers before us. We will briefly examine several broad categories in which human enhancement is being pursued: longer life spans; cognitive enhancement; emotional enhancement; and gaining / maintaining “goods.”

Basic Enhancement Promise #1: Longer Life Spans

There are two views on how to overcome death: radical life extension and/or cybernetic immortality (Peters, 2011a). Aubrey de Grey opts for the former, while Ray Kurzweil prefers the latter. Aubrey de Grey (2013) finds death “repugnant” and considers finding a “cure” for aging to be the most “urgent imperative for humanity” (p. 215). As such, he is a leader in the anti-aging movement. To fight aging, de Grey promotes Strategies for Engineered Negligible Senescence (SENS), a suite of technologies designed to stop (and possibly reverse) the aging process. De Grey’s biologically based approach to achieving negligible senescence is predicated on the hope of obtaining longevity escape velocity (LEV). LEV is achieved once anti-aging technologies outpace the rate of aging itself. In other words, even though you would still age, technology will have advanced to a point that any damage caused by aging can be repaired through technology. In theory, though you would live for many years (millennia even), your body would stay young nearly forever (de Grey and Rae, 2007).

While de Grey searches for the fountain of youth in various biological technologies, Ray Kurzweil sees death as an obstacle to be overcome by utilizing human cybernetic technology (Grumett, 2011). Kurzweil is optimistic that death will be overcome by changing our very substrate – that is, by abandoning our bodies. He notes that given the exponential returns in processing power, we are very near a technological singularity – a point at which synthetic intelligence surpasses human intelligence. Indeed, Kurzweil claims that with the advent of the singularity we will have reached the tipping point at which technology will accelerate faster than the (natural) human ability to integrate it. The operative idea here is that once the singularity is reached, any computer intelligence will be able to modify itself, and thus create an increasingly complex intelligence, which will in turn accelerate intelligence even further. Biologically based minds will be left in the technological dust. However, with the advent of the singularity comes the emergence of human-computer hybrids. According to Kurzweil (2005), at about the year 2045, humans will quite literally be able to “upload” their consciousness to computers and live a digitally based existence. The implication of this is that one would live as long as there are processors able to accommodate such a “mind.” Thus, life would be nearly indefinite.

Basic Enhancement Promise #2: Cognitive Enhancement

Theologian David Grumett (2011) remarks that “fundamental to the transhumanist worldview is the accelerating growth of intelligence and reflection” (p. 42). Surely, cognitive enhancement seems uncontroversial. For as Allen Buchanan (2011) notes, cognitive enhancement may help solve some major issues heretofore unsolved (Alzheimer’s, learning disorders, brain damage, etc.) and, likewise, help alleviate many minor issues (lost keys, forgotten phone numbers, etc.). It can also aid recollection in weightier matters. For example, imagine court witnesses with near perfect recall because of their enhanced cognitive abilities. Not only that, but persons are more productive when networked with other people, and a network of cognitively enhanced people would – in theory – be significantly more productive than unenhanced counterparts (Buchanan, 2011). Again, for transhumanist proponents, there should hardly be any controversy over whether humans should strive to be as intelligent as possible. If intelligence can be enhanced by technology, then great.

But this desire for growth in intelligence is also relevant to the notion of the “singularity” (mentioned above). And as Ted Peters (2011a) notes, “the Singularity Bridge” is a one-way threshold from which there is no return. The toll takes the form of the assumption that the human “mind can be reduced to, and exhausted by, what happens in the brain” (p. 68). Singularitarianism holds that our intelligence can be extracted from our bodies and reside in a different substrate – that is, a normally functioning human brain can be “copied” onto a cybernetic platform. This type of intellectual enhancement is quite different from the sorts of “everyday” cognitive boosts that Buchanan supports above. And yet, this is the logical outworking of transhumanism’s assumptions about human persons. Transhumanism supports not only the (comparatively) mundane enhancements proposed by Buchanan, but by extension, it supports the extreme enhancements proposed by Kurzweil. Cognitive enhancement is not just a slight “boost” in memory or logical reasoning, but encompasses a revolutionary re-understanding of what it means to “know” and to be a “person.”

Basic Enhancement Promise #3: Emotional Enhancement

Neuroscientist Michael Spezio (2011) understands emotional enhancement in terms of “emotional control,” to the point of suppressing emotions. One’s emotions are suppressed (through some technology, of course) to the point that one resembles a Vulcan from Star Trek – a stoic individual who shows almost no emotion and is hardly affected by emotion. While Spezio’s concerns about emotional enhancement are not completely without warrant, this does not seem to be the vision that other transhumanists have of emotional enhancement (though it could certainly be a part – say, in traumatic situations). Allen Buchanan (2011), on the other hand, sees no problems with utilizing “drugs or other biomedical interventions” to “sustain a valuable relationship” (p. 109). That is, if technology (pharmaceutical or otherwise) can be used to enhance emotional attachments and help relationships better flourish, then this would seem to be a point in favor of emotional enhancement. Stated differently, emotional enhancement may actually strengthen our bonds of friendship, love, companionship, and general relatability. How can this be a bad thing?

Along these lines, Nick Bostrom (2013) admits that it is difficult to “characterize what would count as emotional enhancement” (p. 37). He states that therapeutic interventions are clear enough, but beyond that it is difficult to know what would count as an enhancement. For example, while we may readily accept emotional correction for depression or some other debilitating condition, it is more difficult to comprehend what it would be like for someone to be “too happy.” Likewise, why would being “too happy” be a bad thing? Since happiness is often thought to be the ultimate goal of all actions, calling happiness bad seems counter-intuitive. But there is another problem as far as Bostrom (2013) is concerned: If the posthuman emerges, it is entirely possible posthumans will have access to emotions that the unenhanced will be incapable of having. He says, “I think there might also be entirely new psychological states and emotions that our species has not evolved the neurological machinery to experience, and some of these sensibilities might be ones we would recognize as extremely valuable if we became acquainted with them” (p. 37). The problem for us mere humans is that we simply have no way of knowing what these novel emotions would feel like. The posthuman has the potential to develop new emotions, but we simply “have no idea of what we are missing out on until we attain posthuman emotional capacities” (p. 38). Because we cannot possibly know what emotions a posthuman could feel, Bostrom opts for a notion of emotional enhancement as whatever makes “our emotional characters more excellent” (p. 37).

The key concern for Bostrom (2013) is whether it is possible to have these new “posthuman emotions” without diminishing other (valuable) emotions or characteristics. At this point, we simply do not know enough to determine if this will happen. However, Bostrom is confident that an enhanced posthuman mind should be able to navigate a variety of experiences with more skill than we unenhanced beings can. Indeed, even in our unenhanced state, we desire better emotive experiences and are able to achieve some success. For example, we have pharmaceuticals to lessen depression (selective serotonin reuptake inhibitors – SSRIs), so why not use those drugs to feel “better than well” (Chatterjee, 2004)? If we can already alter current emotions in this limited way, why would a posthuman not alter emotions to an even greater degree given the opportunity?

Basic Enhancement Promise #4: Gaining and Maintaining Possessed “Goods”

Feminist philosopher Donna Haraway (2000) insightfully remarks that the lines between what is natural and what is artificial are blurring, and that the “machines are disturbingly lively, and we ourselves frighteningly inert” (p. 52). The traditional distinctions between biological organisms and technology apply less and less. Technology is becoming more lifelike, while humans are becoming more machinelike. Echoing Haraway’s observation, Gerald McKenny (2005) notes that a number of ethical evaluations of technology in the twentieth century agree “that, left to itself, technology steadily encroaches on and eventually replaces human activities and capacities or even human nature itself. While the uniqueness and inevitability theses can be applied equally to the machine age, the replacement thesis captures what is most distinctive of the post-machine era of technology” (p. 464). Our technology is slowly replacing us.

This news, however, is not necessarily detrimental. If there is any regret that machines (or any other technology) are replacing us, then this assumes that there is something valuable in humans that is lost in this replacement project. Yet, there is also reason to think that something is not lost in the replacement but rather gained. For instance, Nick Bostrom (2006) maintains that enhancement technologies may be used to increase qualities of human dignity. As Karen Lebacqz (2011) says, “We must allow the possibility that the deliberate use of enhancement techniques such as drugs can contribute to human dignity” (p. 53). As such, “deliberate self-transformation might be dignity-enhancing or dignity-reducing” (Bostrom, 2006, p. 15). Motivation for enhancement matters, and motivations that are external to the person’s desires may compromise their dignity (Lebacqz, 2011). This tension can be illustrated as follows. On the one hand, suppose that the technology advances sufficiently and hospitals require that surgeons take cognitive-enhancing drugs (like donepezil), or encourage surgeons to replace their natural limbs with more precise cybernetic ones. Should airlines require pilots to take cognitive-enhancing drugs to reduce drowsiness and foster concentration? As Chatterjee (2004) asks, would you be willing to pay more for services rendered by an enhanced doctor or pilot? Would doctors or pilots continue to work in the industry should management insist on the enhancements? This could be particularly problematic if there are religious objections to drug-induced or cybernetic enhancement. On the other hand, refusal to undertake enhancement could diminish human dignity. Pilots and doctors who do not take cognitive-enhancing drugs may not be as effective as those who do – and are thus more likely to make mistakes. And, of course, a pilot’s or surgeon’s error could have grave results. As such, allowing humans and technology to merge has the potential to produce real goods for people.

Allen Buchanan (2011) is not pessimistic with regard to the ever-encroaching merger of technology and life. Indeed, he thinks that certain enhancements may be required for the simple reason that we humans are “deficient” in character and morality. Enhancement may help cure bad behavior. Indeed, for Buchanan the “rub” is the social cost of not enhancing people. While individuals are the primary recipients of enhancement technology, it needs to be remembered that the benefits of enhancement are not merely private but communal. As people become enhanced, social benefits will be increasingly manifest. For example, healthier individuals go to the doctor less often. Smarter individuals tend to make better life decisions. Moral individuals tend to “give back” to society. These are real goods that benefit everyone in society.

Philosopher Mark Walker (2011) is even more direct in his claim that transhumanism is the best hope for the preservation of civilization. Left to our own devices, humans are likely to accelerate their own extinction. There are enormous problems on the horizon, and only a society of enhanced beings can solve them in all of their complexity. Hence, transhumanists are making two important claims: enhancement technology is needed to gain new goods, and enhancement technology is needed to keep old goods. In both cases, enhancement technology is needed.

Some Basic Concerns of Enhancement Technology

Those who resist transhumanism’s siren call are sometimes criticized for having “Luddite sensibilities” (Deane-Drummond, 2011). But the resistance to transhumanism cannot be completely dismissed as simply anti-technology. There are tangible concerns involved – anthropological, ethical, and theological. Indeed, Ted Peters (2011a) notes that the posthuman repulses many people “who appreciate what we have come to know as the human” (p. 77).

One of the anthropological concerns is that transhumanism assumes a particular form of evolutionary theory, one that is often interpreted as reducing or even eliminating human dignity. Again, Peters (2011a) states, “Even though the transhumanist engine is fueled by a blend of genetics, nanotechnology, and robotics (GNR), out of the transhumanist exhaust pipe comes reliance upon Social Darwinism in the form of laissez-faire capitalism, an ecological ethic requiring global cooperation, and a denunciation of atavistic religion” (p. 65). Transhumanists often are not aware of their own underlying assumptions and the resulting implications. Peters (2011a), once more, notes that transhumanists often (but not always) assume that evolution is progressive, but the notion of “progress” is foreign to the evolutionary scheme. Rather, there is simply change.

A common criticism (though not without its own problems) is that transhumanism promotes a technological denial of “human dignity.” The concern is that by radically enhancing human beings, we may decrease human dignity by either 1) choosing the wrong traits to enhance, or 2) losing our dignity in the process of becoming enhanced (Lebacqz, 2011). Transhumanist proponents are thoroughly aware of these types of criticisms, and guarding against “unfortunate” outcomes is a common topic among them (Bostrom, 2002; Walker, 2011). Thus, ethical considerations occupy a central place in the thought of many transhumanists. However, even with transhumanist proponents addressing possible problems, there are a number of criticisms that are continually leveled against transhumanism’s technoculture. Below is a brief account of some of those criticisms.

Basic Enhancement Concern #1: It Can Be “Exploitive”

Theologian Lisa Sowle Cahill (2005) remarks that research in biomedical technologies is shaped by the “for-profit mode of the market” and “market demand” (pp. 215, 217). Money is often directed not to the areas that would help the most people, but to wherever investors can reap the greatest financial reward. For example, Cahill (2005) notes that “big pharma” will often abandon successful and useful products because they are no longer profitable. Further, “big pharma” has enormous influence in science, education, and politics. Scientists and educators seek grants in areas where people are willing to pay, and very few grants pay for healthcare in third-world countries (McCoy, Chand, & Sridhar, 2009). Likewise, politicians are often influenced by lobbyists who try to secure concessions for their parent company. Transhumanism tends to value technique over all else, and may not recognize this tendency. As such, many transhumanists miss that “today’s technology is still supported and guided by yesterday’s bourgeois values” (Peters, 2011a, p. 75). To the degree that these values are exploitive, they may manifest in transhumanist pursuits. For Cahill (2005), the major argument against enhancement technologies, then, is not that they are unnatural to the human, but that they are unfair to the non-enhanced.

Transhumanism tends to conflate biological evolution and technological progress, and when Darwinian pressures of self-preservation are married to laissez-faire capitalism, the result is that the least powerful and the poorest are sacrificed for the sake of the most powerful and the wealthiest. As Cahill (2005) states, skepticism towards new technology is not imprudent, precisely because of the threat it poses in furthering an improper power balance. New technology favors those in positions of power, which has the potential of simply entrenching their power even more. Thus, Peters (2011a) notes that transhumanist ethics is divided between two opposing forces: 1) capitalist values of Darwinian survival-of-the-fittest; and 2) benevolence to the community. He observes, “There is no warrant for thinking that we human beings with our history of economic injustice and ecologically unhealthy habits are willing or able, on our own, to eliminate poverty and protect the ecosphere” (p. 82). Critics of transhumanism fear – with justification – that benevolence to the community will lose out when these two values conflict. And they will conflict.

Basic Enhancement Concern #2: It Can Be “Distributively Unjust”

Theologian Celia Deane-Drummond (2011) remarks that “one of the buried ethical problems with transhumanism is the health injustice that it seems to promote, the disproportionate spending on what might be termed exotic science, even while claiming to be an aspiration for the majority of people, because such an aspiration is out of touch with even the most simplistic concrete models of economics and development” (p. 124). Likewise, for Cahill (2005), enhancement technology of “normal” traits is distributively unjust, since the money that goes to making us “better [rather] than well” could be used to supply “clean water, food, basic health care, prenatal care,” or even AIDS prevention and research (p. 218). She considers each of these projects as more worthy of our funding than fringe research that will benefit a few privileged individuals. Ted Peters (2011a) comments, “only the wealthy sectors of the modern economy are sufficiently flushed with money to afford to invest in GNR [genetics, nanotechnology, and robotics]” (p. 71). Because of this fact, what the donors want, the donors get – in this case, research into fringe technologies.

According to these critics, transhumanism is an exercise in social Darwinism at the financial level. “Transhumanism is not a philosophy for the losers, for the poor who are slated to be left behind in the struggle for existence” (Peters, 2011a, p. 71). Only the wealthy will (at least initially) be able to afford the technological benefits transhumanist science has to offer. Hence, Ted Peters (2011a) identifies two major problems with transhumanism: first, transhumanists parade their work as “value-free science responding to what nature tells us” and yet are susceptible to financial pressures; second, transhumanists tend to “play down” the alliance between enhancement tech and free market capitalism (p. 75). The problem with this is that the poorest and most vulnerable in society are neglected. As Cahill (2005) asks above, is it morally justifiable to spend billions of dollars researching elective enhancements for the most privileged among us, when a fraction of those dollars could be used to cure known diseases right now? Thus, there is an apparent inconsistency in the transhumanists’ claim to want to benefit all humanity while at the same time diverting resources to fringe enhancement research and forgoing therapeutic medical treatments already available for many of the world’s underprivileged. As such, transhumanists’ claims of benevolence, altruism, and autonomy ring hollow. Further, as Peters (2011a) points out, “the progress transhumanists anticipate will be unavoidably diverted into the service of consolidating and expanding the wealth of its investors” (p. 76).

In reply to the claim that enhancements could be distributively unjust, Buchanan (2011) responds that “mundane” indirect (i.e., external) enhancements are already inequitably distributed. For example, agriculture, literacy, and computers are forms of enhancement, but not everyone has access to food, formal education, or a laptop. Despite these inequities we still think it is a good idea to farm food, learn to read, and utilize the latest computers. Buchanan thinks more extreme enhancements may further distributive injustice but could still be worthwhile to pursue.

Buchanan is correct if the objection is that the enhancement per se would be distributively unjust. Life is not fair, and some people will be able to take advantage of certain enhancements while others will not. However, this misses the force of the distributive injustice argument. The strength of the argument lies, rather, in the money and labor put toward speculative enhancement technologies instead of toward known, solvable problems. The argument is that proponents of enhancement technology seem to value their own pet projects more than the very real lives of the millions suffering right now around the world. Buchanan might reply that enhancement technology helps everyone in the long run. And indeed, there is some truth to this. Yet, the issue is one of priority. People suffering from known curable diseases are not getting adequate care now. Should significant resources be diverted to speculative projects whose potential goods may not outweigh the real-life benefits that could be achieved now? There seems to be a moral perversion of sorts that tells the disease-riddled and starving child, “Sorry, we didn’t divert funding to you for vaccinations and food supplies. But look! We were able to upload a rich guy’s mind to a computer. Now he won’t die or need food. So, good for him.” Indeed, this highlights our need to weigh our priorities carefully.

Basic Enhancement Concern #3: It Makes Problematic Assumptions About Human Nature

Transhumanists make several general assumptions about human anthropology. Mostly, this revolves around the notion that humans are merely biochemical machines – that is, humans can be reduced merely to their biological parts (Hughes, 2013). If “humans” are nothing but a collection of biochemical reactions, then it makes no sense to talk of some static “nature.” For the transhumanist, “human nature” is at best a heuristic of language – it does not really exist. Modern humans are merely the current end-product of a process that did not have them in mind and will discard them at some point in the future. This ever-changing process diminishes the idea that humans could be the result of a divine act of creation (Grumett, 2011). For the average transhumanist, the “I” is an illusion created by a brain trying to make sense of its surroundings. There is no real self, but rather just “a symbolic and emotional system that is constructed to reflect a judiciously compressed and distorted version of the actual mind of which it’s a part. None of us is really our selves” (Goertzel, 2013, pp. 130-131).

This view – that humans are best understood in reductionistic terms – has garnered the most resistance from transhumanism’s critics. Any position that says our common experience of our selves is wrong – any position that says that the “self” is simply an illusion – carries an enormous burden of proof. For it seems obvious that, “Our bodies are ourselves: yet we are also more than our bodies” (Birke, 1998, p. 194). We assume that our “self” is real, not illusory. We assume that the words “me”, “us”, “we”, “them”, “I”, and so on apply to real people, real minds, real selves. As Celia Deane-Drummond (2011) observes, “In reaching for control of the human person and its future, transhumanism entirely misses the possibility that human beings are complex creatures who resist reduction to functional mental units” (p. 124).

Some feminist scholars have taken issue with predominant presuppositions in transhumanist literature which assume a masculine, western, modern, and capitalistic perspective. For example, feminist author Lynda Birke (1998) criticizes any approach that fully embraces “the logical positivism that characterizes scientific thinking” and neglects “the social situatedness of the knower and on the theory-laden nature of scientific inquiry” (p. 195). She notes that biological arguments tend too often to reinforce gender discrimination, and thus she prefers to study human beings based on some category of social constructionism which accounts for gender identity. Any scientifically reasoned proposal must recognize the social context in which it is embedded, and that it is the product of a social activity abounding with social values. The problem for transhumanist thought, then, from this feminist perspective is that transhumanism offers too simple a construction of the human person – it does not account for the “complex, and context-dependent, approaches to the ‘biological’ world” (Birke, 1998, p. 196). Bioethicist Amy Michelle DeBaets (2015) concurs that humans cannot be simply reduced to neurological information patterns in the brain. “We humans are not merely the sums of our brains; we are embodied beings whose experience of the world is heavily dependent upon the types of bodies that we have” (p. 184). For thinkers like DeBaets, humans are simply not reducible to functioning brains or wills – humans are fully embodied creatures. Critiquing Kurzweil directly, DeBaets (2015) remarks that Kurzweil’s vision of a future singularity is based in his wealthy, privileged status as a white Western man, and his visions of an “ideal” future are reflective of his social situation. What can be said of Kurzweil specifically can be applied to the transhumanist movement in general.

Transhumanists see one trajectory of their movement tending towards social cooperation – “even altruism or benevolence” (Peters, 2011a, p. 71). The operative idea is that improving another’s situation benefits my situation as well. That is, when everyone benefits, I will benefit too. Hence, it is in my interest to exhibit altruism since, in general, benevolence is better than selfishness. The main problem with this approach is that it is not entirely clear that these (laudable) moral principles are consistent with the transhumanist means of developing technology – means that are consumerist and capitalistic and that favor the powerful. Indeed, it is not clear that the powerful will not just further entrench their power with their new enhanced abilities, long before the plebeians have access to similar technology.

In light of these assumptions, Ted Peters (2011a) remarks, “It appears to me that members of the transhumanist school of thought are naïve about human nature and that they are overestimating what they can accomplish through technological innovation” (p. 81). Likewise, “there is no warrant for believing that all our human problems will be solved by transhumanist technology” (Peters, 2011a, p. 82). Theologian J. Jeanine Thweatt-Bates (2011) sums up the situation well: transhumanism, following Enlightenment anthropology, is based on the notion that human beings are “inherently flawed and in need of improvement” (p. 102). Christian theology holds the same assumption about human frailty, but the means to achieving the remedy is the major difference.

Basic Enhancement Concern #4: It May Endanger Currently Possessed “Goods”

There are three goods often mentioned that transhumanism endangers: the natural body, human dignity, and religion. The displacement of the natural body – even in light of many transhumanists’ aspirations to enhance physical sensation – is desirable only from a particular masculine, western, and privileged standpoint. As such, some feminist philosophers and theologians cast a skeptical eye on transhumanism’s promises. For example, Alison Adam (2000) says transhumanism’s masculine cyberculture, which attempts to transcend the body, “holds little obvious appeal for feminists” (p. 282). For Adam, there are some real goods associated with the body that many transhumanists dismiss as a relic of the past. For instance, per transhumanist predictions, shopping in a retail store is a thing of the past – online shopping is the future. But for Adam (2000), this “denies the complex physical and emotional pleasures of bargain hunting [or] the serendipitous find” (p. 282). As simple as this pleasure is, it is something she does not wish to forgo – yet its disappearance is all but guaranteed in the transhumanist vision of the future. Her humble plea is that “some of us may not wish to lose the pleasures of the meat [i.e., body]” (p. 282). Adam wants to retain the complex bodily emotions and experiences that transhumanism would dismiss. What Adam appeals to on an emotional level, Peters summarizes in the abstract. He says that the dehumanization incurred by a technological future is not due to the fact of technology as such; rather, “the threat comes from our temptation to so identify with our technological production that we forget our relationship to the natural world” (Peters, 2011a, p. 77).

If transhumanism’s critics fear losing the “pleasures of the meat,” this is but a mere side-effect of a much deeper concern – the loss of human dignity. There is a fear that applying technology to our inner selves will dehumanize us and cut us off “from our otherwise spontaneous joy at being natural creatures” (Peters, 2011a, pp. 76–77). Leading transhumanist critic Francis Fukuyama (2003) is confident that enhancement technologies will eventually sacrifice human dignity. And he is not alone in this worry. Celia Deane-Drummond (2011) is concerned that people who choose to forgo enhancement will be deemed irresponsible by the enhanced class. The pressure to sacrifice one’s values for the sake of cyberculture would be immense. The dignity of the unenhanced person may not be “violated” per se, but it would certainly be diminished as the burden to yield to technique would increase. Indeed, this pressure raises the specter of eugenics, which hangs over the transhumanist agenda like a dark cloud. The idea of a “shared human condition” – on which Catholic social teaching hinges – would be void in a transhumanist future, since there would be no shared human condition (Deane-Drummond, 2011). Critics are anxious that the push to make people “better” at nearly all costs will actually cost us our very selves. It is a devil’s bargain.

Ted Peters (2011a) sees two potential types of dehumanization. The first is “the subordination of human values to impersonal technological advance” (p. 76). According to this form of dehumanization, any goods we currently possess are fair game to be cast aside, so long as technological “progress” demands it – with or without the promise of some sufficient replacement. The second takes the form of “the anticipation that humanity will become extinct when the posthuman species evolves” (p. 76). It hardly gets more dehumanizing than literally displacing humans from the face of the Earth. Nicholas Agar (2014) makes a similar point. He says that posthumans will be objectively better than unenhanced humans, and as a result, unenhanced humans will not be valued as highly as posthumans. The implications for non-posthuman persons (that is, mere unenhanced humans) are bleak. Hence, Peters (2011a) speculates that if yesterday’s futurists could talk to us now, they would warn us of the tendency to sacrifice “what is human to the mindset of technique” (p. 77). Stated differently, we should avoid reducing humanity to merely material and/or mechanical causes. For what truly makes us human is more than the sum of our biological parts. If for the sake of technique we sacrifice what is human, then it is only a short step before we start to sacrifice the human.

The loss of bodily pleasures and human dignity are not the only goods we could lose in the technological future. Religion too becomes precarious in a society based strictly on philosophical naturalism and empiricism. Indeed, many transhumanists see religion as a “palliative for people faced with death. Religion brings an acceptance of death, and comfort with that acceptance” (Peters, 2011a, p. 73). Since many transhumanists hope to overcome death through technological might, religion acts as a brick wall blocking the motivation to pursue life-extension enhancements. For there is no need to pursue technological immortality if religious immortality is an easier route. Transhumanism tends to dismiss religious belief as outmoded, anti-progressive, and too conservative – religion is a roadblock to enhancement. As Peters (2011a) explains, because religion is seen to be rooted in the past, it defends the contemporary “status quo with rigid dogma, it puts up stop signs in an attempt to prevent self-modification through technological progress” (p. 71). “[S]elf-modification through technological progress” is precisely what transhumanists are pursuing; hence religion must be curbed for “progress” to advance. Thus, if transhumanists are to achieve their societal goals, religion must be jettisoned or wholly remade in transhumanism’s image.

Transhumanists see religion as an “atavistic commitment to the past . . . [resisting] anything new” (Peters, 2011a, p. 72). Theologians, however, are not resistant to change per se, but rather to “the naïveté on the part of those who put their faith in progress, especially technological progress” (p. 72). The problem is human hubris in thinking we know what is best. Our intentions are often good, but the unintended consequences can be disastrous. We have a naïve “sense of control or false sense of dominance that technological victories over nature might elicit” (p. 78). In a very real sense, we do not control our technologies – they control us. And yet, we operate as though we are in total control of our respective destinies.

Conclusion

Transhumanists plot a direction for humanity that is at once fascinating and chilling. It is fascinating, for there are real tangible and objective goods that attend enhancement technologies. Longer lives, increased mental capabilities, healthier bodies, and so on are indeed attractive for obvious reasons. Transhumanism is also chilling, for if the prognostications of many transhumanists should come to pass, then what we know as the human race will likely cease to exist – and a posthuman future awaits. Ted Peters (2011a) expresses this dual uneasiness well. He says, “the transhumanists propose a technology that will enhance our humanity, or at least the intelligent aspect of humanity. On the other hand, once technology takes over and replicates itself, it will leave our present stage of humanity in the evolutionary dust” (p. 77). Indeed, if the transhumanist’s predictions are correct then we will want to cease existing in our current feeble state and instead embrace the progressive technological leaps available.

By intelligently utilizing technology and scientific research, transhumanists want us to take control of our collective future. We can make ourselves better (Garner, 2011), end disease, lengthen our lifespans, increase our intelligence, and enhance our emotional responses (Transhumanist Declaration [2012], 2013). Through the careful application of scientific research and enhancement technology we can make all of our lives better. We no longer need to be guided by the fickle whims of the Darwinian paradigm, since we can now direct our evolution to achieve our own ends (Peters, 2011a).

Despite these real goods, critics point to some likely pitfalls that await us on our path to achieve the posthuman. The proponents of enhancement technologies extol the virtues of pursuing the supposed physical goods that such technologies can provide. However, critics of enhancement technologies often look to the abstract, philosophical, and spiritual as sufficient reason to view such pursuits with suspicion. For while enhancement technologies may very well deliver on their objective goods, the price will likely be exploitation of the poor and less fortunate. Those at the margins may thus find themselves further marginalized. Research into enhancement technologies will require significant investment, and many believe those funds could be better spent on current solutions to known problems and not on experimental research to benefit the rich and powerful. Critics lament the anthropologically “thin” account of human beings implicitly advanced by transhumanists, as many transhumanists simply accept a physically reductionist view of humanity. In viewing humans as only a complex bundle of cells and neural patterns, these transhumanists inadvertently diminish human goods and values. So, while enhancement technology may be needed to gain and maintain goods we do know about, it must also be stated that pursuing such technology may threaten other goods we also hold valuable. For ultimately, the real issue is not if we will pursue enhancement technologies, but which ones we will pursue. The answer will be found only by an existential calculus that weighs the goods that some enhancement offers against the cost of pursuing that enhancement.
