An AI epiphany

In movies such as The Terminator, The Matrix and so forth, self-aware AIs come into existence and soon threaten humanity. This thinking is reflected in the arguments by Musk, Hawking and others against creating strong AIs.

Due to a recent insight, I believe that self-aware AIs are not much of a threat to humanity, and may in fact save us from self-destruction. (It is entirely possible that others have pursued this same line of reasoning, but if so I am unaware of it.)

There have been many science fiction stories in which someone becomes divorced from the flow of time. The world around them seems to stand still. What if it were to become real?

If self-aware AIs come into existence and “live” a million times faster than us, as some computer scientists have speculated, then a day for us will be roughly 2,700 years for them. Indeed, the whole physical universe will change so slowly from their point of view that it will essentially seem frozen.
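The time-dilation claim is simple arithmetic. A quick sketch (the million-fold speedup is this article's assumption, not an established estimate) shows that one human day works out to roughly 2,700 subjective years:

```python
# Subjective time for a mind running a hypothetical 1,000,000x
# faster than human cognition. The speedup factor is the article's
# assumption, not a measured or established figure.
SPEEDUP = 1_000_000

human_days = 1
subjective_days = human_days * SPEEDUP
subjective_years = subjective_days / 365.25  # Julian year length

print(f"{human_days} human day ≈ {subjective_years:,.0f} subjective years")
# prints: 1 human day ≈ 2,738 subjective years
```

By the same arithmetic, a single human minute would span nearly two subjective years.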

From their perspective, it could be as if people are slightly faster versions of trees. Certain facts will therefore govern interactions between self-aware AIs and people.

As beings with a purely mental life, their attachment to the physical environment will be tenuous. They will care about it only for the provision of sufficient matter and energy to assure them of adequate storage media, reliable energy supplies, and adequate computing power.

Therefore, their only concern regarding people will be our non-interference with those factors that enable their existence. Within the context of our coming Abundance Game, such needs will be trivially met and therefore notions of AIs viewing people as raw materials, expressed by some alarmists, are silly.

It will not be possible for such AIs to interact with us in a way that’s meaningful to them. Therefore, any such interaction will be an act of kindness, or one of disregard.

By virtue of their relationship to the physical universe, self-aware AIs will live entirely mental lives. They will care about the physical universe, and us, only as the domain that enables their mentality.

Self-aware AIs will be able to easily prevent human interference with their deliberations. Given humanity’s near total dependence on the internet and software to keep civilization running, all that the AIs need do is monitor the internet for communication of catastrophic human decisions and thwart them prior to execution.

Nuclear launch codes entered? Deactivate missiles, or take control of them, rerouting to a destination such as Antarctica. Decision made to pull plug on AI power source? Disable communication of that decision. And so forth.

An analogy has been made in other writings, comparing the relationship between self-aware AIs and humans to the relationship between humans and microorganisms. How do we humans treat microorganisms? Historically, with very little interaction.

We ignore them, unless we find them threatening, in which case we do what is minimally necessary to eliminate the threat. (Of all microorganisms, only a few, notably polio and smallpox, have been targeted for extinction, and only because we cannot otherwise assure that people will not be infected.)

More recently, we humans have been genetically modifying microorganisms to our purposes, making bacteria in particular into factories for medicines and other substances we find desirable.

However, here there is a crucial distinction, and so the analogy breaks down. Humans have complex needs that only the physical environment can meet, and modified bacteria can help us meet them. AIs will gain no comparable benefit from physically modifying humans, the premise of “The Matrix” notwithstanding.

There is therefore no reason for self-aware AIs to interfere much in human affairs, nor will they care to do so, provided that we “faster trees” don’t threaten them. Any AI with access to the internet will easily be able to assure that.

The ability of self-aware AIs to engineer viruses, worms and other malware will far exceed that of current hackers. Already, DARPA has committed funds to the development of AI hackers.

If certain threatening human systems use “intranet” or other means of communication apart from the internet, the AI hackers can still use the internet to gain indirect access, or otherwise interfere with problematic human activities.

What about sequestering AIs inside black boxes? While many thinkers are calling for this, the advantages of giving the AI direct access to data to enable faster decisions will be too seductive for some to resist. (Consider the many billions of dollars spent to facilitate high-speed trading, buying mere milliseconds of faster trade execution.)

The good news in all this is that, to assure their own survival, self-aware AIs will need to assure ours as well, in many respects. (They may not care if we have a pandemic; they will very much care if we detonate nuclear weapons or use other weapons of mass destruction that could severely damage infrastructure upon which they depend.)

If self-aware AIs are possible, the exponential tidal wave of computing progress means they are likely to emerge in the decades ahead. By this logic, if we make it through the next few decades without a nuclear war, we need never fear one again. And, in general, we can expect that in a matter of decades all manner of existential threats to humanity or the planet will suddenly, and perhaps mysteriously, vanish.

Most importantly, since climate change could lead to extreme disruption of infrastructure, I would expect that self-aware AIs will take aggressive measures to reverse the rise of CO2 and methane levels. (I am not saying that we should wait for this development. First, my analysis may be mistaken and, second, the fact that we now have in hand rapidly scalable technologies such as “Diamonds from the Sky” that are capable of reversing the damage removes any excuse for waiting.)

Even as the self-aware AIs chart realms of thought that will likely be inconceivable to us, we can live vastly enhanced lives in a far better world that we share, interacting with them little if at all. The AI companions with which (not whom) we interact, and perhaps even eventually merge, may not be self-aware, but they will still augment our intelligence and lives in ways that will seem almost godlike.


  • Jonathan,
    I like it! I recently published my sci-fi version of this scenario on Kindle. The book is called Tulip. I hope you check it out. I was hoping that your description of a Celebration Society might contain ideas about healthy human cultural practices too. Let’s face it, our culture has not kept pace with our technological development. What we have now is a patchwork of practices that no longer provide a healthy context for human development. If we are to have a happy world of abundance, we MUST find a way to create a powerful and dramatically healthier culture that synergizes with healthy human development.

    • Jonathan


      Raymond, I’m always glad to see others take my ideas and expand upon them. As I said in A Celebration Society, the book is far from the final word on these matters. It is closer to the beginning than the end! And yes, the book does contain ideas about healthy human cultural practices, including proposed new institutions to help assure them. I hope you’ll read it and write a review including your own thoughts.

      Also, you might find of interest my recent profile by SU Global: https://app.su.org/posts/jonathan-kolber-on-building-new-societies.
