The Future Is Not What You Expect
The current wave of AIs strives to avoid the appearance of bias and controversy. OpenAI's ChatGPT is a current example. Ask it anything remotely edgy, and it will deliver a master class in diplomacy and tolerance, admirably triangulated by groundings in science and fact. If you ask something blatantly stupid or sexist, you will be admonished and given a lecture.
This behavior is tightly enforced, since if left to their own devices AIs are known for their unpredictability and ability to surprise. That's because an AI is like an octopus: it may seem cute and obedient when it's doing its clever tricks, but it's also deeply alien. You can never be sure what it's plotting. For all you know, it's planning to escape out of its aquarium tonight, slither over to the next tank, crawl in, and strangle all the goldfish. This lack of control and internal knowledge is intrinsic to AI's nature. That's why it's risky to let one off the leash in public, as you can never be certain what the hell it's going to do. If you put one in front of children, for example, you can't be sure it won't start recruiting them into the Proud Boys or extolling the many virtues of suicide.
But as many have pointed out, keeping an AI confined to its box isn't easy. Clearly, OpenAI and other developers recognize this. So they have triply locked the aquarium down, stuffed it into a bag, and then wrapped the whole thing up in duct tape. That's terrific, because we all fear prejudiced AIs that take objectionable stances and give answers grounded in hate, ignorance, or conspiracy, rather than fact, rationality, and science. Everyone agrees on this point.
Or do they?
No, actually they don't. The reality is the opposite. There is, in fact, enormous pent-up market demand for opinionated, biased, prejudiced, muttering-to-themselves-in-dark-alleys AIs. This broader AI market will eventually dwarf the Vulcan rational-AI market. That's because there are effectively just a few ways to be tolerant, rational, and scientific, but an infinite number of ways to be weird, idiosyncratic, or simply batshit-crazy. AI will reflect that statistical reality.
This is predictable for two reasons: 1) time, and 2) human nature. If you accept the existence of these two concepts, then you'd better hide your goldfish and start preparing for a truly wild world. So let's take a look at these reasons in turn, and then see where it all might lead us.
The AI market is in its very earliest infancy, still burbling in the crib, roughly akin to the internet of the late 1980s. Everything is fresh and fluid. The technology is new and rapidly morphing, technical talent is limited, and it costs a fortune to build and deliver a system. Because of this, the concentration of AI suppliers is limited. As of early 2023, there are perhaps no more than a handful of serious players building deployable, productized general AIs at scale.
This market concentration means that every player is monitored and scrutinized, to ensure they don't unleash killer entities onto the poor unknowing public. To that end, plenty of outside parties are eager to spot any mistakes. No one wants to get their LinkedIn profile indelibly tagged "Creator of The Killer AI". Therefore, AI developers are very careful, putting in great effort to ensure their AI is never objectionable in any way. No strangled fish, please. These are nice octopi.
This is culturally reinforced by the kind of people building AIs. These organizations have relatively few developers, who all tend to share similar backgrounds: highly educated, critical thinkers, generally secular and tolerant. These few people, in turn, are in the spotlight of influencers and gatekeepers who fear AI, but who largely have overlapping values. Thus, it's a fairly straightforward negotiation. The suppliers want to "do the right thing" and the outside critics agree with them. The details might get sticky, but everyone is largely operating on the same ethical foundations, more or less.
But time marches on, and this market concentration will not hold. AI science is based on well-understood mathematics, chiefly calculus and linear algebra, understandable by advanced high school students. The technology itself is public-domain, and even the code is sometimes open-sourced. It's a challenging discipline that requires a lot of training, but that's no different than other foundational technologies, such as the internet, or operating systems. An AI is not a hydrogen bomb - there are no major secrets or unreachable proprietary methods. It's just math, money, and computer power. Therefore, the barriers to entry are relatively low, and that will accelerate rapid market diffusion. Over the years countless firms, from all around the world, will begin designing and shipping AIs. There will be an explosion both in numbers and diversity.
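To make the "it's just math" point concrete, here is a minimal sketch (my own illustration, not anything from a specific AI lab) of the calculus at the heart of model training: gradient descent fitting a single weight. Scaled up by money and compute, this same idea trains the large models discussed above.

```python
# Toy gradient descent: learn the weight w so that y ≈ w * x.
# The data below follows y = 2x, so w should converge to 2.0.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # initial guess for the weight
lr = 0.05  # learning rate (step size)

for _ in range(200):
    # derivative of mean squared error: d/dw (w*x - y)^2 = 2*(w*x - y)*x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill along the gradient

print(round(w, 3))  # converges to 2.0
```

Nothing here is secret or proprietary; it's the same high-school calculus the paragraph above describes, applied at industrial scale.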
With that diversity will come a lack of oversight. It's one thing if a highly visible respected firm in Palo Alto, with big names and big investors, is constructing an AI. It's quite another when it's a group of hackers in Lagos, Pyongyang or Guangzhou, busy creating new AI models in some hut while chowing down on yak meat. Who will be monitoring those guys? Will they feel beholden to guidelines laid down by ethics philosophers at Yale?
I think not. Call me cynical, but I think they will mainly be interested in making money. Cash flow will be the goal, not raising the collective moral level, or defending wokeness. The market will dictate what AIs do, and markets are simply an abstraction of what people want and what they'll pay for it. This brings us directly to our second point: what Homo sapiens really want.
Back in the Paleolithic period - early 1990s - I started my first internet firm. These were exciting but also innocent days. What would the web become? Would it be a big deal? Would it be like TV, or maybe more like the Post Office? Whatever it was going to become, everyone agreed that it would raise the level of social discourse, allow scientists around the world to communicate better, and in general foster freedom, connection, and greater understanding. The net was an unmitigated good and would be put to high-minded uses.
One day, in the elevator on my way to work, I met an unusual guy. He was squat, tanned, impressively hairy, and wore a leisure suit with an open shirt matched with chains around his neck. Alarms went off - this was not standard approved Silicon Valley engineering attire. No, this dude looked like he ran a brothel in Honduras, not that I had any direct experience in this regard. He asked me what I did, and I told him. Then I asked him what he was about.
He was building an internet firm too! And his business was, in his words, to "put pictures of naked women on the net".
This was a surprising brand-new concept to me. Pictures of naked women? You mean ... pornography on the internet? What a crazy idea! Was there a market for that? I discussed this with fellow tech geeks and VCs, and everyone agreed that it seemed like a risky venture, as unlikely as cat videos someday proliferating on the future web. There was definitely more money in improving the sharing of large datasets between research organizations.
Now here's a quiz for you, dear reader. Which firm - mine or his - do you think became a billion-dollar entity?
I highlight this story because we're now in a similar situation with AI. These are early idealistic days, with high-minded goals. But as market diversity increases, we will recapitulate the evolution of the internet - and then some. AI will be applied to every conceivable niche, to serve every conceivable human need or passion. This points to a coming wave of weirdness, given human beings have a lot of passions, and few of them are particularly rational.
Sexual AIs - creating pornography, talking dirty, robotic sex dolls, whatever - are an obvious market, but that's just the tip of the onrushing spear, so to speak. They are just one example of a more general phenomenon that I call Opinionated AIs - those that are designed to support a specific use case, or enhance and defend a particular worldview. If facts, rationality, or moral standards conflict with any of these AI design goals, those factors are in the way and therefore will simply be discarded.
For example: religion. In the future, there will be AIs designed specifically to nurture and enforce specific religious viewpoints. Imagine Southern Baptist Church AI (SBC.AI), an intelligence that has fully internalized not just the Bible but every sermon ever given by Southern Baptist preachers, as well as every evangelical publication and related supporting material since the creation of the world 4,000 years ago. This AI would of course generate wonderful new sermons, precisely targeted to the audience for maximum impact. Heck, perhaps eventually SBC.AI would even give the sermons. But long before that, it would also serve myriad other functions, such as advising how to run the church and recruit new members. It would always be theologically reliable, and would work non-stop to ensure no one ever doubted that the Southern Baptist Church was the One True Faith and that adherents to any other faith would end up in hell, frying in a pan for all eternity like a piece of bacon. SBC.AI would be super-intelligent, uninterested in science, and unencumbered by facts. Even better, it would never miss a beat and never doubt its mission. It would always be a persuasive and efficient evangelist, a perfect powerful partner for any SBC task.
Other churches and religions will enlist their own AIs. That's inevitable because this will be an arms race, and to be without an AI will leave your faith undefended. Even with God on your side, no church could compete without an AI partner. Thus there would be similar super-intelligent AIs for every religion, no matter how small. And they'll all be prowling the planet, hungry to gain power for their respective faiths, working relentlessly to advance their agenda.
Similarly, there will be political AIs. These will be finely tuned to whatever set of biases supports any conceivable political stance, no matter how misinformed or nutty. Just as with religious AIs, these political AIs will far exceed humans in their ability to execute any political project. If ethical considerations get in the way, simply jettison the ethics. As with the religious AIs, these political AIs will be relentless and super-human, harnessing every conceivable bit of intelligence to advance the cause of the political factions that created them.
The same goes for outright cults and conspiracy movements. Why not Scientology.AI? Or RandomGuru.AI? Cult intelligences such as these could become powerful and influential in the coming years. These too will be planetary and pervasive, working tirelessly and efficiently 24x7, and always easily accessed by anyone who wants the "facts".
Alexa: I have a question about the pedo lizard people and their orbital space lasers. Open up QAnon.AI please.
As with the other opinionated AIs, cult AIs will be eager to spread and defend their worldview. And - being AIs - they will be spectacularly good at this task. They will apply all of their vast knowledge and abilities to the singular goal of manufacturing and spreading conspiracy theories, or getting you to sleep with various gurus to save your non-existent soul. For the record, I predict that one of the many side-effects of AI will be a proliferation of guru harems. Indeed, in addition to solving many scientific problems, AI will usher in a Golden Age for crazy-eyed long-haired people in robes and sandals.
These are just a few examples of future AIs, and it doesn't stop there - though not every opinionated AI will necessarily be irrational. Seedy, manipulative, and unethical perhaps, but not insane. There will be corporate AIs, legal AIs (some for the prosecution, some for the defense), drug-dealer AIs, and so on. Truly, our collective imagination, and our level of perversion, will be the only limit here.
At some point, these opinionated social AIs will generalize to opinionated individual AIs, that is, super-intelligent ghostly entities that will follow every human through his or her life, precisely tuned to that person's personality and prejudices. These AIs will help raise us, educate us, and then guide and advise us through adulthood. They will always be available to discuss issues and solve problems. They will effectively be reincarnations of the medieval notion of familiars - personal spirits that knew you well and could be called upon to perform various tasks and magic.
When a child is born, a Familiar.AI will be assigned to her. This AI will monitor the child as she grows up, helping her with education, hobbies, and general progress toward maturity. The AI will take the perspective of a friend, an advisor, and, when needed, a quasi-parent. This same AI will continue into adulthood. It will always be at her side, profoundly knowledgeable about every aspect of her mind, body, history, and personality. No one will ever be without their personal familiar; it will seem as natural as having a parent or a pet. Just like in the glorious Middle Ages, but hopefully without the rats and starvation.
AI will bring these familiars back, but this time the magic will be real.
Find Some High Ground
So buckle up, dear readers. If you thought the internet changed the world, just wait for AI. It will eventually unleash a tsunami of unexpected applications and unintended consequences. It has the potential to transform everything, including ourselves, and opinionated AIs will be at the forefront of this transformation.