AI and Us: Tracing the Fine Line between Innovation and Regulation

From potential existential risks to the ethical and legal complexities surrounding AI: can we establish a trustworthy AI framework without stifling innovation?

A sophisticated AI system is being developed in a high-tech lab - Powered by MidJourney AI

There are those who claim that AI poses an existential risk. Experts and pioneers in the field of technology urge us to regulate AI. In their view, AI is more dangerous to our existence than a nuclear bomb.

Before we dive into this notion, let's take a moment to consider their position. In many fields AI could pose significant risks, and there are matters that need addressing before we can move forward.

Imagine you visit a physician, seeking treatment for some issue you are experiencing. If the doctor were to make a mistake and commit malpractice, who carries the liability? Is it the AI? Is it the company that trained the AI? Is it the physician?

Whether we like it or not, insurance dictates how business is run. Without their risks covered, businesses would be paralyzed. If we can't settle who carries liability, how can we even begin to discuss financial settlements? How can the legal system decide who is right and who is wrong? Would the courts too have to consult the AI? And what if the AI once again makes a mistake? Is the judge now at fault?

We created AI to tackle problems that are too great for a human to tackle, problems too complex to break down. In our interconnected world, the study of complex systems has become a science in its own right, spanning fields such as pattern formation, swarm intelligence, and game theory.

Let's consider climate-altering technology for a second. If we were to find out that this technology has downstream effects, how would we ever determine the source, if the harm built up through compounding? What if the results only became visible 30 years later? How would we know who caused it? And if we did, what difference would it still make?

AI could allow us to both predict the harm and determine its source. This is a problem AI could solve but a human could not. Worse, a human would not even be able to assess whether the AI's answer was correct or false.

Even with game theory, a model offers us a glimpse into only a few pieces of the puzzle. It doesn't provide concrete answers, but probabilities that tell us how likely an answer is to be correct. The more sophisticated the model, the more accurate the results.

Some institutes have run scenario exercises over the years. Some of these scenarios proved so accurate that people believed they were a conspiracy. But how do we know that engaging in these thought experiments hasn't led policy makers to make subconscious choices? Choices that, compounded over time and across the globe, may have created the very thing they sought to escape.

What if, in a future scenario, the AI were wrong at first, but having created a fictional scenario that humans then read, the humans made it real? Now who bears the liability? The one who trained the AI? The institute that set up the scenario and facilitated the study? The policy makers?

These are subtle and important matters that we have yet to solve.

Further, we have seen a rise of deepfakes online, powered by various generative AI tools. Some are so convincing that major fact-checking sites had to issue statements calling them out. The problem with these deepfakes is that many people watch them and think: "Ah, I can spot a deepfake easily." And you may; some are easier to spot than others. But it's not a game of wits. Whether you are right or wrong, the second you watched the deepfake, the information went into your head and influenced your decision making. In a few weeks or months, you'll remember the deepfake through rose-tinted glasses: in your memory, its graphics will have been updated to whatever is cutting edge now.

We observe this rose-tinted phenomenon often in the gaming industry. Gamers praise 10-20 year old games because they only remember how they felt while playing. All the usability problems, the bugs, the dated graphics? Deleted from memory.

As it stands, AI is not sentient, conscious, or even aware of its own power. But we can fake it by chaining together several AI instances: one instance's response becomes the prompt for another, with each instance assuming a different role. We would have something that isn't sentient, but fakes it so well that we couldn't tell the difference. And it speaks ancient Sumerian. Do you?
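The chaining idea above can be sketched as a simple relay. Here, `respond()` is a hypothetical stand-in for a real model call (in practice it would be an LLM API request); the point is only the wiring, where each role's output becomes the next role's prompt:

```python
def respond(role: str, prompt: str) -> str:
    """Hypothetical model call: a real system would query an LLM here,
    with `role` shaping the instance's persona via a system prompt."""
    return f"[{role}] reply to: {prompt}"

def run_chain(roles: list[str], initial_prompt: str) -> str:
    """Relay a message through several AI instances, each in a different role.
    One instance's response becomes the next instance's prompt."""
    message = initial_prompt
    for role in roles:
        message = respond(role, message)
    return message

# Example: a planner, a critic, and a narrator take turns on one message.
result = run_chain(["planner", "critic", "narrator"], "Draft a scenario.")
print(result)
```

The role names here are purely illustrative; the same loop works with any sequence of personas.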

The video game Metal Gear Solid has already tackled this story: an AI creates several deepfakes to trick a super-soldier into doing its bidding and unleashing hell on earth.

In the movie WarGames, an AI with access to nuclear missiles is prepared to launch them at the Soviet Union in an attempt to protect the United States. The hero of the story tricks the AI into a loop through which it learns that nuclear war is a no-win scenario.

What if we didn't regulate AI? What if we never proposed laws to determine liability? Would you want your doctor to only ask ChatGPT for a diagnosis, because that way he would never risk malpractice? How safe would you feel on that operating table? Maybe your accountant would never run the numbers again, because throwing them into ChatGPT leaves him without liability.

In the end it's all about trust, and without trust it's impossible to move forward. Regulating AI, even when it is sold through the frame of total annihilation, is really about creating a framework of trust: one that calms people's fears and enables innovation to advance and take us into a future we all deserve.

Navigating the labyrinth of AI and its implications is not a task for the few, but for the many. It is through dialogue, understanding, and collaboration that we will find the answers we need.

At Aeon Cortex, we've embarked on a journey to demystify the complex and weave together the threads of history, philosophy, futurism, and AI. We believe in the power of collective knowledge and the potential it has to shape the world.

Subscribe to Aeon Cortex. Your intellectual journey awaits you.