AI is Not a Public Utility

Anthony DeRosa
Head of Content and Product,
ON_Discourse
or
Should Bored Apes
Be in Charge of AI?

There are a number of reasons why handing de facto ownership of AI to any one or two companies is not only unfeasible but also a non-starter from both a regulatory and an antitrust perspective. Unfeasible because open source AI models are likely to allow those with modest means to run their own AI systems. The cost of entry gets lower by the day. There’s no way to put that genie back in the bottle.

Charging companies an exorbitant fee for an “AI license,” in the realm of several billion dollars, seems clean and easy to execute, but how could it possibly be enforced? How would you find every AI system? What if they just ran the servers overseas?

Pushing AI innovation outside of the United States is exactly what adversaries would like to see happen. If we somehow found a way to limit the technology here at an infrastructure level, it would simply flourish in places with interests opposed to our own. It would be a massive win for China if we were to abort an American AI innovation economy in the womb.

It’s not even clear what “controlling” AI would mean, since no one company can possibly own the means to utilize AI. It’s like trying to regulate the ability to use a calculator. You can’t put a limited number of companies in charge of “AI infrastructure” because there’s no reasonable way to limit someone’s ability to put the basic pieces in place to build it.

Thinking of AI as a public utility is incoherent. The defining characteristic of a public utility is that the means of distribution, the delivery lines, would not benefit from having multiple parties build them. Unlike utilities such as phone, power, and water, AI has no finite source and no limited set of ways to deliver it. There are many ways AI can be built for different purposes, and having only a few companies build it is not a common good. Making that comparison is a misunderstanding of what AI is and how it works.

Putting government-controlled monopolies in charge of AI would create a conflict of interest for those parties, leading to, among other things, a privacy nightmare and, in all likelihood, a future Snowden-like event revealing a vast surveillance state.

One might argue that we should at least limit large-scale AI infrastructure. As unworkable as that may seem, let’s interrogate the argument with the idea that Apple would “control” that business. Apple has a solid record of protecting consumer privacy, pushing back on law enforcement and government requests to access phone data. That trust would be shattered once the company became an extension of the U.S. government by way of its granted AI monopoly, and its market dominance would likely plummet. It would be a bad deal not only for Apple but for consumers as well.

Some of the most potentially useful forms of AI are private LLMs, which draw on more refined, domain-specific, accurate data within a smaller area of focus. Doctors, scientists, and other specialists benefit greatly from this bespoke form of AI. Putting AI in the hands of one or two large companies would stifle innovation in this area. For those reasons alone, such a consolidation is unlikely to pass muster on regulatory and antitrust grounds.

If we want to put safeguards around AI, there's a better and more realistic way to approach it.

The best way to deal with AI risks is through reasonable regulation. Independent researchers can document the potential risks, and laws can hold AI developers to account. It can happen at both the state and federal levels. Several states are already drafting legislation based on the “AI Bill of Rights,” and other efforts are underway worldwide. Handing over control of AI to a few companies isn’t feasible, doesn’t make good business sense, and wouldn’t necessarily prevent the calamities it is intended to mitigate. Instead, we will need to be clear-eyed about the specific risks and meet them head-on, rather than expecting them to disappear because a few massive tech companies are in control of the technology.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know
