AI is too immature for your business

From the editor: One of our most credentialed AI experts shared essential perspectives for companies betting on AI. The speaker holds a PhD in Brain and Cognitive Sciences and has been working on machine learning models for more than 15 years. As always, ON_Discourse operates under the Chatham House Rule: no attribution of perspectives without explicit consent.

AI is an adolescent

This tech revolution is built on bad behavior.

AI behaves like a teenager. It is moody, unreliable, and unpredictable. It needs constant supervision.

Are you sure you want this kind of technology driving business decisions?

Without guardrails in place, AI models trained on text scraped from the internet will swear at you, call you names, go off on sexist and racist rants, and so on. These are very teenager things to do: saying something just to say something, without realizing what it actually means, how bad it is, or what the social repercussions might be.

Companies form so-called red teams to perform adversarial testing on their AI systems. These tests deliberately try to provoke the worst behavior a model is capable of, so that the company can keep it from behaving that way when it’s talking to real users.

Many people don’t realize that a lot of today’s AI work involves babysitting models so that the end user never notices the tech needs to be babysat. In short, companies are hiding how incredibly immature AI still is.

Keep this in mind before, during, and after you deploy any sort of AI tech across your organization.