Hey Siri,
You suck.
Love,
cars
From the editor: Voice tech on smartphones and smart speakers failed to hook users. LLMs could make the car the gateway drug into a voice-first world.
OpenAI released ChatGPT to the world in November 2022 and, exactly a year later, in November 2023, rolled out ChatGPT with voice to all its free users. The rollout joins a long list of voice-based tech launches that overpromise and underdeliver. We’re edging closer to a future where voice can be the primary input in more and more scenarios (beyond “Google, what’s the temperature outside?” or “Alexa, set a timer for 30 seconds”), but we’re still a long way from a voice-first world.
This article is part of The Intelligently Artificial Issue, which combines two big stories in consumer tech: AI and CES.
The car is the gateway drug to voice-first tech
Since at least March 2023, General Motors, which manufactures Chevrolet, Cadillac, Buick, and GMC cars and trucks, has reportedly been working on a Microsoft Azure-hosted virtual assistant that leverages the AI models behind ChatGPT. Such an AI assistant could go beyond the simple voice commands available in today’s cars by, for example, showing drivers how to change a flat tire (including by playing an instructional video on the car’s display), or by explaining the meaning of a diagnostic light on the dashboard, telling drivers whether they should pull over, and maybe even making them an appointment at a local repair shop.
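None of that requires exotic plumbing. Here’s a rough sketch of the diagnostic-light scenario, assuming an Azure-hosted, OpenAI-compatible chat deployment; the deployment name, prompt, and sample output are hypothetical, not GM’s actual implementation:

```python
# A rough sketch of a dashboard-light explainer. The deployment name,
# prompt, and sample output are hypothetical; this is not GM's system.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def explain_dtc(dtc_code: str, vehicle: str) -> str:
    """Turn an OBD-II diagnostic trouble code into plain-language advice."""
    response = client.chat.completions.create(
        model="car-assistant",  # hypothetical deployment name
        messages=[
            {
                "role": "system",
                "content": "You are an in-car assistant. Explain diagnostic "
                           "trouble codes in plain language, say whether the "
                           "driver should pull over, and keep it brief.",
            },
            {
                "role": "user",
                "content": f"My {vehicle} is showing code {dtc_code}. "
                           "What does it mean?",
            },
        ],
    )
    return response.choices[0].message.content

print(explain_dtc("P0301", "2021 Chevrolet Equinox"))
# e.g. "Cylinder 1 is misfiring. It's usually fine to drive gently to a
# shop, but avoid hard acceleration and get it checked soon."
```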
We haven’t heard anything about GM’s project since.
In January 2024, Volkswagen unveiled its first vehicles with ChatGPT integrated into the company’s in-car voice assistant, which will supposedly be able to control the infotainment, navigation, and air conditioning, as well as answer general-knowledge questions. The company promises that “this can be helpful on many levels during a car journey: Enriching conversations, clearing up questions, interacting in intuitive language, receiving vehicle-specific information, and much more – purely hands-free.” The feature is set to start rolling out “in many production vehicles” in Q2 2024, starting in Europe.
I doubt Volkswagen will issue an OTA update to cars already on the road, meaning that most car owners won’t get this functionality for years. Mercedes, BMW, and Hyundai have also promised to integrate large language models to bolster their in-car voice assistants, but not before 2025.
Carmakers moving slowly does not surprise me. Apple and Google, however, haven’t even bothered to talk about updating CarPlay and Android Auto with LLMs.
As a result, people are getting into their cars, launching ChatGPT with voice on their phones, and just talking. They’re using the chatbot as a sounding board to brainstorm. They’re entertaining their kids by letting them ask endless questions. They’re accessing a large chunk of humanity’s knowledge by just talking while driving.
People are doing this behind the wheel because the car is the perfect location for voice input. Billions of car trips occur solo. In most parts of the world, cars carry an average of about 1.5 occupants per trip. Even before voice chatbots and voice assistants, it was common to see drivers with their mouths moving: talking to themselves, singing to themselves, or talking to someone on a call.
The car is thus a great opportunity for voice tech to shift into gear. Not only is talking alone while driving socially accepted, but the car is an enclosed space: a quiet background reduces misunderstandings in any conversation, including one with a voice assistant.
Buttons, knobs, and chatbots
In August 2022, Swedish car magazine Vi Bilägare published a study comparing 11 modern cars with touchscreens against a 17-year-old Volvo V70 without one. Vi Bilägare measured the time a driver needed to perform simple tasks, such as changing the radio station or adjusting the climate control, while driving at 110 km/h (68 mph). You can guess the result: physical buttons were much faster to use than touchscreens. The driver of the worst-performing car needed four times longer to complete the tasks than the driver of the best-performing one.
It’s thus no surprise that in 2023, carmakers like Volkswagen (including its subsidiary Porsche), Hyundai, and Nissan took public stances about bringing back buttons and knobs for a safer, less distracting driving experience. Touchscreens look great, but they’re not safe, and they give muscle memory nothing to work with.
Neither is opening an app while you’re driving. It’s simply not a good user experience.
Even when my phone is connected to my car, I still find myself grabbing it for tasks that I know will be faster or easier with the phone in my hand. Touchscreens are fantastic on phones but largely suck in cars. Conversely, using your voice sucks on the phone but could be fantastic in the car.
Ideally, when I’m in my car, I only need to press physical buttons, turn physical knobs, and talk—or better yet, keep my hands on the wheel and just talk.
LLMs are a huge opportunity to make voice the default mode of interaction in a car, or at least the secondary one to pressing buttons and turning dials. Carmakers are moving away from touchscreens for good reason, and they should be investing in voice tech instead.
At this stage in the race, we should be able to design an assistant that is smart enough to know when I want to turn up the music, when I want to ask what restaurant used to be on the corner I just passed, when I am talking to someone in the car, and when I am talking to my car. This is not about what AI can do for me, but what I can do with AI.
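That kind of discernment is mostly a routing problem. Here’s a minimal sketch of how an LLM could triage utterances, using the OpenAI Python SDK with hypothetical tool definitions; a thought experiment, not any carmaker’s shipping assistant:

```python
# A minimal sketch of LLM intent routing. The tool definitions and prompts
# are hypothetical, not any carmaker's actual assistant.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [
    {
        "type": "function",
        "function": {
            "name": "set_volume",
            "description": "Adjust the media volume.",
            "parameters": {
                "type": "object",
                "properties": {"level": {"type": "integer", "description": "0-100"}},
                "required": ["level"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "stay_quiet",
            "description": "Use when the driver is talking to a passenger, not to the car.",
            "parameters": {"type": "object", "properties": {}},
        },
    },
]

def route(utterance: str):
    """Let the model triage: a command, a question, or not addressed to the car."""
    message = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "You are an in-car assistant. Call a tool for "
                           "commands, call stay_quiet for passenger chatter, "
                           "and answer general questions directly in text.",
            },
            {"role": "user", "content": utterance},
        ],
        tools=tools,
    ).choices[0].message
    return message.tool_calls or message.content

print(route("turn it up a bit"))                        # likely set_volume
print(route("what used to be on that corner?"))         # likely a text answer
print(route("no sweetie, we'll eat when we get home"))  # likely stay_quiet
```

The design choice that matters here is stay_quiet: an assistant that knows when it isn’t being addressed is the difference between talking to my car and being interrupted by it.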
I don’t care which company is ultimately responsible for making voice tech the car’s default input, but it needs to happen soon. Once humans get hooked on good tech, we don’t easily let go.
If voice tech can leverage LLMs to conquer the car, it could lead to better voice features for smartphones, smart speakers, and the smart home at large. There’s just one problem with this gateway drug leading to more addiction: there’s not enough time.
Voice tech is not going to happen quickly enough. Technology is moving faster than we can adapt to it. I worry that autonomous cars will be the norm before carmakers and tech companies figure out voice tech.
The window of opportunity is closing: If autonomous vehicles become more prevalent first, humans will quickly become distracted by any number of screens, gadgets, and sex, forgetting to talk to their self-driving cars.
Voice input by default can’t be born on the road if self-driving cars turn it into roadkill.
In a world of autonomous cars, voice isn’t as compelling. If no company gets a generation addicted to voice tech, I may never get to talk to my car.