Hey Siri,

You suck.

Love,

cars

Emil Protalinski
Managing Editor, ON_Discourse

From the editor: Voice tech on smartphones and smart speakers failed to hook users. LLMs could make the car the gateway drug into a voice-first world.

OpenAI released ChatGPT to the world in November 2022, and exactly a year later, in November 2023, rolled out ChatGPT with voice to all its free users. The rollout is part of a long list of voice-based tech launches that overpromise but underdeliver. We’re edging closer to a future where voice can be used as the primary input in more and more scenarios (beyond “Google, what’s the temperature outside?” or “Alexa, set a timer for 30 seconds”), but we’re still a long way from a voice-first world.

The car is the gateway drug to voice-first tech

Since at least March 2023, General Motors, which manufactures Chevrolet, Cadillac, Buick, and GMC cars and trucks, has reportedly been working on a Microsoft Azure-hosted virtual assistant that leverages the AI models behind ChatGPT. Such an AI assistant could go beyond the simple voice commands available in today’s cars by, for example, walking drivers through changing a flat tire (including by playing an instructional video on the car’s display) or explaining what a diagnostic light on the dashboard means, telling them whether they should pull over, and maybe even booking them an appointment at a local repair shop.

We haven’t heard anything about GM’s project since.

In January 2024, Volkswagen unveiled its first vehicles to integrate ChatGPT into their in-car voice assistant, which will supposedly be able to control the infotainment, navigation, and air conditioning, as well as answer general-knowledge questions. The company promises that “this can be helpful on many levels during a car journey: Enriching conversations, clearing up questions, interacting in intuitive language, receiving vehicle-specific information, and much more – purely hands-free.” The feature is set to start rolling out “in many production vehicles” in Q2 2024, starting in Europe.

I doubt Volkswagen will issue an over-the-air (OTA) update to cars already on the road, meaning that most current owners won’t get this functionality for years. Mercedes, BMW, and Hyundai have also promised to integrate large language models to bolster their in-car voice assistants, but not before 2025.

Carmakers moving slowly does not surprise me. Apple and Google, however, haven’t even bothered to talk about updating CarPlay and Android Auto with LLMs.

As a result, people are getting into their cars, launching ChatGPT with voice on their phones, and just talking. They’re using the chatbot as a sounding board to brainstorm. They’re entertaining their kids by letting them ask endless questions. They’re accessing a large chunk of humanity’s knowledge by just talking while driving.

People are doing this behind the wheel because the car is the perfect place for voice input. Billions of car trips occur solo; in most parts of the world, the average occupancy of a car is around 1.5 people. Even before voice chatbots and voice assistants, it was common to see drivers with their mouths moving: talking to themselves, singing along, or talking to someone on a call.

The car is thus a great opportunity for voice tech to shift into gear. Not only is talking alone while driving socially accepted, but the car is an enclosed space, and a quiet background reduces misunderstandings in any conversation, including those with voice assistants.

Buttons, knobs, and chatbots

In August 2022, Swedish car magazine Vi Bilägare published a study comparing 11 modern cars with touchscreens against a 17-year-old Volvo V70 without a touchscreen. Vi Bilägare measured the time needed for a driver to perform different simple tasks, such as changing the radio station or adjusting the climate control, while driving at 110 km/h (68 mph). You can guess the result: physical buttons were much less time-consuming to use than touchscreens. The study found that the driver in the worst-performing car needed four times longer to perform the simple tasks than in the best-performing car.

It’s thus no surprise that in 2023, carmakers like Volkswagen (including its subsidiary Porsche), Hyundai, and Nissan took public stances about bringing back buttons and knobs for a safer, less distracting driving experience. Touchscreens look great, but they’re less safe, and they defeat muscle memory.

Neither is opening an app on your phone while you’re driving. It’s simply not a good user experience.

Even when my phone is connected to my car, I still find myself grabbing it to perform tasks that I know will be faster or take less effort directly in my hand. Touchscreens are fantastic on phones but largely suck in cars. Conversely, using your voice sucks on the phone but could be fantastic in the car.

Ideally, when I’m in my car, I only need to press physical buttons, turn physical knobs, and talk—or better yet, keep my hands on the wheel and just talk.

LLMs are a huge opportunity to make voice the default mode of interaction in a car, or at least the secondary one to pressing buttons and turning dials. Carmakers are moving away from touchscreens for good reason, and they should be investing in voice tech instead.

At this stage in the race, we should be able to design an assistant that is smart enough to know when I want to turn up the music, when I want to ask about what restaurant used to be on the corner that I just passed, when I am talking to someone in the car, and when I am talking to my car. This is not about what AI can do for me, but what I can do with AI.
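
To make that concrete, here is a minimal sketch, in Python, of how such an assistant might route a transcribed utterance among those cases. The keyword rules below are a hypothetical stand-in for an LLM-backed classifier (one that would also weigh context like the wake word, recent dialogue, and location); none of the names or logic here reflects any carmaker’s actual system.

    # Hypothetical sketch: routing a transcribed in-car utterance.
    # The keyword rules stand in for an LLM-backed intent classifier.
    from enum import Enum, auto

    class Intent(Enum):
        VEHICLE_CONTROL = auto()  # "turn up the music"
        KNOWLEDGE_QUERY = auto()  # "what restaurant used to be on that corner?"
        NOT_ADDRESSED = auto()    # talking to a passenger, not the car

    def classify(utterance: str) -> Intent:
        text = utterance.lower()
        if any(kw in text for kw in ("turn up", "volume", "navigate", "temperature")):
            return Intent.VEHICLE_CONTROL
        if text.endswith("?") or text.startswith(("what", "who", "when", "where", "why", "how")):
            return Intent.KNOWLEDGE_QUERY
        return Intent.NOT_ADDRESSED

    def route(utterance: str) -> str:
        intent = classify(utterance)
        if intent is Intent.VEHICLE_CONTROL:
            return "send to the car's control API"
        if intent is Intent.KNOWLEDGE_QUERY:
            return "forward to the LLM and speak the answer"
        return "stay silent: the driver isn't talking to the car"

    for u in ("Turn up the music",
              "What restaurant used to be on that corner?",
              "We're almost there, kiddo"):
        print(f"{u!r} -> {route(u)}")

A question aimed at a passenger (“How was school today?”) would fool these keyword rules, and that ambiguity is exactly where a context-aware LLM earns its keep.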

I don’t care which company is ultimately responsible for making voice tech the car’s default input, but it needs to happen soon. Once humans get hooked on good tech, we don’t easily let go.

Self-driving cars won’t let voice tech win

If voice tech can leverage LLMs to conquer the car, it could lead to better voice features for smartphones, smart speakers, and the smart home at large. There’s just one problem with this gateway drug leading to more addiction: there’s not enough time.

Voice tech is not going to happen quickly enough. Technology is moving faster than we can adapt to it. I worry that autonomous cars will be the norm before carmakers and tech companies figure out voice tech.

The window of opportunity is closing: If autonomous vehicles become more prevalent first, humans will quickly become distracted by any number of screens, gadgets, and sex, forgetting to talk to their self-driving cars.

Voice input by default can’t be born on the road if self-driving cars turn it into roadkill.

In a world of autonomous cars, voice isn’t as compelling. If no company gets a generation addicted to voice tech, I may never get to talk to my car.

Early in the AI era, smart devices are still dumb

Emil Protalinski
Managing Editor, ON_Discourse

With CES 2024 closing out last week, we’re beginning to distill and synthesize the most important and unique perspectives as part of our Intelligently Artificial Issue. The show may be over, but there’s an emerging narrative that we’re just beginning to weave together.

Our hardware vs. software debate led us to a predictable conclusion in the AI era: To build a moat, you can’t rely on just hardware or just software. Instead, business leaders must figure out how to leverage software-enabled hardware to deploy robust data strategies. Your differentiator is not your hardware or your software, but how you are collecting data, extracting utility, and offering insights.

More than just a new TV

We were transfixed by the transparent TVs from LG and Samsung, but not because they were the most visually impressive devices at the convention. The discussion quickly focused on where such transparent-screen tech can best be deployed, with retail use cases being more likely than the home.

Quantified health leaps forward

Health was the category with arguably the most promise. We saw devices that suggest more granular diagnostic health data is within reach, pending approvals and clearances from government bodies around the world. While large language models have been trained on text scraped from humanity’s printed word, health care models could soon be trained on data scraped from human bodies. The quantification of our bodies means exponentially more health data, including everyday vitals and patient-led diagnostics, leading to new services, new user experiences, and new business models.

New connected health products from companies like Abbott and Withings, in the home and at the clinic, seem inevitable, collecting far more useful data than current wearables. Although we wondered whether an abundance of health devices and excessive health tracking could lead to new mental health issues, the consensus was that consumer wellness tech could have a profound impact on preventive health care. On the flip side, companies expanded beyond the human body to gimmicks like AI dog collars. We pointed to countless examples of “AI-washing,” wherein companies offer no useful AI capabilities but plenty of marketing material claiming otherwise. Every company wants to be an AI company, and we found ourselves frequently weeding out hype from reality.


Swapping one buzzword for another

In years past, companies were labeling every product and service as “smart.” Now, companies are prepending, inserting, or appending AI to their brand and product names. Nonsense terms like “bespoke AI” don’t help. We posited on our floor tours that while mentioning AI is currently best for your company valuation, using a term like “smart” is more consumer-friendly and representative of what people want from their technology. It’s only a matter of time before marketers come up with something less ominous than “AI” and as apt as “smart” for brands to promote.

Brands, organizations, and UX

Speaking of brands, ON_Members at our briefing event debated whether AI will turn most brands into commodities, and whether brands will lose their importance or become more valuable than ever before. There was general agreement that employers need to hire more experts who understand the human experience, not more experts who understand AI. We also discussed how the AI age could be an opportunity to bring more of the human condition into the user experience.

The death of the smartphone?

The UX discussion often centered on the latest trend of supplementing or outright replacing the smartphone, and what interface would be required for such a device to succeed. These devices fall into two categories: new gadgets, like Humane’s Ai Pin, and AR/VR headsets.

Rabbit’s R1, a palm-sized smart personal assistant device that doesn’t run any apps but can connect to your existing apps, created a lot of buzz. It’s sobering to remember that plenty of CES products, including the exciting ones, ultimately flop (relatedly, Humane wasn’t at CES but laid off some staff during the same week).


Even if nothing looked ready for prime time, the most striking innovation was around input devices, spanning wrist wearables, smart mirrors, and even AR/VR controllers that tap into our bodies’ electrical signals. We saw that AR/VR still isn’t ready in 2024, even with Apple’s usual attempt to steal CES, this time with some Vision Pro news.

VR is powerful, but cumbersome, and doesn’t have any use cases outside of porn, gaming, and maybe exercise. AR is more useful, but the form factor has major trade-offs: poor battery life or few features.

Regardless, it’s clear that someone needs to upend the current touchscreen UX to displace the smartphone.

Most devices are still dumb

In sum, we spotted plenty of products that were solutions in search of a problem. AI and smart labels aside, most devices are still dumb: they’re not anticipating our needs and thus can’t take any useful action.

This brings us back to where we started: The future will not be prompted. While everyone is understandably focused on text prompts, we focused on hints that we could be heading toward a future of ambient data collection and anticipatory interactive technology. If brands can make the move from personalization to anticipation, our behavior will become the prompt for AI.

This is just the beginning of our Intelligently Artificial Issue’s next phase, in which we’ll be presenting unique insights driven by provocations and mapped to our three areas of focus:

  • Will AI drive a new UX?
  • Will AI reorganize the re-org?
  • Will brands matter in the AI era?

Are you interested in being part of the discourse, and contributing to this and future Living Issues? Is there a perspective you think we might be missing? Tell us what you think.