Scenes from a Nashville dinner

There’s No Way I’m Getting a Neuralink Implant

Toby Daniels



Editor’s note: This comes from a private dinner event in Nashville. The room was instantly skeptical of Neuralink’s surgical implantation, which our co-founder interpreted as a challenge. Underneath all that skepticism was a lot of fear and curiosity. Would you do it?

This post was written by human Toby Daniels and narrated by AI Toby Daniels.


“Do you think you would have a chip like Neuralink implanted in your brain in your lifetime?”

This was a question that someone asked at a recent ON_Discourse private dinner.

The room was filled with C-suite business leaders, investors, a music industry exec, and a former professional athlete, and the answer was an unequivocal no, apart from two people, myself included, who said, without question, yes.

Ten minutes later, after an impassioned debate, most of the people who had said no changed their answer. But why?


Before I explain, let’s understand the technology.

Neuralink is a company aiming to build a direct interface between the human brain and computers. Its technology uses extremely thin wires, much thinner than a human hair, that can be inserted into the brain. These wires carry electrodes that detect brain activity and send signals. Neuralink’s technology is designed to bridge the gap between the brain and the digital world, potentially enhancing human capabilities or treating neurological disorders.

It’s worth noting that Neuralink is an Elon Musk company, which comes with baggage. So, the question needed to be reframed slightly: 

“Would you have a brain-computer interface (BCI) implanted in your lifetime?”

Ok, back to why people changed their minds in less than 10 minutes.

Use case

We’ve spent a considerable amount of time talking to leading experts for the Internet 2025 Living Issue. One prevalent theme emerged.

Every past technology interface has proven insufficient. With each technological step we take, we look back and scoff at the inadequacies of what came before.

Remember the quill, the pen, the typewriter, the keyboard, mouse, touchscreens, remember swiping?

“Hey Siri!!” How stupid was voice? 

But BCIs? Bending technology to your will, with a single thought? How do you improve upon that? How could it possibly get more seamless, integrated and non-intrusive? Think of the possibilities!

We have become slaves to our smartphones. Count the number of people on a train, walking across the street, at a restaurant with friends, driving a car for fuck’s sake, who are NOT also on their phones.

You’re telling me you would prefer to live a life where you are tethered to your screen for large parts of the day? You’re happy to half listen and be only a little bit engaged with whoever is sitting across from you? You’re ok that all of this is making you sick?

So, it’s still a no? 

“Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” ― Ferris Bueller

Just to be clear, you will never trust a technology that can interpret the brain’s activity and help control devices externally? What if it was FDA approved? What evidence do you need that the technology is safe, that your data is protected, and that you would finally be untethered from the insufferable weight of your smartphone?

Dude, what if you could also control your TV with your brain? You’d never have to spend time looking for the remote, ever again!

Still a fat no? WTF.

Ok, last question. What if you were a quadriplegic and the device would allow you to regain some, perhaps even all, of your motor functions? Would the gift of extra mobility convince you?

Just to be clear, everyone here would accept a pacemaker, right? Electrical impulses delivered directly into your heart chamber are a yes, but a BCI is a no?


Oh, yes, you said yes?

Actually, one more question. Earlier you said no, especially not Neuralink. You don’t trust Elon. Elon’s bad. He’s evil? What if Neuralink were the only option?


I’m sorry, it sounded like you said yes, but I couldn’t hear clearly over the indignation in your voice.

Just to be clear, you would get Elon’s chip implanted into your brain if it meant you could freely and fluidly interface with computers again?

Alright, so context matters.

The good news is that Neuralink is not the only company working on this, so this might end up being a false choice. BrainGate, Kernel, Openwater, Emotiv, and others are all pioneering in this space, and while it might take a few more years of clinical trials before government approval, it seems inevitable that the ultimate UI, one that we control with our brains, is going to happen in our lifetime. Most of us will get one, not just because of the edge use cases, but because technology always wins, whether you like it or not.



Toby Daniels
Co-founder, ON_Discourse
Sorry, Not Everyone Can Be a Director
Dan Gardner
Founder & Exec Chair of Code and Theory & Founder, ON_Discourse

The worst piece of advice you could give today to a college freshman hoping to work in tech is to tell them to major in computer science, math, or engineering. Same for coding, which is about to go from being a surefire way into the industry to being immaterial.

Automation has been replacing manual labor for decades and now artificial intelligence is ready to take over the bulk of knowledge work. We are on the precipice of a great shift that will drastically change which workers will be the most valuable recruits.

Knowledge workers who, at the turn of the century, were described as the most important workers within a modern, thriving organization, will be replaced by what we’re calling direction workers. The evolution in technology isn’t so much going to eliminate high-end human jobs, it’s going to change what high-end human jobs look like and require.

For the last 60 years, knowledge work has been used to describe a kind of intellectual work that demands a high degree of specialization or training, and the ability to perform non-routine tasks like problem-solving, analysis, decision-making, and the creation of new information. Knowledge workers were the upper crust of all white-collar workers: financial analysts, architects, lawyers, data scientists, and engineers. 

That was before. 

Across many disciplines, knowledge work is already being replaced. In the financial sector, AI systems are able to analyze vast amounts of data and make sophisticated investment decisions. In healthcare, AI systems are able to diagnose medical conditions and recommend treatments with a high degree of accuracy. AI systems don’t take days off; they do not call in sick. They can work 24 hours a day. 

But the shifts in these sectors do not just show AI replacing human skills. They show a need for a new kind of human skill set. This is where the direction worker comes in. 

I use “direction” not so much to convey the management work of a director in a company but more to refer to the literal act of directing, as in instructing or conducting. It could just as easily be called “Instruction Work.”

The image of an orchestra conductor comes to mind, expertly guiding musicians and instruments to produce the right sound. The image of a NASCAR driver may even be more appropriate. The engine may be beautiful, but it won’t win the race without the expertise, the direction, of its driver.

In finance just as in healthcare, human workers are needed to provide direction to AI systems even as they are no longer required to crunch the data themselves. On the tail end, human workers also need to evaluate the results, use critical, lateral thinking, and offer follow-up instructions. 

Ferenc Huszár, a machine learning professor at the University of Cambridge, tweeted last year that the current version of OpenAI’s large language model, ChatGPT, would be a good teaching tool in mathematics, precisely because its answers are sometimes wrong. “Give it a problem, it generates convincing-looking but potentially bullsh*t answer, ask the student if they are convinced by the response,” he wrote.

What Huszár is suggesting here to me is not just teaching students to simply produce an accurate answer, but to develop an ability to go past the appearance of a fact and, with a critical eye, evaluate whether it is indeed accurate. If it isn’t, that eye needs to figure out why not, edit the original question, and do it all over again.

As systems progress, there may be less need for correction and editing, but the need for direction will not disappear. Ever-improving technologies will only call for more excellent direction. 

Where to find and how to train these direction workers then becomes the question. 

I am not sold on telling young people to just go to business school. Sure, we need a generation of leaders who understand how to manage this new landscape, but more than managers we need critical thinkers who can ask the right questions, look for blind spots, understand connections, and have the creativity and humility to rethink the problem at its end and at its base.

Direction workers are likely going to be people who can juggle different skill sets all at once: dual majors in math and anthropology, PhDs who have trained both quantitatively and qualitatively, journalism majors who work with Python, law school graduates willing to engage with the practicalities of coding and ethics. In short, we are going to need what David Epstein called “generalists” in his best-selling book, Range.

Recognizing that being competitive in the marketplace over the next ten years will look totally different than it did in the last ten is not just a task for young professionals. Those of us in business should also be paying attention: The biggest cost to businesses over the next decade will be hiring the wrong people with the wrong skill sets.

As Max Penk put it in a post on LinkedIn earlier this year:

Good news: AI will not replace you. Bad news: a person using AI will.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know