A Brush With AI: The Copyright Fight For Digital Creativity

The U.S. Copyright Office needs to adapt to the AI future.

Generative AI presents a tremendous opportunity to unlock human creativity in ways that never could have been imagined when the concept of copyright emerged.

But the U.S. copyright regime has become an obstacle to this new era of innovation, and not because of what the law actually says.

The U.S. Copyright Office is contorting decades of precedent to impose arbitrary rules on when and how creators seeking copyright protections can use AI tools in their work. If it doesn’t get with the times, the consequences won’t just be limited to artists like me.

In 2022, my painting Théâtre D’opéra Spatial won a Colorado State Fair award for digital art. I decided to register it with the USCO, even though I knew the office might shoot down the request because I produced it using Midjourney AI.

Jason M. Allen

President and CEO of Art Incarnate, a company specializing in luxury AI fine art prints. His creation “Théâtre D’opéra Spatial” won first place in a Fine Art competition, sparking a global controversy about AI in art.

That’s because the painting had already attracted massive controversy. Even though it was always labeled as an AI-assisted artwork, and the judges confirmed it was a fair decision, critics accused me of lying, art theft, and worse.

What I didn’t expect was for the USCO to write off the entire submission with a dismissive, one-page response: “We have decided that we cannot register this copyright claim because the deposit does not contain any human authorship.” It was hard to believe the nation’s copyright authority would issue a decision on such a consequential issue—who owns the rights to AI art?—with so little substance.

It wasn’t until Tamara Pester Schklar, my intellectual property lawyer, and I pushed back in a request for reconsideration that we received any elaboration. The USCO did admit one mistake. The extensive editing and modification I did to the original AI-generated work was in fact eligible for copyright registration. The USCO also indirectly confirmed that its decision was not based on a finding that AI-generated art infringes on others’ copyrights.

The AI cannot read my mind and steal my ideas for itself.

But it doubled down on denying a copyright to the entire work with an argument that, if accepted, would be disastrous: not only am I not the author of Théâtre D’opéra Spatial, but it is “clear that it was Midjourney—not Mr. Allen—that produced this image.”

That’s just not true. The painting wasn’t spit out by Midjourney at random; it reflected my artistic process of iterative testing, conceptualization, and refining outputs until I found an image that translated my vision into reality. But more importantly, each of those steps was the result of careful and deliberate instructions entered by a human. The AI cannot read my mind and steal my ideas for itself.

Courts have consistently updated their interpretations of copyright to reflect new types of technological innovation (and to challenge biases against them). For example, in 1884, the Supreme Court ruled against a company which had distributed unauthorized lithographs of photographer Napoleon Sarony’s portrait of Oscar Wilde, shooting down the firm’s claim that photography is a mere mechanical process not involving an author.


In other words, the court found Sarony’s camera was a tool through which the photographer translated his mental conception into an artistic work. AI generators like Midjourney might rely on complicated algorithms instead of light focused through a lens onto a film coated in light-sensitive chemicals, but it’s a difference of degree, not kind.

There is no incomprehensible magic going on; Midjourney’s servers don’t have a consciousness of their own. They run diffusion models calibrated using a training set of millions of other images. Diffusion models use deep learning to deconstruct data from the training set, add random Gaussian noise, and then attempt to reconstruct the original data. The end result is a model tuned to generate entirely original images.
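To make the mechanics concrete, here is a minimal sketch of the forward “noising” step a diffusion model is trained to reverse. It assumes a standard linear beta schedule; the array shapes and variable names are illustrative, not Midjourney’s actual code.

```python
import numpy as np

# Illustrative forward-diffusion step (assumption: a standard linear beta schedule,
# not Midjourney's actual implementation).
def make_schedule(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    betas = np.linspace(beta_start, beta_end, num_steps)  # per-step noise amounts
    alphas = 1.0 - betas
    return np.cumprod(alphas)                             # "alpha bar": cumulative signal retention

def add_noise(image, t, alpha_bar):
    """Noised version of `image` at step t: x_t = sqrt(ab)*x_0 + sqrt(1-ab)*eps."""
    eps = np.random.randn(*image.shape)                   # Gaussian noise
    ab = alpha_bar[t]
    return np.sqrt(ab) * image + np.sqrt(1.0 - ab) * eps, eps

alpha_bar = make_schedule()
x0 = np.random.rand(64, 64, 3)   # stand-in for a training image, values in [0, 1]
x_t, eps = add_noise(x0, t=500, alpha_bar=alpha_bar)
# A denoising network is trained to predict `eps` from `x_t`; at generation time the
# process runs in reverse, starting from pure noise and producing a new image.
```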

As we told the USCO in our second request for reconsideration, tools like Midjourney are interactive systems that by definition require humans to code, train, and run them. They’re cameras with extra steps. To put it another way, the USCO is effectively arguing the camera is now so complicated it’s also the photographer!

U.S. copyright law is clear that non-humans cannot author or own the rights to art in any meaningful legal sense. Monkeys are indisputably more self-aware than a large language model. Yet when one picks up a Kodak in the jungle and clicks a button, it doesn’t get the rights to any accidental selfie.

Even the Copyright Office’s decision that my modifications to the raw image in Photoshop are copyrightable as a derivative work isn’t free of haphazard reasoning that should alarm any artist. While the USCO conceded human authorship of the edits, it found the “appreciable amount of AI-generated material” in Théâtre D’opéra Spatial required limiting the copyright to “exclude the AI-generated components of the work.”

But it hasn’t clarified what constitutes an “appreciable” degree of AI content, even though it seems to draw distinctions between an author’s creative input and the tools they use.

No one can make those decisions without interrogating a creator’s methods and processes. Copyright Act precedent clearly limits a court’s or reviewing agency’s review to the final product as perceived, not how or why it was designed.

Théâtre d'Opéra Spatial - Jason M. Allen / Midjourney - The Colorado State Fair 2022

Do we really want the office applying subjective determinations as to what parts of the artistic process are worthy of protection? Consider the rise of AI-assisted features like Photoshop’s generative fill, which uses machine learning to create and modify elements of an image, and the mess that would result if the Copyright Office continued to apply the same capricious standards.

This isn’t theoretical. On similar grounds of non-human authorship, the USCO denied that I could copyright versions of the painting upscaled via Gigapixel AI, a tool that simply enhances pre-existing details in an image without introducing any original elements. Similar tools like Photoshop filters are already in widespread use. Any remaining contrast between non-AI and AI-powered features of image editing programs will inevitably disappear as their developers introduce more powerful features.

This fight is not about me, and it’s not just about art. Already, my company Art Incarnate has been forced to come up with creative solutions for releasing fine art prints, like taking photographs instead so they can be protected as derivative works. Other workarounds like open source licensing can only do so much.

Businesses and entrepreneurs are rapidly adopting generative AI in other fields ranging from entertainment and media to publishing. At some point, though, the novelty will wear off and they will be forced to confront practical commercial challenges. Without assurances that they will be able to safeguard their intellectual property, or that they won’t be forced to jump through IP loopholes to prevent competitors from copying their work, they may be hesitant to continue investing in AI technologies.

Nor is this fight, as critics argue, a vehicle to cheapen the output of traditional artists. We are fighting to extend ownership protections to all creatives who utilize AI in their work, helping keep their livelihoods intact.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know


Will AI Enhance or Destroy the Business of Media?

Matthew Chmiel
Head of Discourse

The largely agreeable July 28 virtual event sparked more introspection than discourse.

At our inaugural virtual event, we invited three guest experts to discuss the intersection of AI and digital media.

Nearly 30 of our ON_Discourse members joined us that day and were provoked by Dan Gardner and Michael Treff. I spoke to Dan before the event to get a sense of his goals for the discussion. “Everyone in our field expects AI to be a massive disruption; I was hoping to get past the hype and dig into the details.”

From my perspective, the discourse was light, almost too pleasant and agreeable to qualify as deep discourse, but the good news is that the conversation isn’t over yet. Check out some of our key takeaways and let me know if anything you see here makes you want to push this further.

As always, the pull-quotes are not and will never be attributed.

Provocations Used

  • AI will destroy all digital distribution models
  • Fake news is all news
  • LLMO (Large Language Model Optimization) is the new SEO

Discourse Rating

  • Agreeable — there were no obvious and direct disagreements in the group. Let’s turn up the heat next time! As always, send me a note and we’ll find a way to get your follow-up arguments into the mix.

Recap

This is going to destroy the internet as we know it


Today’s internet is scraped and organized and ranked by Google. Those results are still, for the short term, listed as links that send users to owned and operated web pages. The introduction of chat-based interfaces, where LLMs process the information from the same web pages and then directly deliver that information, without links, is going to flip the paradigm and ruin the internet.  It will homogenize the sources of information in the eyes of audiences.

The key takeaway here is the irrelevance of owned and operated pages – pretty soon, it seems, the need for websites will fade away – as long as the information is available and presented in the chat query.

— Provocation – AI will destroy all digital distribution models

—— what did you think?

Or…

It’s putting agency back in the hands of the user

The evolution of prompting LLMs is a three-way interaction between an individual, the LLM, and the community that is training the LLM. Publishers will have to think in new ways — they can’t just ship some content and stop thinking about it. The essence of this interaction model is that the content gets shaped by the prompts. This is empowering audiences in new ways.

Forget that nonsense about irrelevant web pages: this new interface and behavior (prompting and querying) are going to reveal new needs among users that will deepen their relationship with the true sources of this information. The fact is, the distribution model for digital media has been fundamentally broken ever since Web 2.0. This might actually fix it!

— Provocation – AI will destroy all digital distribution models

—— what did you think?

But….

AI needs an attribution protocol

Several members and guests focused on a specific software requirement that will address questions of reliability and trust. The brands, voices, and processes that delivered the information must persist, in some way, in the final delivery of generated information. In other words, if the LLM is scraping an article that comes from the New York Times, that brand and even that author must be clearly attributed in the answer. There is reason to believe this type of protocol can be administered because publishers have the leverage – LLMs need good source material to be relevant, and that puts the power back in the hands of the publishers.

— Provocation – Fake news is all news

—— what did you think?

On the other hand…

IT SUCKS

That’s the current state of audience trust in publisher platforms. Will LLMs really make it better? Anecdotal audience research shows that Apple News readers assume that Apple News is a news publication and that the articles they are consuming are coming from Apple News (not the brands and reporters that distribute through that platform). When you remove all links and bury attribution in generative text, you remove all chances of establishing a meaningful relationship between the audience and the brand that created the content.

— Provocation – AI will destroy all digital distribution models

—— what did you think?

… but wait…

Young people are savvy about finding trustworthy sources in the new internet

They have a radar for authenticity, which is the antidote to the synthetic content they’ve been receiving all their lives. They find authenticity from some brands, but mainly from influencers online. This isn’t always good, though, because not all influencers are trustworthy. On top of all that, we don’t know if the AI itself is going to contain any bias that will influence its answers.

— Provocation – Fake news is all news

—— what did you think?

… and…

AI is just software, just like everything else on the internet

SEO emerged as a discipline after some engineers reverse-engineered the ranking algorithm. The same kind of reverse-engineering is going to happen with LLMs – even though they are exponentially more complex, it will eventually happen. And when it does, there will be more attributions embedded in LLM answers. That will drive up the fees publishers can extract from LLMs, because one key difference between an LLM and the internet at large is that “shitposting will not get you anywhere on ChatGPT.”

— Provocation – LLMO is the new SEO

—— what did you think?

… finally

The Web is undefeated

Many of our members and guests have been working in digital media for decades and they have read (or written) many pieces about the impending demise of the internet – and still it persists.

— Provocation – LLMO is the new SEO

—— what did you think?

Fediverse Deserves Your Notice, Even If You’re Not Using It

Ernie Smith
Editor of Tedium, a twice-weekly internet history newsletter, and a frequent contributor to Vice’s Motherboard.

Mastodon is known as a Twitter alternative for the technical, but its underlying protocol could help solve some of social media’s biggest headaches.

You might find it a bit uncomfortable to build your social presence on the digital frontier, which is perhaps why Meta’s Threads or Bluesky look a lot more attractive to brands than the fediverse—a loose connection of federated networks, most notably Mastodon, built on an open-source communications protocol called ActivityPub. Even LinkedIn might feel safer in comparison.

The social media pros I’ve talked to have suggested it feels like chaos and not in a good way. That probably explains why brands haven’t made the leap, unless their audiences are suitably technical. (My Linux jokes do really well there.)

Even though I’m a fan, I’ll be the first to admit that the fediverse doesn’t have the fit and finish of a network built for mass consumption. Still, with a little help from Twitter and Reddit’s recent troubles, it has attracted roughly 10 million total users and nearly 2 million monthly active users, according to FediDB.

While that pales in comparison to the estimated 100 million users on Threads, it’s well above Bluesky’s 1 million users and Post.news’ 440,000 users. (Threads, of course, benefited from Instagram’s existing network effects.)

That may not be enough to move your needle. But ignoring the potential of ActivityPub entirely is a mistake because services like it often shape the corporate world. It could be a way to control your brand’s digital destiny.

Most social networks work on a hub-and-spoke model, where users pull information from one centralized resource. Platforms on the fediverse instead run on servers that can talk to anyone else on the network.

The effect is similar to early social networks like GeoCities, which relied on interest-based communities. Mastodon ups the ante by encouraging robust local servers with distinct local timelines—servers that can still talk to the outside world. If you’re into anime, you can join a server with a robust anime community—but you don’t always have to focus on anime. The real power of the fediverse, though, is that these servers can scale into something the size of Twitter while relying on a network of individual hosts who share the server bill and resources.
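For a sense of the plumbing, here is a minimal sketch of the kind of message servers exchange under ActivityPub: a Follow activity sent from an account on one server to an account on another. The account URLs are hypothetical, and real delivery also requires an HTTP-signed POST to the recipient’s inbox, which is omitted here.

```python
import json

# A bare-bones ActivityPub "Follow" activity. The actor/object URLs are hypothetical
# examples; in practice the sending server also signs the request (HTTP Signatures)
# before POSTing it to the recipient's inbox endpoint.
follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://anime.example/activities/12345",
    "type": "Follow",
    "actor": "https://anime.example/users/alice",    # account on one server
    "object": "https://mastodon.social/users/bob",   # account on a different server
}

# The receiving server would accept this at an inbox URL such as
# https://mastodon.social/users/bob/inbox and reply with an Accept activity.
print(json.dumps(follow_activity, indent=2))
```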

This has benefits from a moderation standpoint. Dealing with trolls on other social networks is a bit of a crap shoot, but joining smaller servers on Mastodon or competing platforms like Firefish allows you to tailor your experience accordingly.

Prefer free speech over heavy moderation? You can join a server like that. Want to join a more closed-in community instead? You can join a server using software with tighter controls, like Hometown, or join a Reddit-style community hosted on Lemmy.

Just want to use something that feels like Twitter? Hop on a larger server, like Mastodon.social. And if the trolls show up, server mods can block both them and their server—and so can end users.

And if you find that one server isn’t a fit for you, you can take your identity and your followers—but not, in most cases, your posts—to a new server with you.

All this can confuse folks used to centralized networks (i.e., most of us), but it’s a throwback to earlier internet distribution models. The fediverse model evokes Usenet, for example. Another common comparison is XMPP, an open-source protocol commonly used as an alternative to AOL Instant Messenger. And the account naming conventions closely follow the cues of email.
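That email-style naming is machine-readable, too. Below is a minimal sketch of how a fediverse handle typically resolves to an account’s ActivityPub profile via a WebFinger lookup, assuming a Mastodon-compatible server; the handle is illustrative and error handling is omitted.

```python
import requests

def resolve_handle(handle: str) -> str:
    """Resolve an @user@domain handle to its ActivityPub actor URL via WebFinger."""
    user, domain = handle.lstrip("@").split("@")
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    )
    resp.raise_for_status()
    # The response lists links; the "self" link points at the actor document.
    for link in resp.json().get("links", []):
        if link.get("rel") == "self" and link.get("type") == "application/activity+json":
            return link["href"]
    raise ValueError("No ActivityPub actor link found")

# Example (hypothetical handle): resolve_handle("@alice@mastodon.social")
```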

To put it all another way, ActivityPub is open architecture, like RSS. That kind of plumbing will become more valuable on the internet over time, even if it never reaches the scale of Twitter, X, or whatever Elon’s calling it today.

That puts it closer to IRC or old-school mailing lists, which are still used in many pockets, even if they aren’t quite mainstream. These old networks can still have commercial value—XMPP, once known as Jabber, has evolved into an essential communication protocol for the internet of things.

Unlike those, however, ActivityPub is riding a wave of developer buzz, which makes it likely that later generations of apps will support it natively. It has formal support from the World Wide Web Consortium. And it can plug into many mediums, making it possible to share content once and have it spread across many platforms, which makes supporting multiple networks with your content far easier.

Admittedly, some challenges could dampen its uptake. While Meta is publicly interested in connecting Threads to Mastodon, many existing Mastodon users are understandably concerned that it might ruin the network—and there’s a chance that could scare Meta off. (Threads already has a huge user base, after all.) But others see this as a way to strengthen the value of the network, and platforms like Tumblr, Flickr, Flipboard, and Medium are also interested in joining the fediverse.

If the fediverse does find a way to get past its cultural challenges with commercialism, it could solve a pair of problems that often slow down major brands online: building an audience and building trust.

Every time a new social network appears, time is wasted convincing your biggest advocates to follow you on the hot new thing, with no guarantee that the network will be anything other than a flash in the pan. That uncertainty is why many brands have continued to focus on building traditional email lists.

Plus, there’s the whole factor of impersonation and verification, which Elon Musk has muddied the waters on, but remains a major problem for large companies. The fediverse has a much better solution than many proprietary networks: You could self-host your account and attach it to your domain, just like you might host a content hub, and have that account plug into the rest of the fediverse.

A social presence hosted on a first-party website could naturally carry a level of brand association that a checkmark next to a username might lack these days. (If you don’t want to use your domain, Mastodon also has a very easy form of self-verification that works quite well, no $8 surcharge needed.)

You could transfer your existing followers to another ActivityPub-compatible server, potentially speeding up ramp-up time to the next new thing.

Some examples of what this could look like are already in the wild—the European Commission, for example, has 85,000 followers on an account on its own dedicated server.

If Threads and other networks plug into the fediverse, it’s not out of the realm of possibility that large companies might be able to build connections with their fans this way while avoiding some of the inherent risks that traditionally come with new networks.

The idea of “owning” your audience has been a somewhat foreign concept in the social media era. The fediverse could finally make it possible.

What does it mean to take your following with you?

It means you can move from one fediverse server to another without losing the people who followed you on the old server, as long as the two servers aren’t restricted from connecting to each other. Most fediverse servers can speak to each other, except in rare cases when they prefer not to.

What happens if your following isn't on the new server?

In those rare cases where one server blocks a connection to another, and your followers are on that server, you won’t be able to speak to them.

What determines whether your content comes with you?

Your followers can follow you to a new server, but your posts stay behind on your previous server; they remain accessible as long as that server stays online. As stated earlier, so long as the two servers can speak to each other, your old content stays reachable from the new server even after you move.

Can you join two servers? What happens if you’re very interested in two?

You have to choose a home server, so you can only be primarily on one. You can still see content on that other server, and its users can see yours, as long as the two servers allow connections to each other.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

Good Artists Copy, Great AIs Steal.

You are invited to:

Exploring the forces of disruption in entertainment, media and tech

August 9th, 6pm-10pm
Water Mill, Hamptons, NY

ON_Entertainment invites you to participate in Good Artists Copy, Great AIs Steal, an evening set in a modern-styled barn nestled within the farm fields of Bridgehampton and driven by deep provocations that serve to unlock new perspectives on the business of entertainment.

Entertainment executives, technology founders, artists, entrepreneurs, and investors will come together for a captivating evening of off-the-record conversations, enhanced by the smooth allure of Komos Tequila and the perfect accompaniment of caviar.

To top it all off, an exquisite dinner awaits, artfully paired with the delightful and organically farmed Avaline wines, courtesy of founders Cameron Diaz and Katherine Power. It promises to be an unforgettable gathering of visionaries, indulging in both fine company and delectable flavors.


RSVP to dharika@ondiscourse.com 

Please reach out for travel and accommodation recommendations if you need them.

WHAT IS

ON_Discourse is a new membership media company focused on the business of technology, prioritizing expert-driven discourse to drive perspectives.

WHO?

Designed for executives, brands, celebrities, entrepreneurs, and business leaders at the intersection of technology and entertainment

YOUR HOSTS

Dan Gardner
Co-Founder & Exec Chair,
Code and Theory
Co-Founder, ON_Discourse
Toby Daniels
Co-Founder, ON_Discourse
Brandon Ralph
Founder, The Unquantifiable

With thanks to our partners

Threads: Who cares?

Dan Gardner
Co-Founder & Exec Chair,
Code and Theory
Co-Founder, ON_Discourse

All the talk about Threads would make you think it could be as potentially transformative as ChatGPT. 

The headlines read: “100 million sign-ups in 5 days.” Wow, this must be important! 

Podcasts are endlessly talking about it. Media trades can’t stop writing about it. Creative advertising agencies are already the experts on how to use it. Aren’t we lucky?

Let’s put the 100 million sign-ups into perspective. Justin Bieber still personally has more followers on Twitter than the entire Threads platform.

100 million is still a small percentage of total Instagram or even Facebook users, despite the frictionless way you can sign up. If you advertised a free service to over 2 billion people, is a 5% sign-up rate a success? Why is the media jumping all over this like it’s an indication of success or a change in behavior?

And most importantly, why would you use it if you care about text-based social networks and already have a proven platform that delivers? Is a dislike for Elon Musk the main driving force for change? And so people go to Meta instead?!

WHAT A WASTE OF TIME

Is this the new Apple vs. Microsoft passion war of the ‘80s and ‘90s? Doesn’t seem like it. At least with the Apple vs. Microsoft brand war there were product attributes or design tastes that were clearly different and better.

Threads has no clear behavior-driven product experiences that are new or superior. YouTube popularized community-generated content, Facebook allowed you to connect in new ways, WhatsApp allowed you to communicate in new ways, Twitter allowed you to discover content in new ways, Snap created ephemeral communication and sharing, TikTok created a new mobile-first, faster way to share and consume content, Pinterest allowed you to be inspired in a new way, and even more recently ChatGPT allowed you to create in new ways. Threads has no clear proposition that is different.

This feels like the media is talking to themselves. It’s meaningless to most people. 

And for advertisers, it’s meaningless to them as well. At least for now.

I can reach more people with Bieber or the Super Bowl if I want mass advertising with no targeting.

Will it pick up steam? Maybe. I’m not making a prediction either way. But right now it’s much ado about nothing. Just something to speculate and talk about. 

Big business is not doing anything differently because Threads exists today, but the mainstream and trade media would have you believe otherwise.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

You're invited to:

Will AI Enhance or Destroy the Business of Media?

July 28th, 2023 
Virtual Event

12:00 pm- 1:00 pm ET

As an ON_Discourse member, we invite you to join a discussion that aims to deepen our understanding of AI’s impact on the business of media.

We will be joined by Taylor Lorenz, former tech reporter for the New York Times and now a columnist at the Washington Post; Edmund Lee, trust editor for the New York Times; and Trei Brundrett, co-founder of Vox Media.

Gain insights into the future of AI-driven journalism, personalized content, disruptive distribution models, and the convergence of traditional and digital formats that set the stage for a media revolution.


RSVP to dharika@ondiscourse.com

Special Guests

Taylor Lorenz
Tech Columnist,
Washington Post
Trei Brundrett
Co-Founder & Senior Advisor,
Vox Media
Edmund Lee
Trust Editor,
The New York Times

Why Attend?

We are Surrounded by Fake-experts Lacking Depth in their Thinking

Ideas in our Industry are Trapped within Conventional Boundaries

The People in our Industry Often Think the Same

Unintended Consequences in Tech Lead to Costly Mistakes in Business

Perspective is Everything.

ON_Discourse is a new membership media company focused on the business of technology, prioritizing expert-driven discourse to drive perspectives.

AI is Not a Public Utility

Anthony DeRosa
Head of Content and Product,
ON_Discourse
or
Should Bored Apes Be in Charge of AI?

There are a number of reasons why handing de facto ownership of AI to any one or two companies is not only unfeasible, but also a non-starter from both a regulatory and an antitrust perspective. Unfeasible because open-source AI models are likely to allow those with modest means to run their own AI systems. The cost of entry gets lower by the day. There’s no way to put that genie back in the bottle.

Charging someone to obtain an exorbitant “AI license” in the realm of several billion dollars seems clean and easy to execute, but how could it possibly be enforced? How would you find every AI system? What if they just ran the servers overseas?

Pushing AI innovation outside of the United States is exactly what adversaries would like to see happen. If you were to somehow find a way to limit the technology here at an infrastructure level, it would simply flourish in places with interests against our own. It would be a massive win for China if we were to abort an American-based AI innovation economy in the womb.

It’s not even clear what “controlling” AI would mean, since no one company can possibly own the means to utilize AI. It’s like trying to regulate the ability to use a calculator. You can’t put a limited number of companies in charge of “AI infrastructure” because there’s no reasonable way to limit someone’s ability to put the basic pieces in place to build one.

Thinking of AI as a public utility is incoherent. The defining characteristic of a public utility is that the means of distribution, the delivery lines, would not benefit from having multiple parties build them. Unlike utilities such as phone, power, and water, there is neither a finite source for AI nor a limited number of ways to deliver it. There are many ways AI can be built for different purposes, and having only a few companies build it is not a common good. Making that comparison is a misunderstanding of what AI is and how it works.

Putting government-controlled monopolies in charge of AI would create a conflict of interest for those parties, leading to, among other things, a privacy nightmare and a likely Snowden-like event in our future that reveals a vast surveillance state.

One might argue that we should at least limit large-scale AI infrastructure. As unworkable as that may seem, let’s interrogate that argument with the idea that Apple would “control” that business. Apple may have a solid record of protecting consumer privacy, pushing back on law enforcement and government requests to access phone data. But that trust would be shattered once it became an extension of the U.S. government by way of its granted AI monopoly, and its market dominance would likely plummet. It would be a bad deal not only for Apple but for consumers as well.

Some of the most potentially useful forms of AI are private LLMs, which have more refined, domain-specific, accurate data within a smaller area of focus. Doctors, scientists, and other specialists benefit greatly from this bespoke form of AI. Putting AI in the hands of one or two large companies would stifle innovation in this area. For those reasons alone, the idea is unlikely to survive regulatory and antitrust scrutiny.

If we want to put safeguards around AI, there's a better and more realistic way to approach it.

The best way to deal with AI risks is through reasonable regulation. Independent researchers can document the potential risks, and laws can hold AI developers to account. It can happen at both the state and federal levels. Several states are already drafting legislation based on the “AI Bill of Rights,” and other efforts are happening worldwide. Handing over control of AI to a few companies isn’t feasible, doesn’t make good business sense, and wouldn’t necessarily prevent the calamities it was intended to mitigate. Instead, we will need to be clear-eyed about the specific risks and meet them head-on, rather than expecting them to disappear because a few massive tech companies are in control of the technology.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

MORE
ON_AI

Monopoly or Oligopoly, Seriously? With an idea like that, we will soon be crowning Oligarchs.  

Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

Larry Muller
ON_Discourse Co-Founder, COO of Code & Theory

or
Should Bored Apes Be in Charge of AI?

I do not trust Bored Apes, and they are not who should be regulating AI. But making Big Tech bigger by allowing them to be the regulators? That’s like wishing for bigger banks with more control. Maybe not in the short term, but in the long term, it ends badly. We already have a too-big-to-fail Big Tech problem, so we shouldn’t double down on it.

BIG TECH

When we think of Big Tech, Apple or Microsoft and trust don’t go together. I’m pretty sure anytime either of those companies (or the others) has an opportunity to monopolize and create unfair advantages through its ecosystem, it does: Maps, Browser, Messaging, App Store, etc. Ultimately innovation gets stifled, reducing the incentive to be inventive when your path can be crushed immediately by the players who are the gatekeepers of distribution.

The smartest people from AI and Big Tech, including Sam Altman and others, have signed petitions and spoken to Congress about the need for regulation, but surprisingly those smart people also don’t have any tangible solutions (or they don’t actually want them?). How do we expect them to regulate if they can’t even propose a solution?

Should not

Ultimately, the cat is out of the bag. There will be too many spinoffs in too many countries that can’t be regulated, all working on advancing AI. The best we can do is work on next-generation cybersecurity and think about fail-safe switches that would allow a shutdown or disconnection of systems should AI fall into a bad actor’s hands (or, worse, out of any human’s hands).

Every sector has regulation, so the government should continue to develop innovation-friendly regulation. Government, although not perfect, is not driven by profit; it is built to serve and protect. Regulation should still sit in its hands, with elected officials who are held accountable to the people.

GET BIGGER

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

MORE
ON_AI

Should Bored Apes be in charge of AI?

Larry Muller
ON_Discourse Co-Founder, COO of Code & Theory
Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

Do you believe in free markets? I do. I know a lot of people do. Competition drives innovation and fairness to the consumer. Regulation is stifling. It slows growth and free enterprise and prevents innovation.

Maybe.

Or maybe not.

or
Big Tech Should Not Get Bigger

Who do you trust? Bored Apes? Bankman-Fried? Kevin Rose? Facebook? TikTok? Should they all have an opportunity to participate in AI? Have they earned it? There are thousands of others I could put on the list. Not necessarily “bad” enterprises or “bad” people (some are). Just companies and people who have not earned our trust. Imagine the mess they would all create.

Two companies that have earned our trust are Apple and Microsoft. Over the past 30 to 40 years they have performed through internet bubbles, financial crises, pandemics, recession, and inflation. They have satisfied the consumer in many ways and have not been accused of stealing (maybe from each other). You may not agree with the 30% vig on the Apple App Store, but it’s far from stealing. It has created safety and standards that have provided trust at scale for consumers.

These are two trusted vendors to the world. Many startups became rich doing business with both. Even as we complain about their control and the unfairness of their systems, they have helped change and enrich the world. Why wouldn’t we want them to be the stewards of the AI revolution? Can you think of better companies to safely lead us?

They are like a better version of the cable companies from 30 years ago, when there was no competition but everything was priced fairly and pretty much everything worked. Yes, everyone “hates” the cable company, but the value exchange overall worked. Their service could always improve (and it has over the years), of course. Nothing is perfect. People complained about rising costs, but it beats the cord-cutting world we have now, where we subscribe to multiple streaming services. Most people’s costs are higher now, with no unified bill or clear understanding of what they’re paying for.

Monopoly

Another example: the phone companies of the ’60s and ’70s, or the utilities. Boring and regulated and licensed. One choice, no confusion about value, no scammy deals, no discounts rolling off and then being billed three times as much. Choice is not always better.

There is a lot of talk about AI regulation. You even have Sam Altman going to Congress and acknowledging the need. Some have argued that he has openly asked Congress for it because he knows the government has no ability to do it under the current spectrum of thinking. The speed of AI innovation is almost too fast for a government agency to stay relevant, with all the top-paying jobs going to the private sector. Government is typically inefficient in its approach and is mired in challenges that the private sector doesn’t have. So what is the balance between government control and regulation and the freedom of the capital market that has spawned such innovation? A license to Big Tech. Similar to the cable, phone, and utility companies mentioned above. It’s not perfect, but it has worked.

or Oligopoly

Why not sell them a license to run the AI infrastructure and close the door behind them? Build the moat. Charge them each $500,000,000,000 (that’s 500 billion) for 10 years and let them go to work. Still, regulate the shit out of them. Let Apple make its 30%, but it can police the AI models and keep us safe. It has proven it can operate at that scale with the App Store. Really – who else are you going to trust? They earned it; let them have it. And let them pay for it. And let them hopefully make money off of it. It’s a system that works.

We hear it’s not fair to other entrepreneurs. Tough shit. Let them make the killer apps. Let’s put the grown-ups in charge. Really, just imagine for a second if the free-for-all we experienced with Bored Apes and Moonbirds and Binance happened in AI. Imagine the risks we would all be taking by putting that trust in the hands of greedy amateurs. Someone will win – let’s predetermine that it is someone we can trust.

Clearly, more details need to be worked out. That is welcome, even the disagreement. But we’ve yet to see an actual solution better than this, because history has shown us that a private market with regulation is the superior way to allow innovation while providing control, all while actually generating revenue (through licensing) for the government instead of using taxpayers to accomplish a far inferior solution.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

MORE
ON_AI

Boomers vs Juniors: Easy choice for emerging tech

Larry Muller
ON_Discourse Co-Founder, COO of Code & Theory

It’s a clear answer. Juniors (new workplace entrants) have been using technology their whole lives. It’s second nature to them. Of course they have a clear advantage. Build a team of young, energetic workers and you get great creativity, teamwork, and a natural depth of skill that has been honed since birth. A company filled with these go-getters should certainly have a leg up.

On the flip side, you can piece together a team of retreads who are hanging on in the workplace. They have never seen this particular technology before, and all tech seems to be a struggle for them. They are stuck in old ways of thinking and fall back on their old ways of solving problems. A company filled with these people must be destined to fail.

BUT

The potential of generative AI changes all of that. Its best ability is executing tasks. It can write code. It can write copy. It can create imagery. Easier and faster. You don’t actually need technical proficiency to execute almost anything, because the technology simplifies execution to simple semantic commands. The advancement is continuing to compound at a rate where it can execute more and more with proficiency and ease. 

What can AI not do? Use Judgment.

So is AI changing the age-old story where the older you get the less relevant you get? Who will really provide outsize value in a workplace defined by increasing automation and the need for great judgment?

The Boomers

Because of their years of experience and perspective

They:

  • Are thrilled about the opportunity.
  • Understand that they don’t have all the answers.
  • Have adapted to new technology many times in the past.
  • Are eager to mentor others and be good teammates. They know that it’s not about themselves.
  • Understand that the clock is ticking. They’re not trying to conquer the world – they simply want to help the team win.
  • Know it’s not about what they get, it’s about what they give.
  • Are a proven hard-working commodity. Sure, they may need a nap, but they will be back refreshed.
  • Have a perspective that drives creativity and humility, and most importantly, they have JUDGMENT.

The Juniors

Because of their lack of experience and perspective

They:

  • Think they know.
  • Think it’s about them.
  • Don’t have much of a clue about leadership and the importance of selflessness.
  • Are more worried about what they get, not what they give.
  • Don’t want feedback. They want to be right.
  • Have limited judgment, yet believe their judgment must be right.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know