Fediverse Deserves Your Notice, Even If You’re Not Using It

Ernie Smith
Editor of Tedium, a twice-weekly internet history newsletter, and a frequent contributor to Vice’s Motherboard.

Mastodon is known as a Twitter alternative for the technical, but its underlying protocol could help solve some of social media’s biggest headaches.

You might find it a bit uncomfortable to build your social presence on the digital frontier, which is perhaps why Meta’s Threads or Bluesky look a lot more attractive to brands than the fediverse—a loose connection of federated networks, most notably Mastodon, built on an open-source communications protocol called ActivityPub. Even LinkedIn might feel safer in comparison.

The social media pros I’ve talked to have suggested it feels like chaos, and not in a good way. That probably explains why brands haven’t made the leap unless their audiences are suitably technical. (My Linux jokes do really well there.)

Even though I’m a fan, I’ll be the first to admit that the fediverse doesn’t have the fit and finish of a network built for mass consumption. Still, with a little help from Twitter’s and Reddit’s recent troubles, it has attracted roughly 10 million total users and nearly 2 million monthly active users, according to FediDB.

While that pales in comparison to the estimated 100 million users on Threads, it’s well above Bluesky’s 1 million users and Post.news’ 440,000. (Threads, of course, benefited from Instagram’s existing network effects.)

That may not be enough to move your needle. But ignoring the potential of ActivityPub entirely is a mistake because services like it often shape the corporate world. It could be a way to control your brand’s digital destiny.

Most social networks follow a hub-and-spoke model, where users pull information from one centralized resource. Platforms on the fediverse instead run on servers that can talk to any other server on the network.
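That server-to-server model is easiest to see in ActivityPub’s actor objects: every account is a small JSON document published at a URL on its home server, which any other server can fetch and deliver activities to. A minimal sketch follows; the domain “social.example” and the user “brand” are hypothetical placeholders, not real accounts.

```python
import json

# A minimal ActivityPub actor document, roughly as a fediverse server
# might publish it. The domain and username are made-up examples.
actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Person",
    "id": "https://social.example/users/brand",
    "preferredUsername": "brand",
    # Other servers deliver posts, follows, and likes to this endpoint...
    "inbox": "https://social.example/users/brand/inbox",
    # ...and read this account's own activities from this one.
    "outbox": "https://social.example/users/brand/outbox",
}

print(json.dumps(actor, indent=2))
```

Because the document lives at a plain HTTPS URL, there is no central hub: any server that can make a web request can find the account and start federating with it.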

The effect is similar to early social networks like GeoCities, which relied on interest-based communities. Mastodon ups the ante by encouraging robust local servers with distinct local timelines that can still talk to the outside world. If you’re into anime, you can join a server with a robust anime community, but you don’t have to focus on anime exclusively. The real power of the fediverse, though, is that these servers can scale into something the size of Twitter while relying on a network of individual hosts who share the server bill and resources.

This has benefits from a moderation standpoint. Dealing with trolls on other social networks is a bit of a crapshoot, but joining smaller servers on Mastodon or competing platforms like Firefish allows you to tailor your experience accordingly.

Prefer free speech over heavy moderation? You can join a server like that. Want to join a more closed-in community instead? You can join a server using software with tighter controls, like Hometown, or join a Reddit-style community hosted on Lemmy.

Just want to use something that feels like Twitter? Hop on a larger server, like Mastodon.social. And if the trolls show up, server mods can block both them and their server—and so can end users.

And if you find that one server isn’t a fit for you, you can take your identity and your followers—but not, in most cases, your posts—to a new server with you.

All this can confuse folks used to centralized networks (i.e., most of us), but it’s a throwback to earlier internet distribution models. The fediverse model evokes Usenet, for example. Another common comparison is XMPP, an open protocol commonly used as an alternative to AOL Instant Messenger. And the account naming conventions closely follow the cues of email.

To put it all another way, ActivityPub is open architecture, like RSS. That kind of plumbing will become more valuable on the internet over time, even if it never reaches the scale of Twitter, X, or whatever Elon’s calling it today.

That puts it closer to IRC or old-school mailing lists, which are still used in many pockets, even if they aren’t quite mainstream. These old networks can still have commercial value—XMPP, once known as Jabber, has evolved into an essential communication protocol for the internet of things.

Unlike those, however, ActivityPub is riding a wave of developer buzz, which makes it likely that later generations of apps will support it natively. It has formal support from the World Wide Web Consortium. And it can plug into many kinds of media, so you can share content once and have it spread across many platforms, making it easier to support multiple networks with your content.

Admittedly, some challenges could dampen its uptake. While Meta is publicly interested in connecting Threads to Mastodon, many existing Mastodon users are understandably concerned that it might ruin the network, and there’s a chance that could scare Meta off. (Threads already has a huge user base, after all.) But others see federation as a way to strengthen existing networks: platforms like Tumblr, Flickr, Flipboard, and Medium are also interested in joining the fediverse.

If the fediverse does find a way to get past its cultural challenges with commercialism, it could solve a pair of problems that often slow down major brands online: building an audience and building trust.

Every time a new social network appears, brands waste time convincing their biggest advocates to follow them on the hot new thing, with no guarantee the network will be anything other than a flash in the pan. That uncertainty is why many brands have continued to focus on building traditional email lists.

Plus, there’s the whole matter of impersonation and verification, which Elon Musk has muddied the waters on and which remains a major problem for large companies. The fediverse has a much better solution than many proprietary networks: You can self-host your account and attach it to your domain, just as you might host a content hub, and have that account plug into the rest of the fediverse.
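The domain-based identity this describes runs on WebFinger: a handle like @brand@example.com maps to a well-known URL on that domain, which any fediverse server queries to find the account. A sketch of how that lookup URL is built; the handle below is a hypothetical example, and no network call is made.

```python
# How a fediverse server turns an @user@domain handle into the
# WebFinger (RFC 7033) lookup URL on that domain.
def webfinger_url(handle: str) -> str:
    """Build the WebFinger URL for an @user@domain handle."""
    user, _, domain = handle.lstrip("@").partition("@")
    return (
        f"https://{domain}/.well-known/webfinger"
        f"?resource=acct:{user}@{domain}"
    )

print(webfinger_url("@brand@example.com"))
# -> https://example.com/.well-known/webfinger?resource=acct:brand@example.com
# A real server would GET this URL and read the "links" array in the
# JSON response to find the account's ActivityPub actor URL.
```

Because the lookup always goes to the domain in the handle, an account hosted at your own domain is verifiable in a way a platform-issued checkmark isn’t: only someone who controls the domain can answer that request.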

A social presence hosted on a first-party website could naturally carry a level of brand association that a checkmark next to a username might lack these days. (If you don’t want to use your domain, Mastodon also has a very easy form of self-verification that works quite well, no $8 surcharge needed.)
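Mastodon’s self-verification works through reciprocal rel="me" links: your profile links to your website, and your website links back to the profile with rel="me"; when both sides match, Mastodon marks the link as verified. A minimal sketch of the website-side check using only the standard library; the page HTML and URLs are made-up examples, and a real check would fetch the live page.

```python
from html.parser import HTMLParser

# Collects rel="me" links from a page, the signal Mastodon looks for
# when verifying that a website vouches for a profile.
class RelMeParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rel_me_links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("a", "link") and "me" in (a.get("rel") or "").split():
            self.rel_me_links.append(a.get("href"))

# Hypothetical homepage HTML linking back to a hypothetical profile.
page = '<html><head><link rel="me" href="https://social.example/@brand"></head></html>'
parser = RelMeParser()
parser.feed(page)
print(parser.rel_me_links)
```

If the profile URL appears in that list, the verification checkmark shows up on the Mastodon profile, with no payment or platform approval involved.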

You could transfer your existing followers to another ActivityPub-compatible server, potentially speeding up ramp-up time to the next new thing.

Some examples of what this could look like are already in the wild—the European Commission, for example, has 85,000 followers on an account on its own dedicated server.

Suppose Threads and other networks plug into the fediverse. In that case, it’s not out of the realm of possibility that large companies might be able to build connections with their fans this way while avoiding some of the inherent risks that traditionally come with new networks.

The idea of “owning” your audience has been a somewhat foreign concept in the social media era. The fediverse could finally make it possible.

What does it mean to take your following with you?

It means you can move from one fediverse server to another without losing the people who followed you, as long as the two servers aren’t blocked from connecting to each other. Most fediverse servers can talk to each other, except in rare cases when they prefer not to.

What happens if your following isn't on the new server?

In those rare cases where a server blocks connections from another, and your followers are on that server, you won’t be able to reach them.

What would allow content to come with you or not?

While your followers can follow you to a new server, your posts stay behind on your previous server, though they remain accessible as long as that server stays online. As noted earlier, so long as the two servers can talk to each other, your old content remains reachable even after you move.

Can you join two servers? What happens if you are very interested in two servers?

You have to choose one home server, so you can only be primarily on one. You can still see content on the other server, and its users can see yours, as long as the two servers allow connections to each other.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

Good Artists Copy, Great AIs Steal.

You are invited to:

Exploring the forces of disruption in entertainment, media and tech

August 9th, 6pm-10pm
Water Mill, Hamptons, NY

Set in a modern-styled barn nestled within the farm fields of Bridgehampton, ON_Entertainment invites you to participate in Good Artists Copy, Great AI Steals, an evening driven by deep provocations that serve to unlock new perspectives in the business of entertainment.

Entertainment executives, technology founders, artists, entrepreneurs, and investors will come together for a captivating evening of off-the-record conversations, enhanced by the smooth allure of Komos Tequila and the perfect accompaniment of caviar.

To top it all off, an exquisite dinner awaits, artfully paired with the delightful and organically farmed Avaline wines, courtesy of founders Cameron Diaz and Katherine Power. It promises to be an unforgettable gathering of visionaries, indulging in both fine company and delectable flavors.


RSVP to dharika@ondiscourse.com 

Please reach out for travel and accommodation recommendations if you need them.

WHAT IS

ON_Discourse is a new membership media company
focused on the business of technology, prioritizing
expert-driven discourse to drive perspectives.

WHO?

Designed for executives, brands, celebrities, entrepreneurs, and business leaders at the intersection of technology and entertainment

YOUR
HOSTS

Dan Gardner
Co-Founder & Exec Chair,
Code and Theory
Co-Founder, ON_Discourse
Toby Daniels
Co-Founder, ON_Discourse
Brandon Ralph
Founder, The Unquantifiable

With thanks

to our partners

Threads: Who cares?

Dan Gardner
Co-Founder & Exec Chair,
Code and Theory
Co-Founder, ON_Discourse

All the talk about Threads would make you think it could be as transformative as ChatGPT.

The headlines read: “100 million sign-ups in 5 days.” Wow, this must be important! 

Podcasts are endlessly talking about it. Media trades can’t stop writing about it. Creative advertising agencies are already the experts on how to use it. Aren’t we lucky?

Let’s put the 100 million sign-ups into perspective. Justin Bieber still personally has more followers on Twitter than the entire Threads platform.

100 million is still a small percentage of total Instagram or even Facebook users, despite the frictionless sign-up. If you advertised a free service to over 2 billion people, is a 5% sign-up rate a success? Why is the media jumping all over this like it’s an indication of success or a change in behavior?

And most importantly, why would you use it if you care about text-based social networks and already have a proven platform that delivers? Is a dislike for Elon Musk the main driving force for change? And so they go to Meta?!

WHAT A WASTE OF TIME

Is this the new Apple vs. Microsoft passion war of the ‘80s and ‘90s? Doesn’t seem like it. At least with the Apple vs. Microsoft brand war there were product attributes or design tastes that were clearly different and better.

Threads has no clear behavior-driven product experiences that are new or superior. YouTube popularized community-generated content, Facebook allowed you to connect in new ways, WhatsApp allowed you to communicate in new ways, Twitter allowed you to discover content in new ways, Snap created ephemeral communication and sharing, TikTok created a new mobile-first, faster way to share and consume content, Pinterest allowed you to be inspired in a new way, and even more recently ChatGPT allowed you to create in new ways. Threads has no clear proposition that is different.

This feels like the media is talking to themselves. It’s meaningless to most people. 

And for advertisers, it’s meaningless to them as well. At least for now.

I can reach more people with Bieber or the Super Bowl if I wanted mass advertising with no targeting.

Will it pick up steam? Maybe. I’m not making a prediction either way. But right now it’s much ado about nothing. Just something to speculate and talk about. 

Big business is not doing anything differently because Threads exists today, but the mainstream and trade media would have you feeling differently.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

You're invited to:

Will AI Enhance or Destroy the Business of Media?

July 28th, 2023 
Virtual Event

12:00 pm- 1:00 pm ET

As an ON_Discourse member, we invite you to join a discussion that aims to deepen our understanding of AI’s impact on the business of media.

We will be joined by former tech reporter for the New York Times and now columnist at the Washington Post, Taylor Lorenz, trust editor for the New York Times, Edmund Lee, and co-founder of Vox Media, Trei Brundrett.

Gain insights into the future of AI-driven journalism, personalized content, disruptive distribution models, and the convergence of traditional and digital formats that set the stage for a media revolution.


RSVP to dharika@ondiscourse.com

Special

Guests

Taylor Lorenz
Tech Columnist,
Washington Post
Trei Brundrett
Co-Founder & Senior Advisor,
Vox Media
Edmund Lee
Trust Editor,
The New York Times

Why
         Attend?

We are Surrounded
by Fake-experts
_______Lacking Depth
in their Thinking

Ideas in our Industry are
_______Trapped within
Conventional
Boundaries

The People
_______in our Industry
Often Think
_______the Same

Unintended Consequences
_______in Tech Lead to
Costly Mistakes
_______in Business

Perspective
              is Everything.

ON_Discourse is a new membership media company
focused on the business of technology, prioritizing
expert-driven discourse to drive perspectives.

AI is Not a Public Utility

Anthony DeRosa
Head of Content and Product,
ON_Discourse
or Should Bored Apes Be in Charge of AI?

There are a number of reasons why handing de facto ownership of AI to any one or two companies is not only unfeasible but also a non-starter from both a regulatory and an antitrust perspective. Unfeasible because open-source AI models are likely to allow those with modest means to run their own AI systems. The cost of entry gets lower by the day. There’s no way to put that genie back in the bottle.

Charging someone to obtain an exorbitant “AI license” in the realm of several billion dollars seems clean and easy to execute, but how could it possibly be enforced? How would you find every AI system? What if they just ran the servers overseas?

Pushing AI innovation outside of the United States is exactly what adversaries would like to see happen. If you were to somehow find a way to limit the technology here at an infrastructure level, it would simply flourish in places with interests against our own. It would be a massive win for China if we were to abort an American-based AI innovation economy in the womb.

It’s not even clear what “controlling” AI would mean, since no one company can possibly own the means to utilize AI. It’s like trying to regulate the ability to use a calculator. You can’t put a limited number of companies in charge of “AI infrastructure” because there’s no reasonable way to limit someone’s ability to put the basic pieces in place to build one.

Thinking of AI as a public utility is incoherent. The defining characteristic of a public utility is that the means of distribution (the delivery lines) would not benefit from having multiple parties build them. Unlike utilities such as phone, power, and water, AI has no finite source and no limited number of ways to deliver it. There are many ways AI can be built for different purposes, and having few companies doing so is not a common good. Making that comparison is a misunderstanding of what AI is and how it works.

Putting government-controlled monopolies in charge of AI would create a conflict of interest for those parties, leading to, among other things, a privacy nightmare and a likely Snowden-like event in our future that reveals a vast surveillance state.

One might argue that we should at least limit large-scale AI infrastructure. As unworkable as that may seem, let’s interrogate the argument with the idea that Apple would “control” that business. Apple has a solid record of protecting consumer privacy, pushing back on law enforcement and government requests to access phone data. That trust would be shattered once it became an extension of the U.S. government by way of a granted AI monopoly, and its market dominance would likely plummet. It would be a bad deal not only for Apple but for consumers as well.

Some of the most potentially useful forms of AI are found in private LLMs, which have more refined, domain-specific, accurate data within a smaller area of focus. Doctors, scientists, and other specialists benefit greatly from this bespoke form of AI. Putting AI in the hands of one or two large companies would stifle innovation in this area. For those reasons alone, it’s unlikely to be considered on regulatory and antitrust grounds.

If we want to put safety around AI, there's a better and more realistic way to approach it.

The best way to deal with AI risks is through reasonable regulation. Independent researchers can document the potential risks, and laws can hold AI companies to account. It can happen at both the state and federal levels. Several states are already forming legislation based on the “AI Bill of Rights,” and other efforts are happening worldwide. Handing over control of AI to a few companies isn’t feasible, doesn’t make good business sense, and wouldn’t necessarily prevent the calamities it was intended to mitigate. Instead, we will need to be clear-eyed about the specific risks and meet them head-on, rather than expecting them to disappear because a few massive tech companies are in control of the technology.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

MORE
ON_AI

Monopoly or Oligopoly, Seriously? With an idea like that, we will soon be crowning Oligarchs.  

Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

Larry Muller
ON_Discourse Co-Founder, COO of Code & Theory

or Should Bored Apes Be in Charge of AI?

I do not trust Bored Apes, and they are not who should be regulating AI. But making Big Tech bigger by allowing them to be the regulators? That’s like wishing for bigger banks with more control. Ultimately, maybe not in the short term but in the long term, it ends badly. We already have a too-big-to-fail Big Tech problem, so we don’t have to double down on it.

BIG TECH

When we think of Big Tech, Apple or Microsoft and trust don’t go together. I’m pretty sure any time either of them (or the others) has an opportunity to monopolize and create unfair advantages through their ecosystem, they do: Maps, Browser, Messaging, App Store, etc. Ultimately, innovation gets stifled, reducing the incentive to be inventive when your path can be crushed immediately by the players who are the gatekeepers of distribution.

The smartest people from AI and Big Tech, including Sam Altman and others, have signed petitions and spoken to Congress about the need for regulation, but surprisingly those smart people also don’t have any tangible solutions (or they don’t actually want them?). How do we expect them to regulate if they can’t even propose a solution?

Should not

Ultimately, the cat is out of the bag. There will be too many spin-offs in too many countries that can’t be regulated, all working on the advancement of AI. The best we can do is work on next-generation cybersecurity and think of fallback safe switches that would allow a shutdown or disconnection of systems should AI fall into a bad actor’s hands (or, worse, out of any human’s hands).

Every sector has regulation, so the government should continue to develop innovation-friendly regulation. Government, although not perfect, is not driven by profit; it is built to serve and protect. Regulation should still sit in its hands, with elected officials who are held accountable to the people.

GET BIGGER

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

MORE
ON_AI

Should Bored Apes be in charge of AI?

Larry Muller
ON_Discourse Co-Founder, COO of Code & Theory
Dan Gardner
Co-Founder & Exec Chair, Code and Theory
Co-Founder, ON_Discourse

Do you believe in free markets? I do. I know a lot of people do. Competition drives innovation and fairness to the consumer. Regulation is stifling. It slows growth and free enterprise and prevents innovation.

Maybe.

Or maybe not.

or Big Tech Should Not Get Bigger

Who do you trust? Bored Apes? Bankman Fried? Kevin Rose? Facebook? TikTok? Should they all have an opportunity to participate in AI? Have they earned it? There are thousands of others I could put on the list. Not necessarily “bad” enterprises or “bad” people (some are). Just companies and people who have not earned our trust. Imagine the mess they would all create.

Two companies that have earned our trust are Apple and Microsoft. Over the past 30 to 40 years, they have performed through internet bubbles, financial crises, pandemics, recession, and inflation. They have satisfied the consumer in many ways and have not been accused of stealing (maybe from each other). You may not agree with the 30% vig on the Apple App Store, but it’s far from stealing. It has created safety and standards, which have provided trust at scale for consumers.

These are two trusted vendors to the world. Many startups became rich doing business with both. Even as we complain about their control and the unfairness of their systems, they have helped change and enrich the world. Why wouldn’t we want them to be the stewards of the AI revolution? Can you think of better companies to safely lead us?

They are like a better version of the cable companies of 30 years ago, when there was no competition but everything was priced fairly and pretty much everything worked. Yes, everyone “hates” the cable company, but the value exchange overall worked. Their service could always improve (and it has over the years); nothing is perfect. People complained about the rising costs, but it beats the cord-cutting world we have now, where we subscribe to multiple streaming services. Most people’s costs are higher now, with no unified bill or understanding.

Monopoly

Another example is the phone companies of the ’60s and ’70s, or the utilities. Boring, regulated, and licensed. One choice, no confusion about value, no scammy deals, no discounts rolling off, no being billed three times. Choice is not always better.

There is a lot of talk about AI regulation. You even have Sam Altman going to Congress and acknowledging the need. Some have argued he has openly asked Congress for it because he knows the government has no ability to do it under the current spectrum of thinking. AI’s speed of innovation is almost too fast for a governmental agency to stay relevant, with all the top-paying jobs going to the private sector. Government is typically inefficient in its approach and mired in challenges the private sector doesn’t have. So what is the balance between government control and regulation and the freedom of the capital market that has spawned such innovation? A license to Big Tech, similar to the cable, phone, and utility companies mentioned above. It’s not perfect, but it has worked.

or Oligopoly

Why not sell them a license to run the AI infrastructure and close the door behind them? Build the moat. Charge them each $500,000,000,000 (that’s 500 billion) for 10 years and let them go to work. Still, regulate the shit out of them. Let Apple make its 30%, but they can police the AI models and keep us safe. They have proven the scale with the Apple App Store. Really, who else are you going to trust? They earned it; let them have it. And let them pay for it. And let them hopefully make money off of it. It’s a system that works.

We hear it’s not fair to other entrepreneurs. Tough shit. Let them make the killer apps. Let’s put the grown-ups in charge. Really, just imagine for a second if the free-for-all we experienced with Bored Apes and Moonbirds and Binance happened in AI. Imagine the risks we would all be taking, putting that trust into the hands of greedy amateurs. Someone will win; let’s predetermine that it’s someone we can trust.

Clearly, more details need to be worked out. That is welcome, even the disagreement. But we’ve yet to see an actual solution better than this, because history has shown us that a private market with regulation is the superior way to allow innovation while providing control, all while actually generating revenue (through licensing) for the government instead of using taxpayers to fund a far inferior solution.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

MORE
ON_AI

Boomers vs Juniors:
Easy choice for emerging tech

Larry Muller
ON_Discourse Co-Founder, COO of Code & Theory

It’s a clear answer. Juniors (new workplace entrants) have been using technology their whole lives. It’s second nature to them. Of course, they have a clear advantage. You build a team of young energetic workers, you get great creativity, teamwork, and the natural depth of skill that has been honed since birth. A company filled with these go-getters should certainly have a leg up.

On the flip side, you can piece together a team of retreads who are hanging on in the workplace. They have never seen this particular technology before, and all tech seems to be a struggle for them. They are stuck in old ways of thinking and fall back on their old ways of solving problems. A company filled with these people must be destined to fail.

BUT

The potential of generative AI changes all of that. Its best ability is executing tasks. It can write code. It can write copy. It can create imagery. Easier and faster. You don’t actually need technical proficiency to execute almost anything, because the technology reduces execution to simple semantic commands. The advancement continues to compound at a rate where it can execute more and more with proficiency and ease.

What can AI not do? Use Judgment.

So is AI changing the age-old story where the older you get the less relevant you get? Who will really provide outsize value in a workplace defined by increasing automation and the need for great judgment?

The Boomers

Because of their years of experience and perspective

They:

  • Are thrilled about the opportunity.
  • Understand that they don’t have all the answers.
  • Have adapted to new technology many times in the past.
  • Are eager to mentor others and be good teammates. They know that it’s not about themselves.
  • Understand that the clock is ticking. They’re not trying to conquer the world – they simply want to help the team win.
  • Know it’s not about what they get, it’s about what they give.
  • Are a proven hard-working commodity. Sure, they may need a nap, but they will be back refreshed.
  • Have a perspective that drives creativity, humility, and most importantly they have JUDGMENT.

The Juniors

Because of their lack of experience and perspective

They:

  • Think they know.
  • Think it’s about them.
  • Don’t have much of a clue about leadership and the importance of selflessness.
  • Are more worried about what they get, not what they give.
  • Don’t want feedback. They want to be right.
  • Have limited judgment, however, they believe their judgment must be right.

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

It’s Game Over for Volunteer-Driven Social Media

Reddit’s blackout issue points to a chasm between its ownership and its volunteer moderators. It also points to the fact that those moderators, like other prominent social media users, should be paid.

Ernie Smith
Ernie Smith is the editor of Tedium, a twice-weekly internet history newsletter, and a frequent contributor to Vice’s Motherboard.

There’s probably no stronger sign that social networks’ tendency to lean on the free work of their users was a losing strategy than the drama happening at Reddit.

You’ve probably heard about the saga that emerged after Reddit decided to begin charging for its API—for one, claims that popular app creators would be on the hook for bills in the tens of millions of dollars. But in many ways, the real story is how quickly the site’s own volunteer moderators turned on the network, leading to a blackout with extended impacts and a CEO who has responded defiantly to his users.

These moderators largely aren’t talking about the blackout in financial terms. But perhaps they should be. In a recent interview with NBC News, CEO Steve Huffman implied that the problem was that the leaders of these communities had too much power over how they were run, even though they are doing so on a volunteer basis.

“The people who get there first get to stay there and pass it down to their descendants, and that is not democratic,” Huffman said in reference to long-time moderators of the platform.

Huffman’s comments hint at something that I have noticed from many social networks over the years: A failure to see what leaders of online communities do as worthy of compensation or an equity stake. It’s a structural issue, one that appears to have existed from the start, but has reentered the public conversation recently.

Those who have been watching closely, however, might have seen signs of this problem simmering beneath the surface. Since 2020, Reddit has been legally entangled with the founder of a popular subreddit, r/wallstreetbets. Jamie Rogozinski, along with other moderators, built the group to high-profile mainstream success, but when he took steps to commercialize the group—selling a WallStreetBets book and filing for a WallStreetBets trademark for merchandising, media, forums, and entertainment—Reddit booted him from the group, claiming what he was doing wasn’t allowed.

Reddit then formally opposed Rogozinski’s trademark filing on the grounds that it owned the community and that the trademark would create confusion in the market. Rogozinski sued Reddit in early 2023, claiming that Reddit’s terms of service, which it says Rogozinski violated, effectively make it possible for the company to steal the intellectual property of its users.

“My real issue stemmed from trying to claim ownership over my creation,” he wrote in an IAmA thread. “Reddit systematically takes intellectual property from its users by registering trademarks, and I posed a threat to this.”

In effect, the WallStreetBets creator—whose subreddit directly inspired a forthcoming Hollywood movie starring Paul Dano and Pete Davidson—is challenging the legality of a policy that, in Reddit’s view, favors the power of the crowd over the work of the individual creator.

Rogozinski’s legal action is part of a long legacy of lawsuits by volunteers who felt their relationship with a tech company had crossed the line into work.

What Reddit Has in Common With AOL

In the early boom years of the internet, between 1990 and 2000, America Online convinced thousands of volunteers to take on various support roles to help keep its landmark online service running.

At first, users received free accounts and access hours in exchange for this work, which developed in the same spirit of community as other online services like The WELL and CompuServe, as well as later communities like Reddit.

These roles, like on Reddit, were initially seen as collaborative. But as AOL grew larger and more dominant, and its valuation soared, the program grew increasingly exploitative. That led to claims the company was running, as Wired put it, a “cyber-sweatshop.”

Those claims turned into lawsuits, both individual and class-action, that ultimately took a decade to resolve. By 2005, after years of negative press, AOL no longer had a volunteer program. A couple of years later, the company paid a $15 million settlement to its former army of volunteers.

A more recent parallel also involves a company AOL at one point owned, The Huffington Post, which used a free contributor model for years. It, too, faced lawsuits over the matter—and it, too, shut down the model. When AOL bought HuffPost, observers pointed out the parallels between the news outlet and AOL’s own volunteer army.

There’s a cultural line that networks cross when something turns from volunteering into working for free. AOL crossed it, and with the recent conflict, Reddit likely has too. The company could find itself in similar hot water with its moderators, especially if Huffman makes good on his threats to boot some of them out.

Paid Creators & Moderators, Not Volunteers

Generally, I strongly favor social networks building models that support the contributors who play important roles in their networks, whether prominent creators, popular influencers, or moderators. Sadly and frustratingly, many do not.

All commercial social networks should offer tools for creator monetization; those that don’t are failing at a core part of the job. When monetization isn’t baked into a network’s DNA, you gradually see problems like the ones Reddit is currently facing, where the goals of the user base come into conflict with what ownership wants. If moderators received some kind of direct financial support, the odds of a standoff like this would drop.

Compare this to YouTube, a platform founded just four months before Reddit, which has a thriving community in part because creators are compensated for the success they bring.

You might think that YouTube and Reddit are apples and oranges, but the work of a YouTube channel operator is in many ways similar to that of a Reddit moderator: keeping a close eye on comments, setting an overall vibe, and developing content that the community then reacts to. The main difference is that Reddit moderators generally don’t make videos.

Now, Reddit could do things like share banner ad revenue with moderators, offer tipping functionality, or let moderators paywall content, but unlike creator platforms such as Substack or Patreon, it doesn’t. (As recent events have shown, though, moderators can still get fired.)

And yes, creators do care about this: When the Twitter-owned Vine refused to pay its largest creators and didn’t solve longstanding product issues, many of them went to YouTube—likely killing Vine in the process.

Future social networks may be less likely to make the mistake Reddit is making. Earlier this year, I talked to Nico Laqua, the CEO of a network called Picnic, a Reddit-meets-TikTok site that is growing in popularity with teenagers. He specifically cited the r/wallstreetbets dispute in choosing to build a model where moderators hold fractional ownership of, and share revenue from, the groups they build and support.

“We want to take the exact opposite approach—the YouTube approach, where we not only share advertising revenue with the communities but allow them to own and govern themselves in whatever way makes sense,” he told me.

If Picnic someday finds itself picking up some of Reddit’s users, this design structure could make all the difference.

When a social network refuses to compensate its best users, it can all too easily turn into an us-against-them fight—and it can poison the community for good.

Moish E. Peltz, Esq.
Partner, Falcon Rappaport & Berkman
Intellectual Property Law and AI: The Future of Athlete Branding

The rapid technological advancements in artificial intelligence are reshaping numerous industries, with sports and athlete branding emerging as a potential area of transformation. However, burgeoning AI technology raises compelling legal and intellectual property issues for athletes, law practitioners, and other stakeholders.

Traditionally, professional athletes have been able to commercialize and protect their name and brand. More recently, since the U.S. Supreme Court’s June 2021 ruling in NCAA v. Alston, even amateur collegiate athletes have increasingly benefited from this same concept, after gaining the ability to monetize their name, image, and likeness (NIL).

Now with the advent of powerful consumer AI tools, all athletes have the means to use their brand and likenesses to generate new revenue streams. Examining past experiences in licensing likeness can shed light on these opportunities and concerns. 

Consider the music industry. In April, artists Drake and The Weeknd suddenly had a new hit collab titled “Heart on My Sleeve.” The song had hundreds of thousands of streams on Spotify and Apple Music. But, as it turned out, the song was an AI-created knockoff. Neither of the artists had actually recorded the music.

In response, Universal Music Group, the record label for both artists, swiftly moved to have the song removed from major streaming platforms, and went a step further, asking the platforms to prevent AI companies from using its catalog to train generative AI tools. By contrast, another artist, Grimes, moved to “open-source” her voice using AI tools, allowing fans to keep 50% of the royalties generated using her newly created GrimesAI-1 voice generation platform.

A recent television commercial featuring an AI-generated young Charles Barkley demonstrates another potential use of AI-created content in advertising. Sports betting company FanDuel created an authorized version of a young Charles Barkley in his NBA prime to star in a television ad opposite the present-day retired NBA player-turned-announcer, with hilarious results. Here, the laws that govern the commercial use of an athlete’s likeness could apply, but new stipulations addressing AI applications would be crucial.

And then consider the video gaming industry, where athletes have licensed their images for use in popular games like FIFA or Madden for decades. Recent AI technology promises to let gamers interact with avatars generated in real time, whose in-game voices and faces respond to a gamer’s input, such as questions.

Given that we have already seen popular social media personalities that are entirely virtual, such as Miquela or FNMeka, it is not difficult to imagine interacting in real time with your favorite athlete, perhaps even in a future version of FIFA or Madden.

Generative AI can already help create engaging avatars by simulating an athlete’s tone in social media posts or in interactive chatbots. The possibility of personalized fan engagement through these kinds of AI-generated means could bolster an athlete’s brand in unprecedented ways.

This arrangement is potentially beneficial for many in the industry. Gaming companies can release more engrossing games. Gamers could get a more interactive experience with their favorite players, albeit AI-generated versions. Athletes and leagues stand to earn more revenue and establish a more personalized connection with their fans.

However, as athletes enter this new era of branding, they will want to consider the legal implications, using the current landscape as a guidepost for licensing AI-generated use cases. For example, the extent to which AI-generated works are protected by US copyright law, if at all, is now in question, and many other legal questions remain open. Such a world would require athletes, teams, leagues, agents, lawyers, and the industry writ large to reimagine already complex contractual agreements to accommodate AI technologies.

Some bedrock principles still apply. AI’s potential impact on personal branding in sports could be colossal, but athletes must still take a proactive legal approach as they build, protect, and grow their brand, secure any relevant IP rights, and operate with a holistic, long-term outlook as they seek to monetize their NIL rights.

Moreover, it’s crucial to consider the potential legal risks to an athlete’s brand. Poorly managed AI technology can dilute brand value and water down the fan experience. As AI tools become more sophisticated, bad actors could create unlicensed deepfakes of athletes, leading to misuse, misrepresentation, or outright fraud. These realities underscore the importance of robust IP laws that adapt to technological change, and of athletes and their advisors knowing how and when to pursue legal remedies.

Athlete branding will evolve along with the next generation of AI technology, and so existing IP laws must evolve to address the complex challenges that might arise. While there is no doubt that AI holds exciting prospects for athletes, it also opens a potential Pandora’s box of legal issues. Athletes, alongside their advisors, lawyers, and agents, will need to navigate this uncharted territory with foresight and diligence, ensuring the protection of their likeness in the evolving landscape of AI technology.

This article is for informational purposes only and should not be construed as legal advice. Feel free to reach out to Moish Peltz, Intellectual Property Practice Group Chair at Falcon Rappaport & Berkman LLP, with specific questions at mpeltz@frblaw.com.
