You're invited to:

Playing Business

An Exclusive ON_Discourse Members-Only Dinner (and After Party)

June 22nd, 2023
Le Maschou, Cannes

Dinner: 7:00 pm–10:00 pm

After Party: 10:00 pm–2:00 am

During Cannes Lions, ON_Discourse will host four days of programming in partnership with Stagwell’s Sport Beach. The week culminates with a private dinner and after-party at Le Maschou. RSVP to memberships@ondiscourse.com

Special Guests

Kyle Martino
Former Professional Soccer Player
Mack Hollins
NFL Athlete
DeShone Kizer
Former NFL QB
Spencer Dinwiddie
NBA Athlete and Entrepreneur

Vanita Krouch
‘23 NFL Pro Bowl NFC Offensive Coordinator
Diana Flores
Mexican Flag Football Team Quarterback and Captain
James Worthy
TV Commentator and former NBA Athlete
Conrad Anker
American Rock Climber, Mountaineer, and Author

Performance by World-Renowned DJ and Entrepreneur MICK

What?

ON_Discourse is a new membership media company focused on the business of technology, raising the level of discourse with expert-driven perspectives.

We provide member-only content for those who crave substance, and closed-door events where you can ditch the small talk.

During Cannes Lions, Stagwell’s Sport Beach will bring together creatives, brands, marketers, athletes, coaches and leagues to discuss the future of fandom, and celebrate the impact sport has on shaping global culture.

Why Attend?

We are Surrounded
by Fake-experts
_______Lacking Depth
in their Thinking

Ideas in our Industry are
_______Trapped within
Conventional
Boundaries

The People
_______in our Industry
Often Think
_______the Same

Unintended Consequences
_______in Tech Lead to
Costly Mistakes
_______in Business

Can I Join?


The ‘Playing Business’ dinner is exclusively for ON_Discourse members. Premier members receive an invite to the dinner and the daily closed-door sessions, as well as a complimentary helicopter flight from Cannes to Nice with our partner BLADE. For more information, email memberships@ondiscourse.com

The Open-Source AI Revolution

Dylan Patel
Semiconductor Analyst

In the late 20th century, the technology world witnessed a seismic shift as open-source Linux rose to prominence, challenging the dominance of proprietary operating systems from the era’s tech giants. Today, we are on the cusp of a similar revolution in the realm of AI, as open-source language models gain ground on their closed-source counterparts, such as those developed by Google and OpenAI.

In the 1990s, the UNIX ecosystem was dominated by proprietary solutions from major players like Sun Microsystems, IBM, and HP. These companies had developed sophisticated, high-performance systems tailored to the needs of their customers, and they maintained tight control over the source code and licensing. However, Linux, an open-source operating system created by Linus Torvalds, started gaining traction, ultimately disrupting the market.

The Linux revolution was propelled by three key factors: rapid community-driven innovation, cost-effectiveness, and adaptability. By embracing a decentralized development model built on the x86 personal computer, Linux empowered developers worldwide to contribute to its growth. This allowed it to evolve more quickly than its rivals and adapt to a diverse range of applications. Furthermore, Linux’s open-source nature made it significantly more cost-effective than proprietary alternatives, which relied on expensive licensing fees.

Fast-forward to the present day, and we are witnessing a similar upheaval in the AI landscape. The past two months have seen open-source AI projects such as EleutherAI’s GPT models, Stanford’s Alpaca, Berkeley’s Koala, and Vicuna make rapid strides, closing the gap with closed-source solutions from giants like Google and OpenAI. Open-source AI models have become more customizable, more private, and pound-for-pound more capable than their proprietary counterparts. Their adoption has been fueled by the advent of powerful foundation models like Meta’s LLaMA, which was leaked to the public and triggered a wave of innovation.

The Linux saga offers important lessons for the AI community, as the similarities between the rise of Linux and the current open-source AI renaissance are striking. Just as Linux thrived on rapid community-driven innovation built on the backs of x86 PCs, open-source AI benefits from a global pool of developers and researchers who build upon each other’s work collaboratively on the backs of gaming GPUs. This results in a breadth-first exploration of the solution space that far outpaces the capabilities of closed-source organizations.

Another parallel is the cost-effectiveness of open-source AI. Techniques such as low-rank adaptation (LoRA) have made it possible to fine-tune models at a fraction of the cost and time previously required. This has lowered the barrier to entry for AI experimentation, allowing individuals with powerful laptops to participate, driving further innovation.
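To make the cost argument concrete, here is a minimal, self-contained sketch of the low-rank adaptation idea; it follows the general LoRA recipe rather than any particular library’s API, and the dimensions are illustrative. Instead of updating a full d_out × d_in weight matrix, LoRA freezes it and trains two small factors whose product approximates the update.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    # W is the frozen pretrained weight (d_out x d_in).
    # Only the low-rank factors A (r x d_in) and B (d_out x r) are trained;
    # their product B @ A acts as the learned update to W.
    return x @ W.T + alpha * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 1024, 1024, 8          # illustrative sizes, rank r << d

W = rng.standard_normal((d_out, d_in)) * 0.01  # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01      # trainable down-projection
B = np.zeros((d_out, r))                       # zero init: adapter starts as a no-op

x = rng.standard_normal((2, d_in))
y = lora_forward(x, W, A, B)

full_params = d_out * d_in              # what full fine-tuning would update
lora_params = r * (d_in + d_out)        # what LoRA updates instead
print(f"trainable params: full={full_params:,} lora={lora_params:,} "
      f"({100 * lora_params / full_params:.2f}%)")
```

Because B starts at zero, the adapted layer is initially identical to the frozen one, and training only nudges it away from the pretrained behavior; at rank 8 the trainable update here is under 2% of the full weight matrix, which is the source of the cost savings.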

Moreover, open-source AI models are highly adaptable. The same factors that make them cost-effective also make them easy to iterate upon and customize for specific use cases. This flexibility enables open-source AI to cater to niche markets and stay abreast of the latest developments in the field, much like Linux did with diverse applications.

The implications of this open-source AI revolution are profound, especially for closed-source organizations like Google and OpenAI. As the quality gap between proprietary and open-source models continues to shrink, customers will increasingly opt for free, unrestricted alternatives. The experience of proprietary UNIX-based systems in the face of Linux’s rise serves as a stark reminder of the perils of ignoring this trend. In image generation, this has already happened: OpenAI’s DALL-E and Google’s various closed models are barely part of the discussion, as the world has flocked to open Stable Diffusion models.

To avoid being left behind, closed-source AI organizations must adapt their strategies. Embracing the open-source ecosystem, collaborating with the community, and facilitating third-party integrations are crucial steps. By doing so, these organizations can position themselves as leaders in the AI space, shaping the narrative on cutting-edge ideas and technologies. Companies like Replit, MosaicML, Together.xyz, and Cerebras are doing just that: releasing open-source models while selling hosting, fine-tuning, and operations as a service.

The implications of this open-source AI revolution are profound, especially for closed-source organizations like Google and OpenAI. As the quality gap between proprietary and open-source models continues to shrink, customers will increasingly opt for free, unrestricted alternatives.

The flip side of the argument is that this is only possible for a certain model size. There are many emergent behaviors that have only been witnessed on the largest models. While open-source AIs that are an order of magnitude smaller than GPT-3 have already surpassed GPT-3’s quality, this does not necessarily apply to models of the scale of GPT-4 and beyond. With continued scaling in sequence length, parameter count, and training data set sizes, it is possible the gap between open-source and closed-source widens again.

Furthermore, while the models themselves are free to use, the services built on top of them still require significant investment. Google, Microsoft, and Meta are able to build these closed-source services for use in people’s everyday lives because of the moat of their platforms. Lastly, the cost of inference is a significant barrier: most consumer devices lack the horsepower for models larger than 7 billion parameters (GPT-3 is 175 billion parameters; GPT-4 is reportedly over 1 trillion), and it is possible that only the largest organizations can afford to scale a model out to billions of users.
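The device-memory point can be checked with back-of-the-envelope arithmetic: just holding the weights takes parameter count times bytes per parameter, before any activations or caches. The figures below are rough estimates under those simplifying assumptions.

```python
def weight_memory_gib(params_billion, bytes_per_param):
    # Memory needed to hold the weights alone, in GiB;
    # real inference needs extra room for activations and the KV cache.
    return params_billion * 1e9 * bytes_per_param / 1024**3

# fp16 = 2 bytes per parameter; 4-bit quantization = 0.5 bytes per parameter
for name, p in [("7B open model", 7), ("GPT-3 (175B)", 175)]:
    for fmt, b in [("fp16", 2), ("int4", 0.5)]:
        print(f"{name} @ {fmt}: ~{weight_memory_gib(p, b):.0f} GiB")
```

A quantized 7B model (roughly 3 GiB of weights) squeezes onto a gaming GPU or a beefy laptop; a 175B model at fp16 needs on the order of 300 GiB, which is datacenter territory.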

Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know

Against AI "art"

Molly Crabapple
Artist and writer in New York City. Her work was nominated for three Emmys and is in the permanent collection of the Museum of Modern Art.

Like most illustrators, I’ve been horrified by the rise of AI image generators, like DALL-E, Midjourney, and Stable Diffusion. These systems are trained on billions of images obtained without the knowledge, compensation, or consent of their creators, and churn out passable-looking images in seconds, for pennies or for free. Since they are faster and cheaper than any human illustrator can be, these generators threaten to destroy the industry that I have devoted my life to. Worse, they are using our own stolen images against us. My work, like that of many other illustrators, is stored in the LAION-5B database, which the billion-dollar corporation Stability AI uses to train its generator. Even more alarmingly, DALL-E, owned by the multi-billion-dollar OpenAI, can create ersatz versions of my work if you type “drawn in the style of Molly Crabapple.”

One might argue that the rise of AI image generators is only a problem for illustrators like me. But what about you, editor, art director, or publisher? Why should you forgo something so convenient? Sure, generative AI threatens mass unemployment for millions of people, far beyond the illustration field, but appeals to ethics and solidarity won’t stop you from using it any more than it stopped you from using Amazon or Uber or AirBnB. You need stronger stuff.

Well, here are some reasons why you should avoid using AI art generators, even if the future of illustrators does not concern you in the slightest.

Reason One:
Lawsuits

There are currently two major lawsuits against the corporations that make the major image generators.

In January, three artists launched a class action lawsuit against Stability AI (the company behind Stable Diffusion), Midjourney and DeviantArt, alleging mass copyright infringement.

In April, stock image company Getty launched another lawsuit against Stability AI for 1.8 trillion dollars for scraping their entire archive (the theft was so obvious that generators spat out images with Getty’s mangled watermark).

Many more lawsuits will undoubtedly follow – especially in the EU, where privacy restrictions are stricter than in the US. Why is this relevant to you?

The terms of service at many generators make users liable for copyright violations in the images they generate. In Midjourney’s words: “If you knowingly infringe someone else’s intellectual property and that costs us money, we’re going to come to find You and collect that money from You.” 

Reason Two: Blowback

I am but one of many who are strongly opposed to AI art generators. When I co-released an open letter calling for their restriction in publishing, thousands of people from every continent signed, including MSNBC host Chris Hayes, actor John Cusack, and author Naomi Klein. This opposition extends from organized professional groups to unaffiliated art lovers, but it is passionate and it is growing.

One might argue that the rise of AI image generators is only a problem for illustrators like me. But what about you, editor, art director, or publisher?


Any use of AI-generated work is likely to inspire a loud and persistent backlash. Look at Amnesty International, which tried to use photorealistic images from Midjourney to illustrate an article commemorating the second anniversary of Colombia’s national strike. The criticism was so harsh that it was forced to pull the images and issue an apology. Entities from Netflix Japan to Amsterdam’s Rijksmuseum to the Bradford Literary Festival have all faced fury for using AI-generated images. The point of an illustration is to make you look good. Why use something that will do the opposite?

Reason Three: Sameness

It’s been mere months since AI art generators exploded, but that’s already long enough for them to have developed certain stylistic tics. There’s the gelatinous smoothness of the skin. Profusion of fingers and teeth. Limbs that go nowhere. Above all, there’s the sameness. An image generator cannot create – it can only spit out pastiches of the art it stole; and if these generators become ubiquitous, the quantity of human art will be dwarfed by oceans of AI-excreted schlock, which the generators will train on and spit out versions of, in an algorithmically-enabled ouroboros of visual mediocrity. 

If you use images from a generator, you are using images that look like the images posted by every other yahoo who uses a generator… including your competitors. You give up the very advantage that art is supposed to give you… the ability to be visually unique.

So there you have it – three unemotional, business-wise reasons why you, my hard-nosed reader, should stay away from AI image generators.

Having defined what I’m against, I now want to talk about what I’m for. Even more than an artist trying to save her field, I am what the digital theorist Douglas Rushkoff calls Team Human – in that I believe that technological developments must be aligned with human values, rather than treated as some sort of manifest destiny, to be imposed upon us regardless of the costs.

While generative AI might be coming for us illustrators, it is coming for you as well. Anyone who translates, writes, sings, codes, conceptualizes, or creates is in danger of being chucked out of work by robots trained on their own body of work. This also goes for the people, like you, whose job it is to commission us.

We have one shot to resist Silicon Valley’s plagiarism bots. Let's take it. 


Do you agree with this?
Do you disagree or have a completely different perspective?
We’d love to know