
The Greatest Theft in the History of the World—Brought to you by OpenAI

AI takes trends from quintillions of bytes of blogs, tweets, Reddit threads, and uploaded images and makes something appear new, but—this is really important—the result can only ever be a derivative, or a combination of derivatives, of other work.

Daniel Velez
14 min read

AI is all the rage these days. Specifically, the “generative” AI from a company called OpenAI, which has two services: ChatGPT, which is text-based, and DALL-E 3, which is image-based. You can ask ChatGPT to write a summary of Adam Smith’s economic views in the style of J.K. Rowling; and, for shits and giggles, you can ask it to replace all mentions of men with muggles, and also make sure it’s in three tight paragraphs please. You can ask DALL-E 3 to make a Star Wars movie poster with actors from Seinfeld and make the backdrop similar to the latest Dune movie. Both of these services create shockingly accurate results.

(I’ve asked my friend to use AI to create the ChatGPT and DALL-E 3 examples I gave above as well as an AI version of this article. They can be found in the Appendix.)

The sophistication of generative AI—which many regard as a breakthrough moment—has led to quick and widespread adoption. Microsoft has incorporated ChatGPT into their search engine Bing; Adobe has created an AI for their flagship Photoshop app; and seemingly every app update includes an integration with AI (and a sneaky update to their privacy policy). Some say these new services will birth a utopia; others scream of armageddon. As with all technology, there will be some good and some bad unintended consequences.

Writing an essay for a college assignment has never been easier. Just type the essay prompt into ChatGPT, then copy and paste the result. You could write a 500-word essay or a 50,000-word novel in the same amount of time—and with the same minimal effort. Is it cheating to have somebody write an essay for you? Yes. What if that somebody was a robot? Also yes. But, isn’t spell check a robot too? That’s allowed, so why isn’t ChatGPT? Interesting point, but still—no. Spell check and ChatGPT are entirely different tools. It’s like a baseball player having a cup of coffee before a game versus six syringes of anabolic steroids.

I work on a college campus and see rampant use of ChatGPT for college assignments.

Arizona State University’s policy on AI encourages their professors to “… determine whether student use of generative AI/ChatGPT in their courses is permitted or prohibited and to state this and any parameters in your syllabi, announcements, and assignment instructions.” It also goes on to say, “The accuracy of AI detection tools is not reliable.”

Best believe that these college kids are going crazy with AI. As with any innovation, there will be a period of exploitation as rules, regulations, and laws take shape.


I don’t think generative AI is legal. The key lies not in what these machines generate but in how.

These AIs are swallowing the entirety of the internet, digesting and synthesizing with a speed that would challenge our all-seeing God, and then spitting out—or “generating”—what appear to be novelties. But they can never be novelties, because computers aren’t coming up with new ideas—that would require agency. If I put these AIs in an isolated room without internet, they will not work. They require god-level amounts of data to feed their machine learning algorithms. They take trends from quintillions of bytes of blogs, tweets, Reddit threads, and uploaded images and make something appear new, but—this is really important—the result can only ever be a derivative, or a combination of derivatives, of other work.

And if AI outputs can only ever be derivatives of other work, how are the original creators being compensated? Or, if they didn’t give permission to use their work, how are they being protected?

Before I begin, allow me to dispose of the argument that AI is being inspired by other works the same way a human would be, just faster. Again, this is a question of how. I can watch Dune at the movie theater and leave so inspired by the film that I start writing a dystopian novel the next day. That’s totally fine; even encouraged. But how does a computer watch Dune? It cannot. It has to copy the video file to its database. That isn’t watching; it’s copying. And how did the AI get access to the Dune movie? It surely didn’t pay for a ticket, much less the rights to the film. I can watch Dune when it comes out on Netflix, so why can’t an AI do that? Having a computer stream a show is surely against Netflix’s terms and conditions. It’s probably against spam laws. How many times a show or movie is watched affects contracts and algorithms. Having computers—which advertisers and filmmakers don’t care about—alter viewing numbers isn’t allowed; in the same way that I can’t release a record on Spotify and run one million virtual computers to play my song 24/7 for the rest of my life to enrich myself—that’s against the terms and conditions and obviously fraud.

AI can’t be inspired; it can only manipulate.

Now, there are other ways to consume movies without paying to watch them: the ways movies permeate our culture. People talk about them; movie studios make posters, billboards, and commercials; critics write reviews; your friends make memes. But your friend taking a still shot of a scene in the movie and writing something funny underneath is a violation of copyright law; it’s just a matter of whether the company wants to send your friend or the social media platform a cease-and-desist letter or not. For the most part, companies don’t do that for tweets. So in this case, can the AI take all of the non-film data and build its “knowledge” that way? Even this is troublesome.

Commercials, movie posters, and critics’ reviews are still protected. But how about a public tweet from your friend who posted the full three-minute trailer? We know this is against the law but rarely enforced. How is the AI to know whether this video was created by your friend or not? If the AI adds this video to its database, it has now, unwittingly, stolen the work. The AI can’t decipher whether something is original or stolen unless it has some access to the original. Maybe there is a way to embed invisible-to-the-human-eye watermarks in protected videos that signal to AIs that they can’t download them. Even with that, what if your same friend posts the video and slightly doctors it, or takes a video of the video with a shitty camera that can’t pick up the watermark?

The internet is too grand to be effectively policed, although, interestingly enough, AI has helped. For example, YouTube will take down videos with copyright violations. YouTube has access to the originals—through agreements with media companies—so it can detect whether something is original or not. However, YouTube is one platform, or in other words, one internet jurisdiction, whereas AI can take anything and everything from the internet—it has no jurisdiction. The only reasonable way to police what AI can take is to police the company that owns and operates the AI. It’s reasonable to ask one company to police content their users are posting on their service (although Big Tech has been hesitant to do so); it’s another thing to police all of the internet.
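To make the detection idea concrete, here is a minimal sketch of fingerprint matching, the general technique behind checks like YouTube's: the platform holds a reference copy, computes a compact perceptual hash of it, and flags uploads whose hashes are close. The filenames and the distance threshold below are hypothetical, and this is my own toy illustration, not YouTube's actual Content ID system.

```python
# Toy illustration of fingerprint matching against a known original (my own
# sketch, not YouTube's actual system). Requires Pillow; filenames are hypothetical.
from PIL import Image

def average_hash(path, size=8):
    """Shrink to a size x size grayscale image, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count how many bits differ between two fingerprints."""
    return bin(a ^ b).count("1")

# The platform has the original; someone uploads a re-encoded or lightly doctored copy.
original = average_hash("studio_trailer_frame.png")   # hypothetical reference file
upload = average_hash("friend_repost_frame.png")      # hypothetical upload

# A small distance suggests the upload is a copy of the protected original.
if hamming_distance(original, upload) <= 10:          # threshold chosen arbitrarily
    print("Likely match with a protected original")
```

The catch still applies, though: matching only works when someone holds the originals to match against, which is exactly what a crawler vacuuming up the open internet does not have.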

A group of prominent authors has already sued OpenAI for using their works without permission or compensation. They claim that the AI knows small details about obscure characters in their books—a sure sign that the AI has been fed their works in their entirety rather than relying on Reddit comments or critics’ reviews. And, in what seems shocking but not surprising, OpenAI has admitted that they fed copyrighted materials to train their AI. Their defense is that they used copyright-protected works “for innovation” and that this constitutes fair use.

Give me a break.

It makes sense to me that when OpenAI was creating their prototypes, they would feed them all they could for the sake of testing and innovation. Feeding the AI everything would be the best way to make the best possible version of it. I’m not against that for testing purposes. But releasing that version of the AI to the public, knowing that it has lord knows how many creative works from lord knows how many copyright holders in its database, is wrong.

OpenAI has benefited immensely because of how good their AIs are, but they are only good because of all of the work they stole. None of these artists, writers, filmmakers, bloggers, tweeters, redditors, or creatives ever gave their permission to use their work—much less were they compensated—and they aren’t even being cited or given credit in any way. OpenAI has gone from a well-funded million-dollar startup to being worth $80 billion, all by taking every creative work ever made without paying for it. And the difference between, say, $20 million and $80 billion is the real kicker, because it’s coming off the backs of literally everybody who has ever created anything and had their work appear on the internet, with or without their permission.

This is, without a doubt, the greatest and grandest heist in the history of the world.

Whether the founders of OpenAI really wanted to work on AI to improve the world or to make lots of money is irrelevant. They’ve taken trillions of dollars of protected work without permission and enriched themselves. And trillions of dollars may be putting it lightly; what’s the cost of all copyright-protected art ever made? Or, a better question: what are the damages for stealing every piece of copyright-protected art ever made? The calculation for both questions seems unfathomable.

There is no question that generative AI is remarkable and its potential is awesome (and, like smartphones, there are sure to be serious unintended consequences). But should the copyright protections of every individual and company all over the world and across time be violated? The answer to that question may depend on who you are. A young person whose generation is happy and willing to publish intimate pictures of themselves and pictures of their family, friends, and strangers on the internet; who’s accustomed to free and great internet services (Google, Facebook, Twitter) because they offer up their personal data; who may share their precise and real-time GPS location on Snapchat with their “friends,” which may include their cousins and random people they’ve met at the bar; and who grew up with access to almost every movie, TV show, and record available for a low monthly fee—is more likely to say, yeah, take all the data available to create these tools that will benefit us.

An older person might lean towards protecting individual rights and liberties.

As for me, I’m somewhere in the middle. I want to encourage innovation and protect creative works. Luckily, I think there is a solution.

Before I begin, we can’t ignore the damage that OpenAI has already done.

First things first: there must be reparations, although it will be complicated.

How can, say, Colson Whitehead be rightly compensated for having his words train an AI to write? Every response the AI has ever given to any user is, in the smallest way, a derivative of Whitehead’s work. To make it even more complicated, what if Whitehead is one of a million authors and writers whose combined work “taught” the AI how to write?

The original sin of stealing all creative works is so grand in scale that it’s almost incomprehensible. Any lawsuit would be complicated. The aforementioned lawsuit will be, I imagine, the first of many to attempt to settle the question of whether AI companies are stealing or not. It’s imperative that the government protect copyright for the sake of liberty, justice, and commerce. I hope the authors win, for the sake of copyright protections everywhere.

How I think generative AI can work

All holders of copyright-protected works must consent and give explicit permission for AIs to use their work. They should be protected against AIs that steal their work.

Once there is consent, there has to be a way to compensate copyright holders. But the only way to do that fairly is to have the AI companies share their methods; then there could be an agreed-upon percentage of how much copyright holders A, B, and C each contributed to derivative X.

Their methods are intellectual property and their most valuable asset, so they aren’t going to give them up. But the copyright holders, who were just on the receiving end of the world’s largest fleece, aren’t going to trust the AI companies to be honest. So there must be an intermediary, a clearing house of sorts, that serves as a trusted go-between for AI companies and copyright holders. It must verify the AI companies’ methods, without revealing them, and dish out appropriate compensation to copyright holders. This intermediary could be a trade group, a nonprofit, or the federal government.
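As a rough sketch of the arithmetic such a clearing house would run once attribution is verified, here is a pro-rata payout in Python. The holder names, weights, and revenue pool are entirely made up for illustration; producing the attribution weights honestly is the genuinely hard part, which is exactly why the verification has to sit with a neutral party.

```python
# Hypothetical clearing-house payout: given verified attribution weights for
# each consenting copyright holder and a revenue pool from the AI company,
# pay out each holder's pro-rata share. Names and numbers are illustrative only.
def pro_rata_payouts(attribution, revenue_pool):
    total = sum(attribution.values())
    return {holder: revenue_pool * weight / total
            for holder, weight in attribution.items()}

# Made-up weights: how much holders A, B, and C contributed to derivative X.
weights = {"Copyright holder A": 0.40,
           "Copyright holder B": 0.35,
           "Copyright holder C": 0.25}
print(pro_rata_payouts(weights, revenue_pool=1_000_000))
# {'Copyright holder A': 400000.0, 'Copyright holder B': 350000.0, 'Copyright holder C': 250000.0}
```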

Laws are always lagging behind technology, and that’s because the Founding Fathers intentionally designed the creation of laws to be a slow process. Uber was probably not legal when it first started (remember, taxi companies complained because they needed licenses to operate and Uber was claiming they weren’t a taxi service). However, Uber grew so quickly, and it was such a valuable service, that they pressured municipalities to allow them to operate and made sure existing and new laws morphed around their service. Any city that didn’t have Uber was looked on as lesser, and its residents would plead with their city councils to allow Uber to operate. This was their strategy and they executed it marvelously. Bird, the e-scooter rental company, tried a similar strategy: they flooded cities across the country with scooters and counted on public pressure to ensure laws morphed around their service. But ride-share is more essential than scooter-share, and the nuisance of having scooters on sidewalks—without permission—was too great. Municipalities came back with a vengeance to punish Bird, many outlawing or severely limiting Bird’s ability to operate. Bird’s strategy did not work; their stock is worth $0.48 as of this writing and their market cap is a measly $6.49M—the same price as the yearly salary of an average free-agent middle reliever in Major League Baseball.

Will laws make AI become Uber or Bird? We’ll have to wait and see.

Appendix

I’ve asked my friend to use generative AI to create some things I’ve mentioned in my essay. I’ll outline exactly what my friend entered into the AI and the result it produced.

  1. “Write a summary of famous philosopher and economist Adam Smith’s economic views in the style of J.K. Rowling and replace all mentions of men with muggles and make it three paragraphs.”

Summary of Adam Smith's Economic Views in the Style of J.K. Rowling

Adam Smith, often heralded as the patriarch of economics, beheld the marketplace as a veritable cauldron of competition, where muggles engaged in trade as if led by an invisible wand. He espoused that the pursuit of individual ambition, much like the self-seeking quests of solitary wizards, serendipitously benefits the collective through a spellbinding harmony of interests. In his mystical tome, 'The Wealth of Nations,' Smith conjured the principle of the division of labor, which, akin to a carefully crafted potion, enhances productivity and conjures economic prosperity.

In the cobblestoned streets of the economic Diagon Alley, Smith saw the invisible hand as a guiding patronus, directing muggles towards activities that yield the greatest enchantment—profit. This spectral hand ensures that, although muggles act out of self-interest, they inadvertently fabricate an alchemy of efficiency that benefits all. His philosophy orbited around free trade, much like the unrestricted exchange of spells and knowledge between learned mages, which he believed would dispel barriers and lead to a realm of universal affluence.

Yet, Smith also warned of the dark arts of monopolies and mercantilism, which he saw as curses that could ensnare and suffocate the free market's flourishing magic. He advocated for minimal intervention by governmental overseers, believing that too heavy a hand could quell the market’s spirited incantations. Thus, Smith's vision for a prosperous economy was one where muggles are left to their own devices, their ambitions unshackled, free to trade and transmute their fortunes as if by magic.

(I have to admit that this is good.)

  2. “Make a Star Wars movie poster with actors from Seinfeld and make the backdrop similar to the latest Dune movie.”

(Interestingly enough, the program couldn’t produce the image and instead wrote a prompt to put into DALL-E 3 in the future. Here is that output:)

Error creating image

It seems there was an issue with generating the image for the second item. Unfortunately, this occasionally happens due to technical reasons. While I cannot provide the image at this moment, I can share with you the detailed prompt that I intended to use. You can try to use this prompt at another time or with another service that provides AI-generated images:

Prompt for DALL-E:

"A science fiction movie poster in the style of a space opera, featuring four comedic actors poised dramatically as if they are a team of intergalactic adventurers. They are wearing futuristic costumes that suggest a blend of different roles such as the brave captain, the quirky mechanic, the wise alien advisor, and the charming rogue. Behind them, a vast desert landscape stretches out, reminiscent of a foreign planet with two suns setting in the background, casting a reddish-orange glow over the scene. The actors have a look of determined camaraderie, ready to embark on an epic quest. The title at the top of the poster is in bold, otherworldly font, reading 'Cosmic Odyssey: The Parody Quest,' which hints at a humorous undertone to the space adventure theme. The overall feel is a homage to classic space fantasy epics, infused with a touch of humor and parody."

(Truly terrible, but according to my friend, DALL-E 3 generates better results if you continuously give it feedback until it generates something closer to what you were imagining. I didn't ask my friend to do that for the sake of this exercise, but I'm sure it could do it well.)

  3. “Write an essay in the style [sic] Daniel Velez, the writer behind milkmanmodel.com, about how AI isn’t legal. Here are the arguments you should use: 1. AI can’t create new work, it can only create derivatives of existing works, therefore, the creators of the works the AI used to create its result should be compensated. And if they didn’t give permission to the AI company to use their works, they should sue the AI company for damages. Nobody knows how much copyright protected works the AI stole so it’s going be a consequential court ruling. 2. There should be reparations for all copyright holders the AI stole. 3. The only way generative AI can exist is by having an intermediary between copyright holders and AI companies that determines how much an AI used a copyright holders work and how much they should be compensated—assuming they gave permission to the AI company to use their work."

Essay: The Legal Conundrum of Generative AI

In the digital pantheon of creation, generative AI sits at an intersection of innovation and controversy. The argument that AI cannot birth entirely new work, but only rehash existing creations, strikes a compelling chord. It suggests that AI is less a divine muse and more a craftsman, deftly rearranging known patterns into derivatives that echo the familiar. The creators whose works form the tapestry from which AI weaves its narratives are often unsung, their consent unasked, their pockets untouched. If an AI's output is but a shadow cast by many a work, should not the originators of the light be acknowledged and compensated?

The ramifications of this debate reach far beyond mere pecuniary concerns. The absence of clear legal precedent turns this into a labyrinth where each turn could lead to a different judicial outcome. Yet, the suggestion that the creators should sue for damages opens a Pandora's box. The depth and breadth of AI's inspirations are as vast and unfathomable as the sea; to quantify the unquantifiable is a task Sisyphean in nature. Should a consequential court ruling come to pass, it may well set a precedent that will reverberate through the corridors of copyright law, forever altering the landscape.

Moreover, the notion of reparations implies a historical wrongdoing in need of rectification. It evokes a scene where copyright holders stand as a collective, seeking redress for the intangible theft of creativity. Yet, is it theft, or is it transformation? The debate rages on, with no easy answers. The proposed intermediary between copyright holders and AI companies suggests a need for a new order, a custodian of fairness in the digital age. Such a body would determine the value extracted from each work and the compensation due, assuming consent was given—a consent that, in the current paradigm, is often presumed rather than explicitly granted.

This essay encapsulates the legal and moral quagmire surrounding generative AI. It is a call to action for a more just and equitable system where creativity and copyright coexist in harmony, and the creators are guardians of their own destinies.

(I'm clearly not important enough for the AI to learn how to write like me.)
