Please see the disclaimer.

Assumed Audience: Artists, hackers, and anyone affected by “AI” models. If you are affected and want to do something about it, please fight back. One way is to support Matthew Butterick’s two lawsuits.

Discuss on Hacker News.

Epistemic Status: Confident.

I just had an email conversation with an AI evangelist. It opened my eyes.

This particular evangelist, Romain Beaumont, came to my attention through a Hacker News post about a request asking Beaumont to make his scraping tool opt-in for website operators.

If you don’t follow that link, let me summarize it: his image scraping tool does not respect robots.txt; instead, it has a custom opt-out procedure for website operators, and users of the tool can disable even that!

In other words, he gave users of his tool a way to opt-out of the opt-out!
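To make the contrast concrete, here is a minimal sketch of what actually honoring robots.txt before fetching an image could look like, using only Python’s standard library. This is my own illustration, not code from Beaumont’s tool; the user agent string and the example URL are hypothetical. Note what’s missing: there is no switch to turn the check off.

```python
# A minimal sketch (not Beaumont's tool) of checking robots.txt
# before downloading an image, using only the standard library.
from urllib.parse import urlsplit, urlunsplit
from urllib.robotparser import RobotFileParser

USER_AGENT = "example-image-scraper"  # hypothetical user agent name


def allowed_by_robots(image_url: str) -> bool:
    """Return True only if the site's robots.txt permits fetching image_url."""
    parts = urlsplit(image_url)
    robots_url = urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))
    rp = RobotFileParser()
    rp.set_url(robots_url)
    try:
        rp.read()  # fetch and parse the site's robots.txt
    except OSError:
        return False  # if we can't even check, don't scrape
    return rp.can_fetch(USER_AGENT, image_url)


if __name__ == "__main__":
    url = "https://example.com/images/photo.jpg"  # hypothetical example
    if allowed_by_robots(url):
        print("robots.txt allows fetching", url)
    else:
        print("robots.txt forbids fetching", url)
```

That’s the whole cost of being a good netizen: a dozen lines, checked unconditionally, with no opt-out of the opt-out.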

Well, he locked the thread before I got to it, so I decided to email him since his email was public.

Now, Beaumont did not give me permission to post the email conversation we had, so I’m only going to summarize it using his publicly available writing.

His refusal is ironic, though: while he withheld permission from me, Beaumont claims that it’s unethical to require opt-in rather than opt-out for scraping:

Why not be a good netizen and make it so it only works on sites that have opted in? I’m happy to give you a PR to do that, if you like?

That would be unethical, you can read the readme to understand why.

Letting a small minority prevent the large majority from sharing their images and from having the benefit of last gen AI tool would definitely be unethical yes.

Let me clarify that for you: Beaumont believes that if you post your work on your website, it is unethical for you to prevent others from using it however they want.

Yes, even if you mark it “All rights reserved.”

That’s scary.

Why does he believe this? The clue lies in the last post he made on the request:

It is sad that several of you are not understanding the potential of AI and open AI and as a consequence have decided to fight it.

In other words,

You will be assimilated. Resistance is futile.

“Freedom is irrelevant. Self-determination is irrelevant. You must comply.”

That’s terrifying!

He seemed to think that AI was a pure good and that it should be pushed forward regardless of the consequences.

When I mentioned some problems, like the lack of attribution, he said that they could be fixed.

If they can be fixed, why weren’t they fixed before companies released these models?

The truth is that these AI models are black boxes, and their creators don’t understand them. You can’t fix what you don’t understand.

Even worse, they are black boxes that we use to make decisions for us.

IBM once had a fantastic opinion about computers making decisions:

A computer can never be held accountable. Therefore, a computer can never make a management decision.

IBM Slide

And it gets worse: companies will use that opacity to diffuse and destroy liability:

If AI companies are allowed to market AI systems that are essentially black boxes, they could become the ultimate ends-justify-the-means devices. Before too long, we will not delegate decisions to AI systems because they perform better. Rather, we will delegate decisions to AI systems because they can get away with everything that we can’t. You’ve heard of money laundering? This is human-behavior laundering. At last—plausible deniability for everything.

Matthew Butterick

Do you want to live in that world? I don’t.

Yet AI evangelists seem to think that those bad things will just not happen, as though AI itself can’t be used for bad.

That’s either malice or wishful thinking.

In the case of Beaumont, I’m inclined to think there’s no malice, mostly because he talks as though AI will benefit us:

You will have many opportunities in the years to come to benefit from AI. I hope you see that sooner rather than later. As creators you have even more opportunities to benefit from it.

I told him that it’s all well and good if he’s right, because if he is, AI will happen over my protests and the protests of others, and everything good will come to pass. People like me will just slow it down enough to make people think and work through the problems, rather than dismissing them as “more training needed.”

But if he’s wrong, and he is, it will be catastrophic, even for AI evangelists.

So I will not go quietly into the night. I will show that AI will not increase the “uniqueness” of art and other content. I will refuse to engage, become an island unto myself, and show that human content is, and will always be, king.

And if people try to use my work, I will fight every step of the way, even if I have to go to law school myself.

I will not be assimilated.