All of the following is part of my work in progress on luddism. Specifically, the work started with my stupid project aisplaining.us. I encourage you to take a look at it before you keep reading.
It started as a joke. As part of the ridiculous software project, I wanted to make an Instagram plug-in that takes a picture and sends it to ChatGPT to be translated into a descriptive paragraph. The paragraph would then be forwarded to DALL-E 3 as a prompt to create an image. The resulting image would be posted without commentary on the user’s Instagram account.
I had many ideas I wanted to explore with that project: the nature of synthetic images, the delegation of human agency in social media, and the inevitability of AI slop in digital media. As it turns out, all of these topics could be interesting, but the real thing that obsessed me was hiding in plain sight: the revised prompt.
OpenAI’s API documentation clearly states that “the model now takes in the default prompt provided and automatically re-writes it for safety reasons, and to add more detail (more detailed prompts generally result in higher quality images).” (see the screenshot if you don’t believe me!). Of all the things that are problematic with generative AI, and especially with OpenAI’s approach, this is my favorite (the most difficult being how invisible the environmental impact of these technologies is for regular users). I love it because it’s arrogant, condescending, moderately petty, and ultimately demeaning in just 30 words. It’s aisplaining.
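To make the mechanism concrete, here is roughly what it looks like from the API side. This is only a minimal sketch, assuming the current openai Python client (the plain prompt is an arbitrary example of mine, not from the project); the point is that the rewritten text comes back in a revised_prompt field that only API callers ever see.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Send a deliberately plain prompt to DALL-E 3.
result = client.images.generate(
    model="dall-e-3",
    prompt="a person reading a book in a park",
    size="1024x1024",
    n=1,
)

image = result.data[0]
print("what I asked for: a person reading a book in a park")
print("what it drew from:", image.revised_prompt)  # the silently rewritten prompt
print("image url:", image.url)
```

As far as I can tell, the consumer-facing interfaces built on top of this never surface that second string.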
This approach has two issues. First, it’s invisible to end users. There are workarounds (again, see the screenshot); however, those live in the platform’s API and are not part of the tool’s general interaction flow. This is a fairly blatant way of deceiving the user, a modern take on the fine-print scheme beloved by banks and other modern vampires.
Second, this is framed as a “safety” issue. Ah, this is the important one. What does that mean?
In aisplaining.us, you can either try the live version of my toy or look at the data I have gathered. The interactive toy is simple: upload or take a photo, and the photo gets sent to OpenAI, first for text generation and then for image generation. The page returns the original image, the original prompt based on ChatGPT’s description, the generated image, and the revised prompt. It’s just a tiny toy to let you see what an automatic rewriting “for safety reasons, and to add more detail” implies.
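If you are curious about what the toy does under the hood, the whole round trip fits in a few lines. The following is a minimal sketch of that pipeline, assuming the current openai Python client and a vision-capable chat model (gpt-4o here); the live site may differ in its details, and the function name aisplain is just mine.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def aisplain(photo_path: str) -> dict:
    """Photo -> ChatGPT description -> DALL-E 3 image, keeping both prompts."""
    with open(photo_path, "rb") as f:
        photo_b64 = base64.b64encode(f.read()).decode("utf-8")

    # Step 1: ask a vision-capable model to describe the photo.
    description = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this photo in one detailed paragraph."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{photo_b64}"}},
            ],
        }],
    ).choices[0].message.content

    # Step 2: use that description, verbatim, as the image prompt.
    generation = client.images.generate(
        model="dall-e-3",
        prompt=description,
        size="1024x1024",
        n=1,
    )

    return {
        "original_prompt": description,                        # what I sent
        "revised_prompt": generation.data[0].revised_prompt,   # what was actually used
        "image_url": generation.data[0].url,
    }
```

Everything the page shows is in that returned dictionary; the interesting part is simply the difference between original_prompt and revised_prompt.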

One of the most remarkable things is how consistently the rewriting varies the gender and ethnicity of the people described and adds colorful adjectives to the original prompt. I understand that DALL-E 3 was probably trained to excel with some precise wordings and prompts, but the level of specificity is interesting.
But that’s not even the biggest problem. This is supposed to be a “safety” mechanism. The discourse in Ethical AI is packed with ideas about “safety”: guardrails, model alignment, human-in-the-loop… all oriented toward making these technologies “safe.” But what does “safe” mean? In the aisplaining toy case, the tool is prevented from defaulting to its weirdness, characterized by profound gender and race biases and significant trouble creating coherent backgrounds. The results must be “safe” and of “higher quality,” even if that requires rewriting the user’s input.
And that is the actual twisted point: OpenAI decided that to make the models “safe,” they needed to take over from the user, rewrite their prompts, and not inform them about it. That is an original approach, a “human-gtfo-of-the-loop” approach—one that is not even made explicit to users, lest they start to suspect their shiny new toys.
It seems like the best way to make image generation “safe” is to edit the prompts we write to make them machine-clear, that is, less racist, sexist, and bland. This understanding of AI safety implies removing people from the prompting process. Oh, the people: we are not even good enough to prompt anymore!
This is the main point of my little experiment: if we need “ethical AI” or “safety” in our models, it’s already too late. With generative AI, we (as a society in the Western world) have fallen for the trick of a rentier class disguised as disruptive innovators, who have aggressively monetized genuinely interesting technology to extract as much wealth as possible so they can amass it for nobody’s profit but their own.
We have allowed these craven homunculi to throw into the world tools that have unthinkable and unhinged effects on almost all aspects of our lives, from the environment and politics to culture and personal relations. We don’t know the implications of having easy, ubiquitous access to machines that are so good at pretending they understand. But we have accepted that these tools are now here and that we cannot do anything about it. Except, of course, develop “ethical AI,”1 which at worst is just a patch that allows some form of aisplaining: editing your prompt, making the system transparent, making it explainable… but still not answering the question “why do we need this?”
My toy project highlights how (some) ethical AI discourse and practices are a ridiculous software project—an imaginary solution to a problem we have created but are too lazy to solve. If the only solution for “safety” and “efficiency” is to strip away layers of human agency from the system, if the weak point of such a system is the human, then it is, by definition, an inhuman system.
The question is, what should we do with inhuman systems?
As a dilettante ethicist, my take is simple: if a system is inhuman, we should break it. Turn it off. Refuse to engage with it. No matter what promises we are given about a better future that will never be realized, the very nature of such systems is the removal of our agency and its substitution by a labor extraction system with a nice interface and evil intentions.
This will take work. The hobgoblins pushing these systems want to make them invisible and ubiquitous. They want to hide them in our mundane practices, so that by the time we notice it will be too late: we will be trapped in a world in which AIs prompt us to complete their tasks for an hourly wage. We need ethical AI, don’t get me wrong. However, a condition for ethical AI is the capacity, no, the duty to be able to say no, to refuse, to turn off, and to deny these tools.
These are my ethical AI suggestions for now: talk to others and share tips and tricks for getting around these tools. Share the talks, books, podcasts, and anything else that helps you refuse to become a prompt. Talk to your union if you are worried about how an AI tool behaves or how it has been applied to your work without your consent. And if the systems are already in place, look online for ideas on jailbreaking, prompt injection, and forcing hallucinations. Many of these systems are only helpful if managers can pretend they work, so making them not work on purpose could be the most ethical use of AI.
We need AI safety, regulation, and ethics. But these things rely on us, people, having the most basic right possible: the right to say enough and the right to say no. AI Ethics does not start with companies or academics. It begins with people simply saying no.
I have enormous respect for my friends and colleagues who work in this area! Their efforts are outstanding, and I can only admire anybody who looks at this hot mess we call AI and its economics and thinks, “We can fix that.” I am just tired of trying to fix things. I want broken things, or to break things.
What do you mean, “this system we made is made for the system’s and greed’s sake”? What do you mean, “it’s an inhuman system because, really, the system hates the organically human even though it was trained on our data”?
Oh wait, yeah, no. You’re right. The developers have deemed humans unfit to be anything more than a loose, suggestive input machine because “safety.” Humans aren’t safe; our history, our culture, and our desires aren’t safe. But if that’s really the case, how did we survive those many thousands of years before we started cultivating the earth and became farmers? Hmmmmmmmm.