This Medium Sends a Pretty Confusing Message

Why don't I want to read an AI-generated novel? Drawing on communication theory can help articulate some of the issues with AI-generated content.

Yesterday’s main character in AI ethics discourse on Twitter was Sudowrite, a startup with an AI storywriting tool that promises to “help you write stories without crying”. As many have pointed out, writing isn’t just about the final written product. I mean, if you found out that your spouse’s wedding vows were AI-generated, I’m willing to bet that you might feel at least a twinge of disappointment, no? We all like dunking on generative AI being used for content creation, but recent events have many of us thinking about what the value of art and communication even is in today’s world. And beyond the obvious problematic roots and effects of generative AI – stolen labour, job loss – how can we describe the icky feeling some of us get at the suggestion of consuming AI content?

This felt like a good time to revisit Marshall McLuhan’s writing, which I’d only encountered in passing in university. McLuhan is probably best known for his quote, “the medium is the message” – which is to say that the most salient information in any form of communication is encoded in the medium of choice (e.g. TV vs. film, print vs. speech, etc.) as opposed to the content that is being explicitly transmitted. In other words, the choice to tell a story as a TV show rather than a film, and the effect that the medium has on how an audience participates with it, is richer in value than the actual story being told. Keeping up with a TV show like Succession isn’t just about seeing how many ways Kendall Roy can drop the ball; there’s a special thrill that we get from talking about the episodes with our friends each week, listening to the podcasts, and sharing memes and fan theories.

For transparency, I’ve only read the first half of “Understanding Media” (1964) so far, but I’ve been surprised at how many of the things that he talks about in it are relevant in today’s conversations on AI. I should point out that I don’t agree with many of his arguments and conclusions, and that none of what I’m writing here is an endorsement of him or his work. For our purposes here, I just want to focus on what I think is a deceptive nuance in AI art and AI-generated content, where I think that looking at things from this “medium is the message” perspective could shed some light on part of why using AI-generated work for creative endeavours can feel so offensive to artists and art-appreciators.

At first glance, AI-generated art and text can often look like the “real” thing. Sometimes there’s an extra finger or two in a synthetic digital painting, or a confusingly constructed metaphor in a poem, but with clever prompting and polishing it’s possible to use AI to generate work that passes the sniff test. On its face, AI content seems to meet the expected criteria for art/poetry/fiction, and from that perspective I can understand why AI artists and corporations (e.g. Buzzfeed, with its plan for AI-generated content) would expect the public to engage with that content in the same way that it would engage with “handmade” content.

McLuhan points out, though, that every medium is composed of other mediums within it, in a sort of nesting-doll arrangement. A film, for example, contains another medium: speech. In our case with AI-generated content, I think it’s important to make the distinction that AI content isn’t just content; it’s AI content. So if you use AI tools to generate something that looks like a photograph, the medium isn’t photography. If you use AI to create something that looks and feels like a novel, the medium you’re working with isn’t actually a novel. The traditional medium is wrapped by the AI medium, which fundamentally changes the experience. And since each medium invites its audience to engage with it in a different way, it shouldn’t be surprising that AI-generated content elicits substantially different responses than “traditional” content.

Borrowing an idea from Graham Harman’s “object-oriented ontology”, I think it’s valuable to consider any artistic artifact (e.g. a novel or a painting) as an artist’s attempt to realize some non-physical idea. When we read a novel or appreciate a painting, we place ourselves within the artwork to experience it: we ask ourselves what the artifact represents, or consider the choices the artist had to make, and in doing so we construct an image of the idea or concept that the artist is trying to communicate. The way that we engage with an artifact depends on the medium, and each medium, according to McLuhan, invites people to participate with it in different ways and to different degrees. A reader of a traditional novel may ask themselves “why did the author choose this word?”, but if the reader knows that many of the words were chosen by an AI system, it feels sort of silly to ask that kind of question.

All summed up, I’d argue that AI content allows for dramatically less participation than traditional content, because there is no higher-level idea that AI systems are trying to express. We don’t need to dig into technical details here, but current AI systems have nothing that resembles a worldview, a conscience, or original thought. So what am I supposed to be engaging with? What do generative AI folks even want people to get from their work, beyond “oh yeah that’s a pretty picture”, or “oh yeah that does kind of sound like Drake”?

Why does this even matter? I’m not here to make a statement on whether AI content is “good” or “bad”. It’s content – if you enjoy it, good for you! The issue that I’m highlighting here is that we’re seeing grifters and corporations actively trying to hoodwink people into accepting content that is fundamentally not the same as the music, literature, and art that are part of the fabric of our societies. The content that’s being pitched is, at its core, designed for consumption and not participation. Is that not enough reason for the AI world to think about these things a bit more critically?