joy cometh with the morning

They Not Like Us

Photo of the moon over Patcham

Everyone is different. Some people like to go to the club, others prefer to stay in; some people like spicy foods, some seldom eat anything that isn't beige; some people like rap and some country &c.&c. Our differences as people are what make up the rich, diverse tapestry of humankind. However, occasionally I will encounter examples of behaviour that's just completely alien to me.

A few months back I read this article in the Hollywood Reporter. The article is about a company that uses LLMs to generate podcasts at a rate (they claim) of over 3000 a week. What do the people behind it have to say for themselves?

“I think that people who are still referring to all AI-generated content as AI slop are probably lazy luddites. Because there’s a lot of really good stuff out there,” Wright said.

We'll leave aside the use of Luddite as a slur here—while, as I've mentioned, I have very complicated and mostly negative feelings towards LLMs, it's true that some people are doing interesting or useful things with them. Matt Webb's poem clock is fun, Parse The Bill seems useful. However, this is not true of the majority of LLM-generated content, and I don't think it can possibly be true of the 3000 podcasts this company is churning out a week[1]. What kind of quality control can there be?

The content team, led by Katie Brown, a former lifestyle television host and home goods expert, gives each podcast a title, creates an outline of the podcast, with the content filled out by AI, and assigns it one of the personalities as a host. Other team members do a final check and add in music and sound. The shows are also spot-checked periodically.

Oh.

The company is able to produce each episode for $1 or less, depending on length and complexity, and attach programmatic advertising to it. This generally means that if about 20 people listen to that episode, the company made a profit on that episode, without factoring in overhead.

I think of podcasts—even the most people-badly-summarising-a-wikipedia-article-with-jokes kind—as some kind of creative endeavour, and these people see a Unit Of Content. Podcasts are a business, and like any media business there are the Business People who work for the company whose job is to think about things in terms of Units Of Content. In the past, however, these people were required to interface with the people who made the content. Even the most SEO'd-to-hell-and-back mush had to be, at some point, written by someone. The AI Difference™ is that the Business People don't at any point need to involve anyone who might actually care about what they're putting out.


Flashback: just over a year ago, I saw a video of a guy that had been shared in a Slack I'm a member of. He calls himself an "AI SEO Expert"—a sadder short story than the one Hemingway wrote—and he's automated his content production by asking Perplexity to summarise what's new in AI over the last few days, getting ChatGPT to rewrite it, then posting it to LinkedIn et al.

As usual, if we leave aside any concerns about the ethics of using the AI models themselves[2], this feels bleak as anything. It is the kind of thing that people would've made jokes about a few years ago when LLMs were first blowing up. He says it saves him hours a week, but surely understanding what's going on in AI is... useful to him? Outsource the initial research bit, maybe, but isn't that kind of thing valuable for keeping up with the field you're allegedly an expert in? And don't you want to maintain some quality control over what you post?

That he's admitting to it also feels very strange—but the audience concern he anticipates is not "why am I following you if all your stuff is AI-generated?", it's "but doesn't Google downrank AI-generated content?" And here we're reminded: he's not just an AI guy, but an AI SEO guy. SEO itself is the understandable but fairly unfortunate process of deforming the web to better conform to the tools people use to access it. This is the next step down from SEO; SEO tended to at least be bottlenecked somewhat by requiring people to generate the stuff.

But but but: because the corpuses of data used to train the models need huge, huge amounts of structured data, one of the things you find is that they're heavily trained on data that's already search-engine optimised!

It contains less about how humans see the world than it does about how search engines see the world. It is a dataset that is powerfully shaped by commercial logics.

I know this is basically the standard complaint/jeremiad people have been making about this stuff for ages, but it was somewhat astonishing that someone touting himself as the vanguard of this stuff is so willing to say "yes, I am not even slightly involved in the creation of stuff that has my name on it; no, I am not ashamed".


Back-of-the-envelope calculations based on the numbers in the article seem to suggest they're getting maybe 35 listens an episode. Then again, the article says "It takes about an hour to create an episode" and also that the company has a team of eight, four of whom work on content, so each of them would have to be working 150 hours a day for that—so, y'know, who knows what they mean by that.

The idea behind the company came after Corbin accidentally developed a hit podcast during the pandemic in which he read daily CDC reports, and then branched out into weather reports and other shows that took off, including A Moment of Silence (an actual minute of silence). At the time, they were not using AI.

Now this is interesting. See, the CDC reports (and the weather reports, and even the minute of silence!) are things that are actively useful or beneficial. The weather report thing seems a bit weird but sure, whatever, if people like it I guess. I'm actually somewhat amenable to "here's a boring thing that people want read out, I'm going to get the robot to do it"[3]. I'm not that steamed about them doing stuff that is boring, annoying and not valuably done by a person. (What is or isn't valuably done by a person is, of course, quite contested.)

The article then goes on to describe various AI personae the company has created to 'host' its podcasts, including one called "Nigel Thistledown". They then talk about ways that the 'hosts' could interact with listeners, but:

“I am not going to create a personality that somebody has a deep relationship with,” said William Corbin, co-founder and CTO of the company.

You know, the funny thing is that you don't get to choose that? People form deep personal attachments to cartoon animals. I mean, based on the descriptions here I cannot really imagine anyone developing a parasocial relationship with Nigel Thistledown or whatever, but.

The company now consists of a team of eight, with four working with content. Podcast topics are selected with the help of AI, based on Google and social media trends, and then the team may launch five different versions of the show with different titles to see what performs the best. The podcasts are often titled after simple SEO search terms, such as Whales, so that they’re discoverable. The shows that do stick can then be replicated and scaled.

And we're back to "The AI SEO Guy".

“We believe that in the near future half the people on the planet will be AI, and we are the company that’s bringing those people to life,” said CEO Jeanine Wright, who was previously chief operating officer of podcasting company Wondery, which has recently had to reorganize under the changing podcast landscape.

I have literally no idea what's meant by "half the people on the planet will be AI", but I'll note that immediately before reading the THR article I had read this article in Rolling Stone about the decline of narrative podcasts, in which Wondery is mentioned as having been gutted and laying off a bunch of staff.

I very well understand doing things you're not passionate about to pay the rent, but this whole thing is based on making what are notionally creative works while being actively indifferent to their form, content, and everything else about them. I feel like I have more in common with serial killers than with people who are proud of the fact that they run a business that churns out thousands of AI-generated podcasts a week.


  1. In the wake of the James Somerton plagiarism stuff, I remember Sean making the point that obviously this guy was plagiarising stuff because there's no way to churn out content of that kind at that volume if you're not. ↩︎

  2. It's interesting: in the past I have had generally negative feelings toward intellectual property. I guess the problem is that I operate on a somewhat vibes-based model here; I don't have a clean Stallmanesque clarity of belief about it. It's bad to use IP law to stop comic artists from selling drawings of Batman at conventions or whatever, but I think it's fair for writers and artists not to want their work to be used as grist for the mill of a massive business in this way—the problem here is the industrial scale. A lot of software will have a free "personal/hobbyist" version and a paid-for "business" version, and I think maybe this is the mental model I'd cleave to. ↩︎

  3. I have a friend who, while doing their postgraduate studies, had a job recording course readings for visually impaired students. I imagine that's the kind of thing that you'd just do with the robot now. This is the kind of area where I have legitimately conflicted feelings. On the one hand, it would have been a job (however low-paid) at one point; on the other, you've got a very quick and efficient low-cost way to make something more accessible. ↩︎