What kinds of content can humans do better than AI?

Sometimes, you just need the human touch...

What kinds of content can humans do better than AI? The last few posts here have, I have to admit, been a bit of doom and gloom. I’ve looked at how conversational AI is going to squeeze search traffic to publisher sites, and at how adopting AI for content generation will remove the key competitive advantage of publishers. 

But there are areas of content creation where publishers can use their ability to do things at scale and the talent they have to make great work that audiences will love.

I’ve broken this post out into three parts, covering three different kinds of content. Today, I’m going to look at one which is close to my heart: reviews. Tomorrow and Thursday I’ll look at two other examples where humans can win.

Doing reviews right

One of the points that I made last week was that affiliate content, in particular, was susceptible to the shift to conversational ways of working with computers. However, that doesn’t mean that reviews are going to disappear. Certain types of article are likely to remain an area where humans will continue to produce better content for other humans for the foreseeable future.

For many sites, creating content for affiliate purposes has involved a lot of round-up articles, often created at least in part with what gets called “desk-based research”. You are not reviewing a product you have in your hand, you are researching everything about it that a consumer could possibly need to know, and summarizing it helpfully.

I’ve sometimes argued this was OK in certain circumstances, as long as you flag it and the amount of work that goes into the article is high. Just casting around for whatever is top-rated on Amazon doesn’t cut it because a reader can do that quickly themselves. But if you’re saving someone hours of time in research, you’re still performing a valuable service for them.

That kind of content isn’t going to survive the increased use of conversational AI because one thing that LLMs will be excellent at is ingesting lots of data and combining it into a cogent recommendation. LLMs can read every piece of Amazon feedback, every spec sheet and every piece of manufacturer data faster and more accurately than any human can. If your content is just research, it’s not going to be viable in the world of AI.

What will work is direct first-person experience of the product, written to focus on the less tangible things about it. An LLM can read a car spec sheet and tell you about its torque, but it can’t tell you how it feels to accelerate it around a corner. An LLM can look at a spec sheet for a laptop, but it can’t tell you how good the keyboard is to type on for extended periods.

If your editorial teams are focused on what I used to call “speeds, feeds and data” then part of your approach should be to shake up the way they write to get them closer to a more personal perspective. One way to do this is to change style.

Back when we launched Alphr at Dennis, one of the first changes I made to editorial style was to stop using the traditional UK tech plural in reviews (“we tested this and found blah”) and shift to first person (“I tested this and found blah”). Shifting into first person forces the writer into a more subjectively human perspective on the product they’re looking at. It frees the writer from an overly objective point of view into a more personal experience, and that is something which will survive the world of LLMs. Don’t just say what the specs are: say what it feels like, as a human being, to use this product.

Tomorrow, I’m going to look at the second area I think is a clear “win” for human-generated content: the often maligned area of real life stories.

Weeknote, Sunday 12th November 2023

This felt like a busy week, perhaps because it actually was

On Monday I had a call with Peter Bittner, who publishes The Upgrade, a newsletter about generative AI for storytellers which I highly recommend. It was great to chew the fat a little about what I've been writing about on my newsletter, and also to think about a few things we might do together in the future.

Then on Thursday I caught up with Phil Clark, who has also recently left his corporate role and is working on a few interesting projects. Plus I spoke to Lucy Colback, who works for the FT, about a project she's working on.

On Friday we headed down to Brighton for the weekend. Kim was doing a workshop on drawing (of course) and I took the opportunity to catch up with a couple of old friends, including my old Derby pal Kevin who I've known for 40 years. Forty bloody years. How does that even happen?

The three things which most caught my attention

  1. Here's something positive: the story of Manchester Mill, a subscription-based local news email in Manchester that's doing more than breaking even, while remaining independent, creating quality news, and not taking advertising.
  2. Tilda Swinton is just one of my favourite people. That's all.
  3. Mozilla wants to create a decentralised social network, based on Mastodon, that's actually easy for people to use.

Things I have been writing

Last week's Substack post looked at Apple's old Knowledge Navigator video and how computing is heading towards a conversational interaction model. This has some big implications for publishers, particularly those who have focused on giving "answers" to queries from Google: when you can effectively send an intelligent agent out to find the things you want via a conversation, web pages as we know them are largely redundant.

I wrote a post about Steven Sinofsky's criticism of regulating AI. I think Sinofsky is coming at this from a pretty naive perspective, but not one which is atypical of the kind of thinking you'll find amongst American tech boosters. It was ever thus: I feel when writing articles like this that it's just revisiting arguments I was having with the Wired crowd in the late 1990s. The era when "the long boom" was an article of faith, the era when George Gilder was being listened to seriously.

And that's not surprising, really. The kind of people who are loudly shouting about the need for corporate freedom to trample over rights (Marc Andreessen, Peter Thiel) grew up in that era and swallowed the Californian ideology whole. So did a lot of radicals who should have known better.

Things I have been reading

Having seen Brian Eno perform last week I'm working my way through A Year with Swollen Appendices, which is a sneaky book: the diary part is only a little over half of it, so just when you think you're coming to the end you have a lot of reading left to do. It's a good book though. Picking that up means I have had to put down Hilary Mantel's A Memoir of my Former Self, but that will be next on the list.

John G on Monica Chin's review of the Surface Laptop Go 3

Daring Fireball: Monica Chin on the Microsoft Surface Laptop Go 3: ‘Why Does This Exist?':

A $999 laptop that maxes out at 256 GB of storage and has a 1536 × 1024 display — yeah, I’m wondering why this exists in 2023, too. And I’m no longer wondering why Panos Panay left Microsoft for Amazon.

The $999 MacBook Air has 256 GB of storage, 8 GB of RAM, and a three year old processor. I’m kind of wondering why that exists in 2023, too.

Not to say that the Surface Laptop Go 3 is any good – it isn’t – but Microsoft isn’t the only company that has some bizarre pricing at the “low” end of its laptop range.

What a 36 year old video can tell us about the future of publishing

The future is arriving a little later than expected...

I have had the best life. Back in 1989, I left polytechnic with my first class honours degree in humanities (philosophy and astronomy) and walked into the kind of job which graduates back in the 80s just didn't get: a year-long internship with Apple Computer UK, working in the Information Systems and Technology team – the mighty IS&T.

It paid a lot better than my friends were getting working in record shops. And although it was only temporary – I was heading back into higher education to do a PhD in philosophy, working on AI – it suited me. Without it, I wouldn't have had my later career in technology journalism. The ability to take apart pretty much any Mac you cared to name became very useful later on.

Apple treated new interns the same as every other new employee, which meant that there was an off-site induction for a couple of days when we were told about the past, present, and future of Apple. The only part of the induction that I remember is the future because that was when I first saw the Knowledge Navigator video.

If you haven't seen Knowledge Navigator, you should watch it now.

Why is a 36-year-old concept video relevant now, and what does it have to do with publishing? The vision of how humans and computers interact which Knowledge Navigator puts forward is finally on the cusp of coming true. And that has profound implications for how we find information, which in turn affects publishers.

There are three elements of the way Knowledge Navigator works which, I think, are most interesting: conversational interaction; querying information, not directing to pages; and the AI as proactive assistant. I'm going to look at the first one: interaction as conversation, and how close we are to it.

Interaction as conversation

The interaction model in Knowledge Navigator is conversational. Our lecturer talks to the AI as if it were a real person, and the interaction between them is two-way.

Lecturer: “Let me see the lecture notes from last semester… Mhmm, no, that's not enough. I need to review the more recent literature. Pull up all the new articles I haven't read.”

Knowledge Navigator: “Journal articles only?”

Lecturer: “Uhh… fine.”

Note one big difference with the current state of the art in large language models: Knowledge Navigator is proactive, while our current models are largely reactive. Bing Chat responds to questions, but it doesn't ask me to clarify my queries if it isn't certain about what I'm asking for… yet.

That aside, the way conversation happens between our lecturer and his intelligent agent is remarkably similar to what you can do with Bing Chat or Bard now. The “lecture notes from last semester” is a query about local data, which both Microsoft and Google are focused on for their business software, Microsoft 365 and Google Workspace. The external search for journal articles is the equivalent of interrogating Bing or Bard about a topic.

In fact, Bing already does a pretty good job here. I posed a question similar to our lecturer's, about deforestation in the Amazon, to see how it would do:

Not bad, eh?

The publishing model of information – the one which makes publishers all their money – is largely not interactive. The interaction comes at Google's end, not the publisher's. Our current model looks like this:

  1. A person interacts with Google, making a query.
  2. They click through to a result on the page which (hopefully) gives them an answer.
  3. If they want to refine their query, they go back to Google and repeat the process – potentially going to another page.

Interaction as conversation changes this dynamic completely, as an “intelligent” search engine gives the person the answer and then allows them to refine and converse about that query immediately – without going to another page.

Have a look at this conversation with Bard, where I am asking for a recommendation for a 14in laptop:

OK, that sounds good. Now let's drill down a little more. I want one which is light and has a good battery life:

That ZenBook sounds good: so who is offering a good deal?

By contrast, a standard article of the kind which publishers have been pumping out to capitalise on affiliate revenue (keyword: “best 14in laptop”) is a much worse experience for users.

And at the end of that conversation with Bard, I'm going to go direct to one of those retailers, with no publisher involvement required.

If that isn't making you worry about your affiliate revenue, it should be.

The model of finding information which search uses, based on queries and a list of suggested results, is pretty well-embedded in the way people use the internet. That's particularly true for those who grew up with the web, now aged between 30 and 60. It may take time for this group to move away from wanting pages to wanting AI-driven conversations which lead to answers. But sooner or later, they will move. And younger demographics will move faster.

That, of course, assumes that Google will leave the choice to users. Google may instead decide it wants to have more time with “its” users and put more AI-derived answers directly at the top of searches, in the same way that Microsoft has with Bing. Do a keyword search on Bing, and you are already getting a prompt to have a conversation with an AI at the top of your results:

Once again, the best option for publishers is to begin the switch from a content strategy which relies on Google search and focuses on the kinds of keywords which are susceptible to replacement by AI (focused on answers) to content strategies which build direct audience and a long-term brand relationship.

Treat search traffic as a cash cow, to be milked for as long as possible before it eventually collapses. In the world of the Knowledge Navigator, there's not going to be much room for simple web pages built around a single answer.

On Steven Sinofsky's post on regulating AI

Regulating AI by Executive Order is the Real AI Risk:

The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture at a time when the world would be best served by optimism and innovation

Sinofsky’s response is fairly typical of the AI boosters, and as always, it fails to understand the point of regulation. And in particular it fails to understand why an executive order is entirely the correct approach at this point.

Regulation exists so that we gain the benefits of something while ameliorating the risks. To use an area that probably makes sense to Americans, we regulate guns, so we get the benefits of them (personal protection, national security) while avoiding the dangers (everyone having a gun tends to lead to a lot of gun deaths).

AI is the same: we should regulate AI to ameliorate the dangers of it. Now, those dangers aren’t Terminators stomping around the world with machine guns. They are, instead, things like racial discrimination because of an intrinsic bias of algorithms. It’s looking at the implications for privacy of generative AI being able to perfectly impersonate a person. It’s the legal questions of accountability – if an AI makes a major error which leads to death, for example, who exactly is responsible?

But hey, I guess tech ethics is the enemy, right?

So why an EO? In part, I think the AI boosters only have themselves to blame. You can’t go around saying that AI is the most transformative technology since the invention of the PC and stoking the stock markets by claiming the impact will all be in the next couple of years and not be surprised if a government uses the tools it has to act expeditiously. Silicon Valley types constantly laugh at the slowness of the Federal government. Complaining when it does something quickly seems a bit rich. “Move fast and break stuff” sure – but not when it’s their gigantic wealth that might be the thing that gets broken.

Sinofsky also highlights the nay-sayers of the past, including posting some pictures of books which drew attention to the dangers of computers. The problem is some of those books are turning out to be correct: David Burnham’s The Rise of the Computer State looks pretty prescient in a world of ubiquitous surveillance where governments are encouraging police forces to make more use of facial recognition software, even though it discriminates against minorities because it finds it hard to recognise black faces. Arthur R. Miller may have been on to something, too, when he titled his book The Assault on Privacy.

Sinofsky gets to the heart of what ails him in a single paragraph:

Section I of the EO says it all right up front. This is not a document about innovation. It is about stifling innovation. It is not about fostering competition or free markets but about controlling them a priori. It is not about regulating known problems but preventing problems that don’t yet exist from existing.

To which I would respond: “great! It’s about time!”

There is a myth in Silicon Valley that innovation is somehow an unalloyed good which must always be protected and should never be regulated, lest we stop some world-shaking discovery. It doesn’t take 20 seconds of thinking – or even any understanding of history – to see that’s not true. Yes, experimentation is how we learn, how we discover new things which benefit us all. But there are no spheres of knowledge outside possibly the humanities where that is completely unregulated. If you want to do nuclear research, good luck with getting a permit to run your experimental reactor in the middle of a city. If you would like to do experimental chemistry, you’re going to be on the wrong side of the law if you do it in your garage.

All of those things “stifle innovation”. All of them are entirely justified. Given the world-changing hype – created by technology business people – around AI, they really should get used to a little stifling too.

As for the idea that this is “preventing problems that don’t exist from existing”… that is precisely what we pay our taxes to do. We spend billions on defence to prevent the problem of someone dropping big bombs on our cities. We pay for education, so we won’t have the problem of a stupid population which votes in a charlatan in the future (why do you think the far right hates education?)

Good business leaders talk all the time about how proactive action prevents costly issues in the future. They scan horizons, and act decisively and early to make sure their businesses survive. The idea that the government should only react, especially when that’s usually too late, is just bizarre.

At one point, Sinofsky sings the praises of science fiction:

The best, enduring, and most thoughtful writers who most eloquently expressed the fragility and risks of technology also saw technology as the answer to forward progress. They did not seek to pre-regulate the problems but to innovate our way out of problems. In all cases, we would not have gotten to the problems on display without the optimism of innovation. There would be no problem with an onboard computer if the ship had already not traveled the far reaches of the universe.

It’s a mark of the Silicon Valley mind-set that he appears to forget the rather obvious point that this was all made-up stuff. 2001 wasn’t real. Star Trek was not real.

Sinofsky then spends some time arguing that the government isn’t “compelled” to act, as AI is actually not moving that quickly:

No matter how fast you believe AI is advancing, it is not advancing at the exponential rates we saw in microprocessors as we all know today as Moore’s Law or the growth of data storage that made database technology possible, or the number of connected nodes on the internet starting in 1994 due to the WWW and browser.

All well and good, but not true: a Stanford study from 2019 found that AI computational power was advancing faster than processor development, and that was before the massive boost from the current AI frenzy. Intel has noted the speed at which AI programs can “train” themselves doubles every four months, compared to the 24 months that Moore’s Law predicted for processor speed.

Towards the end, of course, Sinofsky lapses into Andreessen-style gibberish:

The Order is about restricting the “We” to the government and constraining the “We” that is the people. Let that sink in.

Making “the people” synonymous with “extremely rich billionaires and their companies” is, of course, one of the tricks that the rich play again and again and again. AI is being created to enrich the already rich. It requires resources in computing power, which means my only option for accessing it is to rent time on someone else’s computer. It reinforces technofeudalism. Of course, Silicon Valley, which wants to make sure all of us pay a tithe to them, loves it.

It’s time that we have some assertion of democratic control over the forces that shape our lives. The Silicon Valley fat cats don’t like it. That, on its own, tells me that regulating AI is probably a good thing.

Weeknote, Sunday 5th November

Time passes. The highlight of this week was at the Royal Festival Hall on Monday, when we drove into London (more on that in a moment) to see Brian Eno and the Baltic Sea Philharmonic perform Eno’s album The Ship along with a handful of other songs.

It was of course brilliant, and incredibly moving. One of the additional songs that they performed was Bone Bomb from 2005’s Another Day on Earth, which is a song rooted in the testimonies of two people: a teenage Palestinian girl who became a suicide bomber, and an Israeli doctor who talked about how he had learned to pull fragments of the bones of suicide bombers from the bodies of their victims. It was incredibly affecting: I cried.

Eno is one of the artistic anchor points of my life. I first ran into his work in the early 80s, when in my teens I bought a second-hand copy of Another Green World and instantly knew that I wanted to be able to make art like that. I never quite succeeded in that aim – whatever my writing is, it’s not like Eno’s! But he’s always been an inspiration in the way that he has had the fearlessness to do what he wanted to do without worrying too much about either immediate ability or the artificial boundaries which people set between the different creative domains.

Once a year, I reread his A Year with Swollen Appendices, which I think should be on the required reading list for any course in any creative field. No matter what you do creatively, you will get something out of it. You might not like Eno more at the end of it, but I think that’s actually to Eno’s credit.

As I mentioned, we drove rather than getting the train. Environmentally of course that is a poor decision. But it’s also literally half the price of travelling when there are two people, even including the parking and ULEZ charge. We drive to Woolwich and get the Elizabeth line from there; the fuel costs about £10, compared to about £80 for two people to get the train. I remember reading that the line to Canterbury is, per mile, the most expensive passenger railway in the world, and I can believe it.

However, this journey turned into a rather more expensive one, because on the way back we blew a tyre on the M2 and had to call recovery to pick us and the car up. Unfortunately our coverage had also run out, which meant that the total cost of recovery was just north of £200. Plus, of course, the tyre needed replacing (another £80). The God of Nature got their revenge.

A couple of weeks ago I started a Substack. I wanted to create a series of posts which look at how technology is impacting on publishing, and I have started with a focus on how the main sources of traffic for publishers – Google, Facebook – are going to fade in importance over the next few years as they begin to keep more people on their own pages and produce more immediate answers to queries using large language models (LLMs, which I am trying not to call AI – because while they are definitely artificial, they are not intelligent).

I think the audience development community of which I was (am?) a part is, at least publicly, in a bit of denial about this. The reaction to the article on The Verge about SEO’s impact on the web was a good demonstration of this: a lot of SEO people were very defensive about it, which is never a good look (if you’re confident about your work, you don’t get prickly when someone doesn’t understand what you do). I think I’m going to write something about that this week.

Some people have asked about Substack and why I’m using it. The answer is mostly that it’s much easier to get an audience there than it is to create a standalone newsletter. Substack does part of the work of promoting it for you, and it does work. That said, I also understand that some people have an ethical (and practical) objection to using a platform like Substack, so I’m going to create an alternative way of signing up this week somewhere else (probably Buttondown). It means more work for me of course, but that’s fine: and it also gives me a backup for when Substack inevitably starts to enshittify (which will be the moment you’re no longer able to export your subscriber list to move to another service).

Three things that caught my attention this week

  1. I feel like I end up recommending whatever Cory has written every week, but this week’s article on big tech’s “attention rents” really did knock it out of the park.
  2. The Guardian’s interview with Naomi Alderman was also brilliant. But that’s because Naomi is brilliant. We have only met once, but I have absolutely admired her ever since. Amongst the many clever and warm-hearted people I know, she’s pretty much top of the list.
  3. It’s been interesting to see how little reaction there has been to Sam Bankman-Fried’s inevitable guilty verdict from the Silicon Valley rich dude posse, but it makes sense: they want to portray him as simply a fraudster who got caught. The trouble is, he’s one of their creations who got caught.

Importing Apple Notes into Obsidian is now easy

Obsidian’s Importer Plugin Lets You Move Your Apple Notes to Any Note-Taking App That Supports Markdown - MacStories:

Apple Notes doesn’t have an export option. Instead, as Obsidian’s blog post on the Importer plugin update explains, it stores your notes in a local SQLite database. The format isn’t documented, but the developers of the plugin were able to reverse-engineer it to allow users to move notes and their attachments out of Notes and into two folders: one with Markdown versions of your notes and the other with the files attached to your notes. The folder with your notes includes subfolders that match any folders you set up in Notes, too.

This is just outstanding work from the Obsidian team. There are a couple of limitations, mostly that it can’t import password protected notes (obviously), but I’ve tested it and it worked well.
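Since the format isn’t documented, the plugin’s developers had to work the schema out for themselves. As a rough illustration of where that kind of reverse-engineering starts – the function below is my own sketch, not part of the plugin – any SQLite file can be asked to describe its own tables:

```python
import sqlite3

def list_tables(db_path):
    """Return (table name, schema SQL) pairs declared in an SQLite
    file's built-in sqlite_master catalogue."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
    finally:
        con.close()
```

From there it’s a matter of poking at the columns to work out which ones hold note text and which hold attachments – the genuinely hard part, and the bit the Obsidian developers actually did.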

Related: undocumented SQLite databases should not be the way that a multi-gazillion dollar corporation is storing valuable data.

Who would have thought Amazon would behave like this?

Amazon deliberately deleted messages to hide dodgy business practices:

The FTC also alleges that Amazon tried to impede its investigation into the company’s business practices. “Amazon executives systematically and intentionally deleted internal communications using the ‘disappearing message’ feature of the Signal messaging app. Amazon prejudicially destroyed more than two years’ worth of such communications—from June 2019 to at least early 2022—despite Plaintiffs’ instructing Amazon not to do so.”

And the answer to the headline is, of course, “anyone that’s been paying attention”.

AI content: Publishers' next burning platform moment

LLMs remove a key competitive advantage of publishers. You need to find a new one.

It still surprises me that I’m old enough to have been part of the transition from print publishing to digital, but what surprises me more is that publishers are again making some of the same mistakes they made in that early internet era. But this time, it’s about the use of large language models to generate content, and it’s even being made by digital natives.

A little bit of history is probably useful here. Back in the mid to late 1990s, many publishers saw online content in terms of its ability to reduce their costs. Paper, printing and distribution of physical magazines were expensive. Publishing content online, though, was basically free. This, the theory went, would allow publishers to cut those costs and make more money.

What most publishers didn’t understand was that the high costs of production associated with print were their main advantage because they acted as a barrier to entry for new competitors. Starting a magazine was hard: you not only had to have enough capital to print and distribute the thing, you also needed access to news-stand distribution, which in the UK meant working with big distributors who had to be persuaded to stock you. You needed a sales team to sell enough advertising to support it, and they needed contact books thick enough to get their feet in the doors. Magazine publishing was expensive, and only large publishers were able to get it done at scale.

By the mid-1990s, though, anyone could publish online. All those competitive advantages disappeared within a couple of years. You could publish easily using platforms like Blogger, WordPress, or even Myspace. You could get ad revenue from systems like Google Ads, without a sales team of any sort. Not only that, but you could get your content seen via Google search and social platforms.

It took publishers a long time to realise that the old barriers to entry no longer protected them. Some publishers still act like they think they do, and so appear consistently dazzled when a new platform comes along and makes individuals who take advantage of it into millionaires. TikTok is the latest, but it’s by no means the first. Online was a burning platform moment for publishers, and some of them took far too long to see it.

The next burning platform

The ability of large language models (LLMs) like ChatGPT to create content is, of course, being seized on by publishers who see it as a method of creating editorial content without having to pay anyone to do it – or, at least, by paying fewer people to do it (and probably cheaper ones too – that was another outcome of the move from print to digital). If you’re a publisher reading that and shaking your head, thinking “well that’s not what we’re doing” I am going to give you a small monkey side eye because we all know that if you’re not thinking that way, your CFO probably is:

There’s nothing wrong with using new technology to reduce costs, as long as you retain your competitive advantage. And here’s where things are difficult for publishers because what LLMs do is similar to what happened with web publishing in the 1990s: it removes the competitive advantage of publishers in the creation of content, just as the web removed their advantage in publishing and distributing it. It is the next step in the democratisation of publishing.

In the early internet publishing era, anyone could create any content and put it online, but to be successful they needed to have the expertise to write the content in the first place. That’s why niches like technology publishing were impacted early and heavily: there was plenty of expertise out there, and suddenly those experts could create content directly, without an intermediary.

Now, thanks to LLMs, anyone in the proverbial bedroom can create swathes of “good enough” content on any topic they want. They can churn out hundreds of average pieces about anything, just by taking a list of the most popular search queries in that topic as their starting point. They’re not flawless, but they’re good enough, particularly to answer the kinds of search queries which publishers have used to generate traffic at scale.

This is why, for publishers, AI content creation is another burning platform moment. Combine it with the move towards providing more answers directly on search pages, and you have a one-two punch to publisher traffic which Mike Tyson would be proud of.

Of course, publishers can use LLMs too. But, as with early internet publishing, their size means they can neither move fast enough nor keep fixed costs low enough to make it work. If a proverbial 16-year-old can create an article with ChatGPT on “10 things you didn’t know about Mila Kunis” at the same speed as a celebrity magazine, at the same quality, the magazine loses even if it has used technology to eliminate roles and cut its costs. Because, unlike our 16-year-old, it has big fixed costs: offices, equipment, pensions, you name it. And it has margins to protect because the stock market expects to see revenue growth every year.

Regaining competitive advantage

So what can publishers do to retain their competitive advantage? There really is no point in trying to pretend that the AI genie doesn’t exist, in the same way that publishers couldn’t pretend in the 90s that people would just carry on buying huge volumes of print.

Nor will legal claims aimed at the likes of OpenAI, Google and Microsoft succeed. Yes, your content has been scraped to create the language models in the first place. But given the result in the Authors Guild v. Google case over Google Books, I expect courts to hold that this kind of use is transformative, and therefore fair use. Either way, it will be tied up in the legal system for far too long to make a difference.

Some have suggested that the way forward will be private large language models built solely using the corpus of text publishers hold. There are a few issues with this, but the biggest one is simply that the horse has bolted. OpenAI, Google and others have already trained their models on everything you have published online to date. They probably even have access to content which you no longer have. How many redirects of old, outdated content do you have in place where the original no longer exists? How many of your articles now only exist in the Wayback Machine?

Instead, the only option for publishers is to focus on creating content of a higher quality than any current LLM. You cannot gain competitive advantage at the cheap, low-cost end of the market. Trying to do so will not only make you vulnerable to anyone else with the same tools (at $20 a month) but also devalue your brand over the long term.

Creating higher quality content means employing people, which is why that urge to use LLMs to replace your editorial teams will actually undermine the ability of publishers to survive. Putting that cost saving towards your bottom line today is a guarantee that you will be out-competed and lose revenue in the future.

So what can you do with LLMs? The most important thing is that LLMs can be used as a tool to amplify the creativity and ability of editorial teams. They are most useful as what Steve Jobs used to call “a bicycle for the mind”: an LLM can give you a starting point, suggest an outline on any topic, or rewrite a headline 100 times using the word “crow”, and it never tires of doing so.

If you’re a publisher, you probably still have decades’ worth of experience, context, contacts and knowledge of audiences in your editorial teams. Train them on how to use LLMs to amplify their creativity (and if you want some help with that, email me!)

You’re going to have to change your content strategy to adapt to the new world of falling Google traffic anyway. LLMs should be seen as a chance to exit the market for low-quality, high-volume content.

Weeknote, Sunday 29th October 2023

An abbreviated weeknote this time, as I've not long got back from Orford Ness.

Orford Ness is a strange and interesting place. Used by the air force as a bombing range and a research centre, it has been partly rewilded, but with the unmistakable detritus of military and industrial use. In some ways, it reminded me of parts of the industrial edges of Derby combined with the brutal flat farmland of southern Derbyshire. But with shingle. Lots and lots of shingle. It's stark and beautiful, and I recommend a visit.

I started scribbling down a story based in part there last night, which I want to outline further. I'm going to crack into writing a short novel in November and see where I get with it: I would like to get a draft completed, although as I don't yet have a plot outline I'm a bit behind already. That will be this week's work.

Also on the agenda for this week is the second weekly post on my Substack, which focuses on the intersection between technology and the publishing business. Last week I posted about the impact of AI-driven changes in search on the ability of publishers to get traffic, the short version of which is “oh bugger”. There's no doubt in my mind that Google and Facebook really are intent on answering more queries without sending traffic to anyone else. That raises some huge problems, but there are ways out.

This week is all about using AI to create content, and the threat that poses to publishers. “Threat,” you're saying, “isn't it an opportunity?” Well, no – and tomorrow I'll explain why.

The three things which most caught my attention

  1. How tiny Qatar hosts the leaders of Hamas. In among the entirely correct condemnation of Hamas, what's being ignored is the role of “friendly” countries in “hosting” the Hamas leadership. Qatar, a country which has a track record of human rights abuses as long as your arm, gets rewarded with hosting World Cups and much more while it materially supports terrorism. Why does the West ignore this? In Britain's case, perhaps because of the £40bn or so of “investment” the country makes – which mostly means buying and inflating property prices, benefiting our Tory masters.
  2. Many people jumped on the story that Spotify made higher-than-expected profits, citing the top-line number of around 1bn euro earnings. What they didn't cite was the actual profit: just 32m euro. Bearing in mind that this came after a quarter of crackdowns on password sharing, large increases in subscribers, and price rises, it's hard to see how Spotify will ever be a seriously profitable business.
  3. All the Whole Earth Catalog is now available online. Nostalgia in a bucket load.

DOJ probing Tesla’s EV range cheating

DOJ probing Tesla’s EV range after reports of exaggerated numbers - The Verge:

The US Department of Justice (DOJ) is investigating the range of Tesla’s electric vehicles after reports surfaced that the company was relying on exaggerated numbers.

In documents filed with the Securities and Exchange Commission, Tesla said that it had “received requests for information, including subpoenas from the DOJ, regarding certain matters associated with personal benefits, related parties, vehicle range and personnel decisions.”

This follows on from a Reuters report earlier this year, which found Tesla was getting so many complaints about range that it was cancelling service-centre appointments for customers with the problem:

According to Reuters, there was nothing actually wrong with the vehicle’s battery. Rather, Tesla had allegedly created software to rig its driving range estimates to show a rosier picture. This led to thousands of customers seeking service appointments to figure out what was wrong with their vehicles. But because the vehicle was working as intended, Tesla’s diversion team simply canceled all the appointments.

So Tesla created software which gave a falsely optimistic reading of battery range, then, when people spotted it, simply cancelled their service appointments.

It’s worth noting that when VW was caught cheating on its emissions tests – using a defeat device to detect when a car was being tested and artificially improve the results – it ended up being fined tens of billions of dollars.

This isn’t on quite that scale, but regulators tend to take a very dim view of cheating customers. It’s quite possible this will cost Tesla billions.

Anyone willing to bet that it will turn out this was done at Elon Musk’s insistence? And will that be the final nail in the coffin of his reputation?

SEO will be over for publishers. You need to adapt.

Position one for a query is no longer close to enough

I don't know of a single person in publishing who believes that large language models (LLMs) aren't going to have a profound impact on the industry. But most of the attention has been on using them to create content, something which many publishers see as a way of increasing efficiency (by which they usually mean reducing expensive headcount).

Whether that is actually possible or desirable is a topic for another time, but what I want to focus on is the other side of AI: what its adoption by Google is going to do to the traffic to publisher sites, and how we should be changing our content strategies to respond.

Google's large language models

It's worth starting by being clear about how Google is using LLMs. The company has two products which use large language models to deliver results for users. The first, and probably the best known, is Bard, which is similar to ChatGPT in that it uses a conversational interface: users ask questions or give prompts in natural language, and the program responds.

The second – and the one which, I think, should be most concerning to publishers – is Search Generative Experience (SGE). SGE is currently in the experimental stage, but will ultimately deliver answers, generated by its large language model, directly at the top of Google's search results.

As you can see from the example, SGE takes up a lot of real estate in the query result, and delivers a complete answer based on what Google “knows”. Although it gives citations, there is no need to click on them if all you want is the answer to a query.

How this affects publishers

Obviously, anything which sits at the top of search results is going to impact the amount of traffic which clicks through to publisher sites underneath. And this is potentially worse than anything we have seen before: if the answer to the query is given on Google's page, why would anyone bother to scroll down and click through?

This means the much-fought-over positions one to three will be far less effective than ever before, and there will be a big decline in publisher traffic.

The impact on different kinds of content

It is likely that some kinds of content will be impacted more than others. Answers to questions are an obvious one, and in 2017 they accounted for 8% of searches. That is likely to have grown already and grow still further as users get used to being able to ask machines questions and get good quality tailored answers.

But in its article on SGE, Google highlights a second area where publishers are likely to see a major impact: shopping. Many publishers have put significant effort into creating content focused on affiliate revenue, with some seeing affiliate overtaking advertising as a source of revenue. Affiliate content is almost always designed to capture traffic via search, for the simple reason that buying products usually starts with a Google search. An SGE-driven shopping search experience will ultimately bypass publishers and drive traffic direct to the retailer, with the AI making individually tailored recommendations on what to buy.

This threatens to be disastrous for publishers. Effectively, SGE delivers a one-two punch of reduced traffic as more search queries are answered on the results page, plus reduced traffic to and revenue from affiliate pages.

What publishers should do

SGE is currently in the experimental stage, which means publishers shouldn't see any significant impact for now. But there is a clear direction here: more answers to search queries will be delivered without any click-through to publishers. And product shopping queries are going to become something which Google channels to retailers (who, by complete coincidence, are also advertisers) rather than publishers (who, by and large, are not).

I estimate that publishers have a window of between three and five years to change content strategies to adapt to this new world, depending on the speed of user adoption. It could be faster: much will depend on how quickly Google's LLM work starts to move from an experiment to delivering real results.

The long-term answer for publishers is to reduce exposure to Google as a source of traffic. That's going to be tough: almost every site I have worked on relied on Google for between 60% and 90% of its traffic. And the more the site was focused on affiliate revenue and e-commerce, the higher that percentage was.

The answer is to focus on increasing your level of direct traffic, making your site a destination for content rather than something users hit once and bounce away from. Learn lessons from marketing: treat every piece of content you create as an opportunity to deepen your relationship with your audience.

There are five things I would recommend publishers start doing today:

  1. Refocus your KPIs and OKRs to be about deepening relationships, not just traffic. Focus on repeat visits and sign-ups. Look to increase the number of qualified email addresses you have (and whatever you do, don't succumb to the temptation to capture more data than you need. If you deliver value, you will capture more over time – but all you need now is a person's email address).

  2. Reevaluate your search strategy and focus on topics with complexity. The more complex the content, the higher its quality, the less likely it is that an LLM can deliver a good quality version of it. Expertise and depth will be essential, and complex topic areas might be the “last person standing” when it comes to Google searches which work for publishers.

  3. If you have three to five year revenue forecasts, ramp affiliate revenue down over time rather than predicting growth. The era of affiliate revenue as a major contributor will be over. Use the revenue you are getting from it to bootstrap other areas.

  4. Heavily invest in newsletters. And whatever you do, don't consider them to be a place for advertising. Nothing creeps users out more than thinking they are signing up for interesting content only to find it chock-full of ads or sponsored content.

  5. Don't think that AI-generated content is going to “save” you. Many publishers are looking at content created by LLMs as a way of lowering costs. It will. But it will also put you out of business. Remember that any content you can create with an LLM can be done better by Google at the top of its results pages. What publishers have in their favour is human talent, creativity and expertise. The more you lose that by trying to use LLMs to cut costs, the smaller your competitive advantage.

Next week I will return to that last topic, and look at the mirage of LLM content and why it's a death-trap for publishers.

China launches investigation into iPhone maker Foxconn, says state media

China launches investigation into iPhone maker Foxconn, says state media:

China has launched an investigation into Apple iPhone maker Foxconn over tax and land use, Chinese state media reported on Sunday. The Global Times, citing anonymous sources, said tax authorities inspected Foxconn’s sites in the provinces of Guangdong and Jiangsu and natural resources officials had inspected sites in Henan and Hubei… The Global Times article quoted an expert saying “Taiwan-funded enterprises, including Foxconn . . . should also assume corresponding social responsibilities and play a positive role in promoting the peaceful development of cross-strait relations”.

This is a very big deal and should be keeping Tim Cook awake at night. Effectively, it’s a small shot across the bows for Foxconn, a reminder that without the good graces of the Chinese government, it can’t exist.

Weeknote, Sunday 22 October 2023

It's been a while...

It’s been a while. I have missed the last couple of weeks not because I was too busy to write, but almost the opposite: I have felt like nothing much has happened.

Of course, that isn’t true. It’s never really true that nothing is happening in your life, but when you’re not working, what tends to happen is that the days elide into each other. The rhythm of most people’s life is work, or child-rearing, or the climbing frame of domesticity which they have erected around their time.

I haven’t really yet cultivated that. I have had no work to do other than to make myself get up and write something every day. We have no children to depend on our timekeeping. And keeping house has never been a routine for either of us.

The commemoration this weekend has been that of three months since I last had to get up in the morning, do eight hours of work, and sign off from Teams. I can’t say I haven’t enjoyed it. Having nothing to do, no one relying on your input to get on with their lives, is something I can recommend to anyone who wants to avoid waking up one day and asking “what the hell happened to me?” It provides that thing we most lack as we dance busily through life: perspective.

So, what new perspective on my life have I found? First, that I have a kind of pastoral radicalism, a communism-not-Marxism which believes in the collective good. That sounds abstract, but I think it’s important. It’s a deep and abiding value, and we live in an age when values are used as a debased common currency, but in actuality are as ephemeral and short-lived as muons, decaying quickly into more stable and entrenched positions.

The second thing I have come to understand is how deeply rooted impostor syndrome is in my life. I have always spent time denying my role in what I have achieved (at one point, one of my managers made “blowing my trumpet” a goal for the year because of my habit of deflecting praise). Because of this, I am not kind to myself in any meaningful way. Being forced to just stop has allowed me to start the process of letting some of this go.

The act of writing can be both an antidote to and a trigger for impostor syndrome. Writers crave the validation of an audience because it’s the one moment when the feelings of fraudulence are pushed into the shadows. But the fear of not living up to expectations, of having no originality, of creating nothing of value, is also right there, all the time.

I have thought a lot about this over the past couple of days. We were away, first in Hastings (Kim was teaching a life drawing class there) and then Eastbourne, seeing the Turner Prize show. If you get the chance, go: Rory Pilgrim’s Rafts made me cry, as did Barbara Walker’s work. It reminded me that art is emotion, and it means that I really do have to tap into my emotions to make mine work. More of that, I suspect, over the coming months.

Meanwhile, at some point I will have to actually get some kind of income or other. I have a few more months when I don’t **need** to work, but at some point money will once again become a thing of concern, rather than an abstraction which I can deal with later. One learning about money: I need much less of it than I would have thought a few months ago. Debt, it turns out, robs you of your freedom quite effectively because you have to earn more than you need to pay back someone for the time when you couldn’t earn all that you required. I’m free of debt now, and that feels like an unshackling.

Things I have been reading this week

I finished Gary Gibson’s Europa Deep in two gluttonous sittings. It’s a neat, tidy and highly enjoyable hard SF story, and it reminded me how much of the SF genre is currently playing with the tropes of thrillers and crime drama. I need to think a bit more about this because somewhere in the race to make SF adhere to the structures, tropes and pacing of the thriller, something – quite a lot – is lost.

Reading Hilary Mantel’s A memoir of my former self feels like a delightful indulgence. It’s a collection of Mantel’s extensive back-catalogue of non-fiction, created because she developed the habit early in her career of writing for newspapers, periodicals, and magazines as well as books. It wasn’t really for fun: it was a survival mechanism because writing fiction (then as now) was not really enough to live on, at least until you become the kind of celebrated and storied writer Mantel grew to be.

I’m glad she had to do it because she applied her mind to it and the results are spectacular. In the first piece, “On the one hand”, she writes about the difference between fiction and journalism:

Fiction isn't made by scraping the bones of topicality for the last shreds and sinews, to be processed into mechanically recovered prose. Like journalism, it deals in ideas as well as facts, but also in metaphors, symbols and myths. It multiplies ambiguity. It's about the particular, which suggests the general: about inner meaning, seen with the inner eye, always glimpsed, always vanishing, always more or less baffling, and scuffled on to the page hesitantly, furtively, transgressively, by night and with the wrong hand.

It’s great. You should read it.


The new Apple Pencil

Apple has released a new Pencil for iPad and it’s weird. It looks like the Second Generation Pencil (the one which charges by sticking to the side of the iPad Pro or current Air). And it will attach there. But it won’t charge while attached – instead, it charges via a cable through a hidden USB-C port.

Oh and it’s not pressure sensitive, which makes it worse for drawing than the old Pencil which charged via Lightning.

It is, though, £79 rather than the ONE HUNDRED AND THIRTY NINE POUNDS the second generation Pencil will cost. So that’s one thing.

Coming soon

This is Ian Betteridge's Three to Five.

Marc Andreessen's manifesto

It would take a far, far longer post than I’m prepared to spend my time writing to go through Marc Andreessen’s “Techno-Optimist Manifesto” paragraph by awful paragraph, but a few points probably won’t go amiss.

- If you’re going to approvingly paraphrase “a manifesto of a different time and place”, you might want to check that said manifesto’s author wasn’t an early member of Mussolini’s fascist party.

- Writing “we believe technology is universalist. Technology doesn’t care about your ethnicity, race, religion, national origin, gender, sexuality, political views,” and then, two paragraphs later, “We believe America and her allies should be strong and not weak” either shows you have no idea how to write, are being entirely disingenuous, or are simply too stupid to think except in blocks of 240 characters. Whichever it is, get an editor to help.

- If you are going to talk about the Greek notion of arete then having an understanding of its relationship to class in Greek society might be a good idea, too. Aristocrats were assumed, by definition, to be exemplars of arete. It wasn’t something that thetes like me would have.

- Believing that techno-optimism “is a material philosophy, not a political philosophy” while giving repeated examples of what even a first-year philosophy undergraduate would know was a political philosophy does not make you look smart.

I could go on – the whole thing is riddled with howlers – but really is there much point?

Thirty years ago, in a different life, I was a philosophy postgraduate student and taught first-year undergraduates their introduction to metaphysics and ethics. In the first term, every time, someone would turn in an essay which read like this, and you would have to patiently explain that they were going to have to rewrite it or fail, because philosophy does not mean writing down all the random thoughts you had while smoking that bundle of weed the night before the deadline.

This is the manifesto of an emotionally insecure man having a mid-life crisis as he realises that his life’s work is meaningless and all the gold and treasure he has accumulated will never make him happy. Mid-life crises in men are often surprisingly redolent of the emotional outpouring of pseudo-intellectual silliness that accompany late teenage, that first period of life when boys start to realise they are not the centre of the world and lash out at the injustice of it all.

Perhaps, then, it’s no surprise this reads like it was written by a 14-year-old and put on Pastebin. That it was written by a 52-year-old with billions of dollars at his disposal says more about the failure of capitalism to imbue life with meaning than Andreessen could possibly imagine.

EDIT: The first draft of this contained something about A16Z’s investment in Uber. In fact, they passed on Uber. But as if to make the point about the kind of technology which Andreessen believes will save the world as long as we never question it, let’s ask an AI...

[Screenshot: an AI’s response, 17 October 2023]

Publishers need to wake up to the truth about Google traffic

Google-Extended does not stop Google Search Generative Experience from using your site’s content (searchengineland.com)

Google explained that SGE is part of the Google Search experience; it is a search feature and thus it should work as how normal search directives work. “The context is that AI is built into Search, not bolted on, and integral to how Search functions, which is why robots.txt is the control to give web publishers the option to manage access to how their sites are crawled,” Google told us.
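The distinction Google is drawing can be seen in a robots.txt sketch. Google-Extended is the token Google publishes for opting out of Bard and Vertex AI training; as the quote explains, it does not touch SGE, which rides on ordinary Search crawling – so the only robots.txt way to keep content out of SGE is to block Googlebot itself, which also removes you from Search:

```
# Opts your content out of Bard / Vertex AI model training...
User-agent: Google-Extended
Disallow: /

# ...but SGE is part of normal Search, so keeping content out of it
# means blocking Googlebot – which also removes you from Search results.
User-agent: Googlebot
Disallow: /
```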

I’ve been using both Bard and Bing CoPilot a lot lately and the direction is clear: while AI-driven search tools will link to original sources as references, they are not going to send much traffic your way. The aim is to provide the answer to any query on the results page, not one more click away.

This has massive implications for publisher traffic, particularly for reviews and answers pages, which I think are most vulnerable to AI-driven answers. I’ve been using CoPilot for purchasing research and it’s great. I can start by asking it for, say, laptops under £1000 with good battery life. I can then have a conversation to interrogate it further about each product. It’s a superior experience to any web page I have ever used for that kind of product research.

Is it 100% accurate? No – but neither are a lot of reviews, particularly the kind of “best laptop for…” top tens that are written to hit the top of product searches on Google.

But it’s not just affiliate: search provides between 40% and 80% of publisher site traffic. And we have already seen Facebook traffic, once the other biggest referrer, die off.

Publishers can no longer rely on Facebook and Google for the bulk of their traffic. The time when content strategies should focus on them has passed. Instead, publishers need to focus on building a loyal audience with which they have direct relationships. The SEO era is coming to an end, at least for large chunks of traffic.

GitHub Copilot costs more per user than it charges

Big Tech Struggles to Turn AI Hype Into Profits - WSJ:

Individuals pay $10 a month for the AI assistant. In the first few months of this year, the company was losing on average more than $20 a month per user, according to a person familiar with the figures, who said some users were costing the company as much as $80 a month.
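A quick bit of back-of-the-envelope arithmetic on that quote (the subscription price, average loss and heavy-user cost are the WSJ's figures; the "implied cost" simply adds the stated loss to the price):

```python
# Figures from the WSJ report quoted above.
price = 10            # monthly Copilot subscription, USD
avg_loss = 20         # average monthly loss per user, USD ("more than $20")
heavy_user_cost = 80  # serving cost for the heaviest users, USD per month

avg_cost = price + avg_loss                # implied average cost to serve a user
heavy_user_loss = heavy_user_cost - price  # monthly loss on the heaviest users

print(f"Implied average cost per user: ${avg_cost}/month")  # $30/month
print(f"Loss on the heaviest users: ${heavy_user_loss}/month")  # $70/month
```

In other words, the average user costs roughly three times what they pay, and the heaviest users cost eight times as much.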

The first stage of the enshittification cycle is often to charge customers less than it costs to run the service, in order to acquire and lock in as many as possible. After that, at some point, you dump on them from a great height.