The MacBook Pro
The 16in MacBook Pro, which was effectively replaced by my MacBook Air M2, has been sitting in a corner for a while. I had wiped it completely – something that’s a bit of a saga in its own right – and intended to sell it.
The buyer I had in mind wasn’t able to take it as, unfortunately, they had some financial mishaps, and I would rather not sell it via eBay or the classifieds. So it has just sat there doing nothing.
I decided to set it up and use it for a while, just to remind myself of what it was like. It’s the last generation of Intel machine, and I bought it not long before the announcement of the M-series chips. Although it was the lowest-end 16in MacBook Pro, it’s still a pretty good computer and it seemed a shame for it to be doing nothing.
I’m glad I did, because it’s reminded me how much I like a big-screen laptop. It’s nowhere near as good for either performance or battery life as the Air, but it’s still more than fast enough for everything I need to do. And as I am unlikely to stray far from the house with it, I don’t need to worry too much about the battery.
Now I just have to think about a name for it because the default “Ian’s MacBook Pro” seems a little bit soulless.
Installing Brew
Whenever I get a new Mac or resurrect an old one, I start from scratch rather than reinstall from a backup. This lets me work out which applications I actually need. Because I like to try out many applications, I end up with a lot of software on my machines which I don’t actually use much.
The days when this mattered from the perspective of system maintenance are long gone. Most applications no longer spray extensions, libraries, or even (lord help us) DLLs all over your system. Even Linux has self-contained application installs now, thanks to technologies like AppImage, Flatpak and Snap.
But it’s still a waste of disk space and feels inelegant, so I set everything up with a clean slate and only install the things I like using.
One thing that always gets installed on any Mac is Brew, the package manager, which is the de facto standard for installing Unix apps on an Apple computer. macOS is, of course, based on Unix, but the default setup doesn’t include the kind of software which usually comes as standard – utilities like ffmpeg, for example.
You can install them, though, and Brew makes it easy. It’s a command-line tool which works in the same way as a regular Linux package manager, like DNF on Fedora or APT on Debian derivatives. Once you have installed Brew using a single line of commands, you can type brew install followed by the name of the software you want, and it will do all the installation you need.
Brew lets you fill the holes which Apple has left. For example, the first thing I install with it is wget, which isn’t part of the standard macOS and which I find very useful. I also add yt-dlp so I can download video from YouTube and other services, as well as get_iplayer to tap into the BBC’s archives.
There’s a lot more you can do with Brew, and if you are used to the command line I recommend it.
Weeknote, Sunday 3rd December 2023
How did it get to be December already? And how did it get so cold?
The Byte archive
I was a fairly religious reader of Byte magazine from the early 1980s until it finally bit the dust as a print publication in 1998. I always loved that it wasn’t focused on a single platform, but on “small computers” as a whole.
It also had the kind of deep technical content which I loved. If you wanted to know about new processors, the transputer, or something even more esoteric, Byte was a great place to keep informed.
It also had Jerry Pournelle. Science fiction writer, conservative, and holder (in later life) of some dubious views, Jerry was nonetheless one of the most influential early computer journalists. I loved his columns, which stretched out to 5,000 words or so per issue. They were written from the perspective of an ordinary computer user — albeit one who had the kind of knowledge required to run a computer in the days of the S100 bus and CP/M.
Thankfully, the Internet Archive has every issue of Byte, scanned and neatly labelled. Annoyingly though, there isn’t a single collection which has every issue in it, which means it’s not easy to just download everything.
And having local copies is vital for me, as I use DEVONthink for research, and it wants a local set of PDFs. So I have started putting together the definitive collection of every single issue, and once it’s done I will put them somewhere online, so people can download the whole set. It’s big – my incomplete version is about 7GB, and I estimate the full set is about 10GB – but at least they will be there.
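If you want to do something similar, here’s a rough sketch of the bulk download using the Internet Archive’s official internetarchive Python package. The search query and destination folder are illustrative assumptions rather than the exact ones I used – there’s no single collection with every issue, so you’ll still need to de-duplicate and fill in gaps by hand.

```python
# Rough sketch: bulk-download scanned Byte issues from the Internet Archive.
# Requires the official "internetarchive" package (pip install internetarchive).
# The search query and destination folder are illustrative, not definitive.
from internetarchive import search_items, download

QUERY = 'title:(byte magazine) AND mediatype:texts'  # assumed query; refine as needed
DEST = 'Byte-archive'

for result in search_items(QUERY):
    identifier = result['identifier']
    print(f'Fetching {identifier}...')
    # Only grab the PDF scans, skipping OCR text, thumbnails and other derivatives.
    download(identifier, glob_pattern='*.pdf', destdir=DEST, verbose=True)
```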
This took quite a while to do this week, and I'm pleased with the results.
Chanel
I went to see the Chanel show at the V&A – I don’t really like Chanel’s clothes that much, but her accessories are amazing and she had a really fantastic eye for patterns.
Seeing the collection emphasised that once she created the classic suit, so much of what she did was just more of the same. Milking a hit isn’t necessarily a bad thing, but the small card which notes she “attempted to extend the suit from day to evening wear” is a bit of a giveaway. It’s not that she did nothing except make more suits, but she was more than happy to keep churning out endless slight variants in a way which made her a lot of money.
It was a little disappointing that the exhibition basically skips over the nine years when she could not work in France because she was widely believed to have collaborated with the Nazis. She had an affair with a German officer and used his connections to protect a relative. Literally one sign, with no more than 30 words on it, and then skipping merrily on to her return in 1954.
In fact, there is more space devoted to the one document from 1943 which lists her as working with the resistance, although there is no documentation of (and no one remembers) exactly what she did with them.
There is, of course, far more documentation listing her as a Nazi agent. She definitely benefitted from the Germans’ Aryanization laws, which let her get control of her perfume business from the Jewish Wertheimer brothers.
There’s no doubt that Chanel collaborated, and that her high-placed contacts (Churchill, Duff Cooper, and many others) protected her after the war. None of this is mentioned, perhaps because once you understand what she did and what she was, it’s much less likely that you will just want to admire the pretty clothes.
I don’t think it’s possible to understand Chanel-the-person without considering that period of her life. And the exhibition doesn’t have the excuse that it’s solely about her influence on fashion (there’s surprisingly little which contextualises her in that sense). It ends when she dies, so it's not about Chanel the brand or even really her legacy.
In that sense, it’s a massive contrast to Diva, which is also on at the V&A and which managed to reduce me to tears when I saw it. Diva is a brilliant bit of curation in ways that Chanel is not.
However, if you do go to the V&A, get yourself a piece of the pear and caramel cake. It’s really rather fine.
Three things you should read this week
- The End of Elon Musk. In any rational world, Musk’s performance at the Dealbook conference would be the end of his career. It probably won’t be, unfortunately. But, as Magary notes, Musk “appeared both high and made of plywood”. He does not seem like a well man, and I don’t say that either lightly or with any pleasure.
- Speaking of Musk, the Cybertruck is here[1], and predictably it’s pricier and has less range than he claimed. Oh, and while the sides are bulletproof, as Musk said, the windows are not, which may prove an issue if someone is actually trying to kill you.
- And sticking with the theme of “people who really should grow up”, Basecamp lost a customer thanks to DHH’s nonsense. Why is it that people who crow loudest about “keeping politics out of work” so often bring their politics to work? Of course, what they actually mean is “keep your politics out of work”. It’s the same as Elon “Free Speech” Musk. Free for them, not for you.
This week I have been reading…
The news that greenhouse gas emissions have been soaring rather than reducing ended the week on a sour note for me, but it makes it more obvious than ever that capitalism isn’t going to deliver a future for humanity. So reading Tim Jackson’s Post Growth has been pretty timely. Highly recommended.
[1] Not actually here till 2024, or 2025 if you want the cheap model, and not here at all outside the US because it doesn’t meet any reasonable safety regulations.
Open extensions on Firefox for Android debut December 14 (but you can get a sneak peek today) | Mozilla Add-ons Community Blog
Open extensions on Firefox for Android debut December 14 (but you can get a sneak peek today):
Starting December 14, 2023, extensions marked as Android compatible on addons.mozilla.org (AMO) will be openly available to Firefox for Android users.
But not of course for iOS, because Apple doesn’t allow companies to use any rendering engine other than Safari’s webview. And Apple also hates the idea of extensions that aren’t themselves applications, so don’t expect them to make the lives of extension developers easy once the EU forces them to open things up a little.
How to get a glimpse of the post-Google future of search
What does the search engine of the future look like? Forget 10 blue links...
You can break down the creative process into three big chunks: research, creation and revision. What happens in each part depends largely on the kinds of content you’re creating, the platforms you are making the content for, and many other factors.
Like every journalist, I spent a lot of time using search on the web to help with that research phase. I was quick off the mark with it, and I learned to adapt my queries to the kinds of phrases which delivered high-quality results in Google. My Google-fu was second to none.
But that was the biggest point: like all nascent technologies, I had to adapt to it rather than the other way around. Google was great compared to what came before it, but it was still a dumb computer which required human knowledge to get the most out of it.
And Google was dumb in another way too: apart from spelling mistakes, it didn’t really help you refine what you were looking for based on the results you got. If you typed in “property law” you would get a mishmash of results for developers, office managers and homeowners. You would have to do another search, say for “property law homeowners”, to get an entirely different set of results that were tailored for you.
Google got better at using other information it knows about you (your IP address, your Google profile) to refine what it shows you. But it didn’t help you form the right query. It never asked you “hey, what aspects of property law are you interested in?” or gave you a list of more specific topics.
What’s more, what it “knew” about you was pretty useless. You couldn’t, at any point, tell it something which would really help it give you the kinds of results you wanted. You couldn’t, for example, tell it “I’m a technology journalist with a lot of experience, and I favour sources which come from established sites which mostly cover tech. I also like to get results from people who work for the companies that the query is about, so make sure you show those to me too. Oh, and I’m in the UK, so take that into account.”
And Google isn’t even that good now. Partly that’s down to the web itself being a much worse source of information. But that feels like a huge cop-out from a company whose mission is to “organise the world’s information and make it universally accessible and useful”. It sounds like what it is: a shrug, a way of saying that the company’s technology isn’t good enough to find “the good stuff”.
The search engine of the future should:
Be able to parse a natural language query and understand all its nuances. Remember how in the Knowledge Navigator video, our professor could ask just for “recent papers”?
Know not just the kind of information about you that's useful for the targeting of ads (yes Google, this is you) but also the nuances of who you are and be able to base its results on what you're likely to need.
Reply in natural language, including links to any sources it has used to give you answers.
If it's not sure about the kind of information you require, ask you for clarification: search should be a conversation.
For the past few weeks, I've been using Perplexity as my main search engine. And it comes about as close as is currently possible to that ideal search engine. If you create content of any kind, you should take a look at it.
Perplexity AI allows users to pose questions directly and receive concise, accurate answers backed up by a curated set of sources. It’s an “answer engine” powered by large language models (including both OpenAI’s GPT-4 and Anthropic’s Claude 2). The technology behind Perplexity AI involves an internal web browser that performs the user’s query in the background using Bing, then feeds the obtained information to the AI model to generate a response.
Basically, it uses an LLM-based model to create a prompt for a conventional search engine, does the search, finds answers and summarises what it's found in natural language, with links back to sources. But it also has a system it calls (confusingly) Copilot, which provides a more interactive and personalised search experience. It leverages OpenAI's GPT-4 model to guide users through their search process with interactive inputs, leading to more accurate and comprehensive responses.
Copilot is particularly useful for researching complex topics. It can go back and forth on the specific information users need before curating answers with links to websites and Wolfram Alpha data. It also has a strong summarisation ability and can sift through large texts to find the right answers to a user's question.
This kind of back-and-forth is obviously costly (especially as Copilot queries use GPT-4 rather than the cheaper GPT-3.5). To manage demand and the cost of accessing the advanced GPT-4 model, Perplexity AI limits users to five Copilot queries every four hours, or 600 a day if you are a paying “Pro” user.
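To make the pattern concrete, here’s a minimal sketch of that search-then-summarise loop. To be clear, this is not Perplexity’s actual implementation – the Bing Web Search endpoint, the GPT-4 model choice and the prompt are simply assumptions used to illustrate the idea of grounding an LLM’s answer in freshly fetched sources.

```python
# Minimal sketch of an "answer engine": search the web, then ask an LLM to
# summarise the results with numbered citations. Illustrative only; not how
# Perplexity actually works. Assumes BING_API_KEY and OPENAI_API_KEY are set.
import os
import requests
from openai import OpenAI

def answer(query: str) -> str:
    # Fetch a handful of results from the Bing Web Search API.
    resp = requests.get(
        "https://api.bing.microsoft.com/v7.0/search",
        headers={"Ocp-Apim-Subscription-Key": os.environ["BING_API_KEY"]},
        params={"q": query, "count": 5},
    )
    resp.raise_for_status()
    pages = resp.json().get("webPages", {}).get("value", [])
    sources = "\n".join(
        f"[{i + 1}] {p['name']} ({p['url']}): {p['snippet']}"
        for i, p in enumerate(pages)
    )

    # Ask the model to answer in natural language, citing the numbered sources.
    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer the question using only the "
             "sources provided, citing them as [n] and listing their URLs."},
            {"role": "user", "content": f"Question: {query}\n\nSources:\n{sources}"},
        ],
    )
    return completion.choices[0].message.content

print(answer("What is the current rate of deforestation in the Amazon?"))
```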
If you're not using Perplexity for research, I would strongly recommend giving it a go. And if you work for Google, get on the phone to Larry and tell him your company might need to spend a lot of money to buy Perplexity.
Latenote, Monday 27th November 2023
Between getting ridiculously excited about the goings-on at OpenAI, I didn't get a lot of writing done this week. There are definitely times when too much is going on in the tech world, and my old habits die hard: I have to keep up with it all.
I wrote a post on Substack with my take on it, from the perspective of the longer-term impact on creative professionals. And, given how fast things were moving, I ended up rewriting it three times. That was a good reminder not to cover breaking news in that newsletter!
In case you're interested, the focus of that newsletter is the three-to-five year perspective on how technology will impact on what we occasionally call “the creative industries”. That includes magazine publishing, of course, but also writing and creativity more broadly. Hopefully, it should be interesting.
On Sunday, we went out with the wonderful and super-clever Deb Chachra, who has just published her book How infrastructure works (and there's a great review of it here if you are interested). We tempted Deb out of London on a trip to Dungeness, which has both Derek Jarman's cottage and Dungeness A and B nuclear reactors. What's not to like about art and infrastructure?
And more art on Sunday night, as we went down to Folkestone for a talk by the brilliant and wise Jeremy Deller. If you don't know Deller's work, honestly, where have you been for the last 20 years? This is the third time we have done something Deller-related this year, having seen him before in London and also seen Acid Brass. 2023: Year of the Deller.
The three things which most caught my attention
- Commiserations to my old comrades in SEO, who are dealing with some pretty turbulent times. I promise that I didn't sabotage Google.
- Bill Gates wrote a long post about the way AI is going to change the way you use computers. Gates is right – large language models are just the precursor to what might look from some angles like the end of application software altogether.
- Bloomberg looked at the way Elon Musk has been radicalised by social media, adopting a world-view that's completely in the thrall of what we would have called the alt-right not that long ago.
Things I have been writing
There were three… no, actually four drafts of my post about what was going on at OpenAI and why you should care. I am never doing essays on breaking news again.
To give myself a break from all things Orford, I picked up a short story that I had left to one side, about a very strange doctor. Might finish that this week.
What the heck is going on at OpenAI (and why should I care?)
Confused? You should be. I'm deliberately not looking at Techmeme so I don't have to update this post for the fifth time.
Twenty-four hours ago, this was a thoroughly different post. Heck, twelve hours ago, it was a different post.
One of the things I told myself when making this Substack was that I wouldn’t focus on current events. My focus is on the longer term: the three-to-five-year time frame, for publishers, communications professionals and other assorted nerds.
But the shenanigans at OpenAI over the weekend suckered me in, and now I have had to rewrite this post three times (and whatever I write will probably be wrong tomorrow). Still, the drama goes on.
The drama that’s been happening at OpenAI does matter and might be a turning point in how AI and large language models develop over the coming years. It has some implications for Google – which means it is relevant for publisher traffic – and Microsoft – which means it is significant for the business processes which keep everything flowing.
What’s happened at OpenAI?
If you’ve not been following the story, here’s a timeline created by Perplexity (about which I will have more to say in the future). But the basics are that OpenAI’s board dismissed Sam Altman, its founder and CEO, alleging he had been less than truthful with them. Half the company then decided they wanted to leave. Microsoft’s Satya Nadella then claimed Altman would be joining his company, only to walk that back later in the day. Now Altman is going back to OpenAI as CEO, but not on the board, and there will be an “independent investigation” into what went on, something that might not totally exonerate Altman.
Confused? You should be. Everyone else is. Partly this drama comes down to the unusual structure of OpenAI, which at its heart is a non-profit company that doesn’t really give two hoots about growth or profits or any of the things most companies do. Partly it’s down to Altman basically pushing ahead as if this wasn’t true, then realising too late that it was.
What’s the long-term impact on future AI development?
OpenAI has been at the forefront of developing the kind of conversational large language models which everyone now thinks of as “AI”. It’s fair to say that before the June 2020 launch of GPT-3, LLMs were mostly of interest to academic researchers rather than publishers.
And a huge number of tools have been built on top of OpenAI’s technology. By 2021 there were over 300 tools using GPT, and that number has almost certainly gone up an order of magnitude since. And of course, Microsoft is building OpenAI tech into everything across its whole stack, from developer tools to business apps to data analysis.
If there’s one company that you don’t want to start acting like a rogue chatbot having a hallucination, it’s OpenAI.
And yet, because of Microsoft’s investment in the company and commitment to AI, it probably matters a lot less than it would have if this schism had happened three or four years ago. The $13bn it has put in since 2019 for an estimated 49% stake in the company and the fact it is integrating OpenAI tech into everything it does mean it has a lot to lose (and Satya Nadella does not like losing.)
Because of this, I think the past few days won’t have much impact on the longer-term future of AI. In fact, it could end up being a good thing, as it means Microsoft has committed to stepping in should OpenAI start to slip.
The greatest challenge for Microsoft was that, although it had perpetual licenses to OpenAI’s models and code, it didn’t own the tech outright, and it didn’t have the people in house. And, when you’re betting your company’s future on a technology, you’re always in a better position if you own what you need (something that publishers should take note of).
Partners are great, but if you’re locked into a single partner, and they have what you require, you’re never going to be the driver of your fate. Now, though, if Altman and the gang join, Microsoft effectively owns all it needs to do whatever it wants. It has the team. It has the intellectual property. Everything runs, and will continue to run, on Azure, and it has the financial muscle to invest in the huge amount of hardware required to make it available to more businesses.
The big question for me is how all this impacts on Google over the next few years. If Altman and half of OpenAI end up joining Microsoft, I think it weakens Google substantially: at that point, Microsoft owns everything it needs to power ahead with AI in all its products, and the more Microsoft integrates AI, the stronger a competitor it will be.
If, on the other hand, Altman goes back to OpenAI with more of a free hand to push the technology further and harder, Microsoft still benefits through its partnership, but to a lesser degree.
If I was running Google, I would be calling Aravind Srinivas and asking how much it would take to buy Perplexity. But that’s another story, maybe for next week.
"Journalism is picking up the phone"
Remembering the craft and process of original reporting can help build a loyal audience.
So far this week, I have looked at a couple of strategies for creating stand-out content over the coming years: hands-on reviews and real-life stories. There is a third area, and in a sense it’s about going back to the future and focusing on something that never truly went out of fashion: original reporting.
Back in 2008, my reserve arch enemy Danny O’Brien and I were debating what the difference was between blogging and “proper” journalism, and Danny ended up liking one of the ways I put it: that “journalism is when you pick up the phone”. Even then, that didn’t mean a literal phone – email was the hot communications thing. But it meant, as Danny put it, “journalism requires some actual original research, rather than just randomly googling or getting emailed something and writing it up as news.”
That’s the core of original reporting, and as Danny also pointed out, a great deal of what passes as editorial doesn’t meet that standard (opinion columnists of the UK media, stop looking so shifty).
Original reporting in any topic area is about uncovering truths, providing context, and delivering stories that matter to audiences. AI, while adept at aggregating and rephrasing existing information, lacks the ability to conduct investigative journalism, engage in ethical decision-making, and provide the human empathy that is often central to impactful storytelling. I would consider myself broadly an optimist about the developing capabilities of AI, and even I don’t think it’s likely to be able to do this in my lifetime.
And “picking up the phone” is definitely having something of a renaissance. Take, for example, the series that The Verge is currently working on under the label of “we only get one planet”. Digging into how Apple and others add to the mountain of e-waste while claiming to be on top of their environmental efforts takes a lot of work, and importantly, original research and interviews. The Verge might not be physically picking up the phone, but they’re more than living up to the spirit.
Obviously, investing in original reporting is expensive, and it can’t just be a moral imperative. It has to be a sound business strategy, too. First, audiences appreciate its value. According to a 2019 Pew Research survey, “about seven-in-ten U.S. adults (71%) say it is very important for journalists to do their own reporting, rather than relying on sources that do not do their own reporting, such as aggregators or social media. Another 22% say this is somewhat important, while just 6% say it is not too or not at all important.”
Original reporting can elevate a publisher's brand reputation and recognition, which can be a key to unlocking more direct traffic. In a saturated market, having a distinct journalistic voice and a reputation for in-depth reporting can be a significant differentiator.
Publications like The New York Times and The Guardian have successfully leveraged their reputations for quality journalism to build robust subscription or contribution-based revenue models, with The Guardian hitting record annual revenue this year. And, importantly for its long-term profitability, nearly half its traffic is direct (and its biggest search terms are branded ones).
One thing that’s worth noting: The Guardian’s strategy was a three-year plan. Do you have a three-year plan to diversify revenue, have a more direct relationship with your audience, and leave yourself less vulnerable to the whims of Google or Facebook?
Telling human stories: where AI ends and people begin
The second area where humans can do a better job than an LLM: real life storytelling
One of the best parts of my last year working at Bauer was getting to know the team which works on real life content. Real life, sometimes called true life or reader stories, focuses on stories derived from ordinary people caught up in extraordinary events – usually not the national news, but their own personal dramas.
There are many magazines whose focus is entirely real life, and you will have seen them on many supermarket shelves with multiple cover lines, often focused on shocking stories. But the key part about them, and the thing which differentiates them from tabloids, is that the stories are those told by the people involved in the drama. It's not third-person reporting: it is focused on first-person experience.
And now a confession: before I worked with that team, and I suspect like many journalists, my view of real life wasn't all that positive. I considered it to be cheap, and pretty low-end.
How wrong I was.
I worked with the team creating the content to implement a new planning system, which needed to capture every part of their story creation process. What I learned was how thorough their process is, and how much human care and attention they had to take when telling what were sometimes traumatic stories, working directly with the subject.
I don't think I have ever worked with a team that had a more thorough legal and fact-checking process, and I came away a bit awed by them. I ended up thinking that if all journalists operated with their level of professionalism and standards, the industry would be in a much better place.
Bringing the human into the story
Where does AI come into this? I talked earlier this week about how injecting more of a human, emotional element into reviews was a way to make them stand out in a field that AI is going to disrupt. Real life is a perfect example of a topic where it's difficult to ever see a large language model (LLM) being able to create the story.
An LLM can't do an interview, and because of the incredible sensitivity of the stories being told, I wouldn't trust a machine to write even a first draft of it. But there are aspects of the way that real life content is created which, I believe, can give lessons to every kind of journalism.
First, whatever your topic area, telling the human story is always going to be something that humans do better than machines. Build emotion and empathy into telling a personal story, rather than relating just the facts. That doesn’t just mean technique: yes, use emotional arcs, and yes, show don’t tell, but technique alone won’t bring across the way that the subject felt when going through whatever event they are describing.
On a three-to-five-year timescale, I would be looking to shift human journalists into telling more of these kinds of stories, regardless of your topic area. Remember that humans are empathic storytellers and focus on the emotion of the story. So, think about how you can change your content strategy to be more focused on the human story.
The process is the practice
Don't, though, be tempted to work on these kinds of stories with an ad hoc process. Process is important in journalism – but it is crucial if you want to do real life stories well.
To do this well, make sure you codify and document the process to a high level. Documenting the process is often something that journalists push back on because it's seen as stifling creativity, but that's not true at all. In fact, a documented process allows you to free up time to focus on creative tasks, rather than reinventing the wheel with every story.
And that is where you can start to think about how to use LLMs to streamline your processes and make them move faster. But this is a business process problem, rather than a creative one.
For example, if your pitching process involves creating a summary of a story, an LLM can write the summary – there's no need to waste a human's time doing it. Can you write a specialist GPT to check if a story has been used before? Can you use an LLM to query your content management system for similar stories you may have run in the past?
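As a sketch of that first example – and it is only a sketch, with the function name, model choice and word limit all assumptions about your own process – the pitch-summary step might look something like this:

```python
# Sketch: generate the short summary a pitching process needs from a full draft.
# The model, prompt and word limit are assumptions; adapt them to your own process.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def summarise_pitch(story_text: str, max_words: int = 150) -> str:
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You summarise story drafts for an editorial pitch meeting. "
                        f"Write a neutral summary of no more than {max_words} words."},
            {"role": "user", "content": story_text},
        ],
    )
    return completion.choices[0].message.content

# Usage: pull the draft from a file (or your CMS) and attach the summary to the pitch.
with open("draft.txt") as f:
    print(summarise_pitch(f.read()))
```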
If you are thinking about how to be a successful publisher in three to five years, you need to be looking at the process. If it's not documented – in detail – then make sure that's done. That can't be a one-off because a process is never a single entity fixed for all time. New technologies and new more efficient practices will come along, and someone needs to be responsible within your organisation for it.
So, ask yourself some questions:
Who, in my company, is directly responsible for documenting and maintaining our editorial and audience development processes?
Where are they documented?
How often are they maintained?
Are they transparent? Does everyone know where they are?
Once you have a fully documented process, you can start to interrogate it for points where AI can be used to speed things up, where using natural language queries to a specialist model can improve the work. That way, you can leave humans to do the work they're best at: emotion, and storytelling.
What kinds of content can humans do better than AI?
Sometimes, you just need the human touch...
What kinds of content can humans do better than AI? The last few posts here have, I have to admit, been a bit of doom and gloom. I’ve looked at how conversational AI is going to squeeze search traffic to publisher sites, and at how adopting AI for content generation will remove the key competitive advantage of publishers.
But there are areas of content creation where publishers can use their ability to do things at scale and the talent they have to make great work that audiences will love.
I’ve broken this post out into three parts, covering three different kinds of content. Today, I’m going to look at one which is close to my heart: reviews. Tomorrow and Thursday I’ll look at two other examples where humans can win.
Doing reviews right
One of the points that I made last week was that affiliate content, in particular, was susceptible to the shift to conversational ways of working with computers. However, that doesn’t mean that reviews are going to disappear. Certain types of article are likely to remain an area where humans will continue to produce better content for other humans for the foreseeable future.
For many sites, creating content for affiliate purposes has involved a lot of round-up articles, often created at least in part with what gets called “desk-based research”. You are not reviewing a product you have in your hand, you are researching everything about it that a consumer could possibly need to know, and summarising it helpfully.
I’ve sometimes argued this was OK in certain circumstances, as long as you flag it and the amount of work that goes into the article is high. Just casting around for whatever is top-rated on Amazon doesn’t cut it because a reader can do that quickly themselves. But if you’re saving someone hours of time in research, you’re still performing a valuable service for them.
That kind of content isn’t going to survive the increased use of conversational AI because one thing that LLMs will be excellent at is ingesting lots of data and combining it into a cogent recommendation. LLMs can read every piece of Amazon feedback, every spec sheet and every piece of manufacturer data faster and more accurately than any human can. If your content is just research, it’s not going to be viable in the world of AI.
What will work is direct first-person experience of the product, written to focus on the less tangible things about it. An LLM can read a car spec sheet and tell you about its torque, but it can’t tell you how it feels to accelerate it around a corner. An LLM can look at a spec sheet for a laptop, but it can’t tell you how good the keyboard is to type on for extended periods.
If your editorial teams are focused on what I used to call “speeds, feeds and data” then part of your approach should be to shake up the way they write to get them closer to a more personal perspective. One way to do this is to change style.
Back when we launched Alphr at Dennis, one of the first changes I made to editorial style was to stop using the UK tech press’s traditional plural in reviews (“we tested this and found blah”) and shift to first person (“I tested this and found blah”). Shifting into first person forces the writer into a more subjectively human perspective on the product they’re looking at. It frees the writer from an overly objective point of view into a more personal experience, and that is something which will survive the world of LLMs. Don’t just say what the specs are: say what it feels like, as a human being, to use this product.
Tomorrow, I’m going to look at the second area I think is a clear “win” for human-generated content: the often maligned area of real life stories.
Weeknote, Sunday 12th November 2023
This felt like a busy week, perhaps because it actually was
On Monday I had a call with Peter Bittner, who publishes The Upgrade, a newsletter about generative AI for storytellers which I highly recommend. It was great to chew the fat a little about what I've been writing about on my newsletter, and also to think about a few things we might do together in the future.
Then on Thursday I caught up with Phil Clark, who has also recently left his corporate role and is working on a few interesting projects. Plus I spoke to Lucy Colback, who works for the FT, to talk about a project she's working on.
On Friday we headed down to Brighton for the weekend. Kim was doing a workshop on drawing (of course) and I took the opportunity to catch up with a couple of old friends, including my old Derby pal Kevin who I've known for 40 years. Forty bloody years. How does that even happen?
The three things which most caught my attention
- Here's something positive: the story of Manchester Mill, a subscription-based local news email in Manchester that's doing more than breaking even, while remaining independent, creating quality news, and not taking advertising.
- Tilda Swinton is just one of my favourite people. That's all.
- Mozilla wants to create a decentralised social network, based on Mastodon, that's actually easy for people to use.
Things I have been writing
Last week's Substack post looked at Apple's old Knowledge Navigator video and how computing is heading towards a conversational interaction model. This has some big implications for publishers, particularly those who have focused on giving "answers" to queries from Google: when you can effectively send an intelligent agent out to find the things you want via a conversation, web pages as we know them are largely redundant.
I wrote a post about Steven Sinofsky's criticism of regulating AI. I think Sinofsky is coming at this from a pretty naive perspective, but not one which is atypical of the kind of thinking you'll find amongst American tech boosters. It was ever thus: I feel when writing articles like this that it's just revisiting arguments I was having with the Wired crowd in the late 1990s. The era when "the long boom" was an article of faith, the era when George Gilder was being listened to seriously.
And that's not surprising, really. The kind of people who are loudly shouting about the need for corporate freedom to trample over rights (Marc Andreessen, Peter Thiel) grew up in that era and swallowed the Californian ideology whole. So did a lot of radicals who should have known better.
Things I have been reading
Having seen Brian Eno perform last week I'm working my way through A Year with Swollen Appendices, which is a sneaky book: the diary part is only a little over half of it, so just when you think you're coming to the end you have a lot of reading left to do. It's a good book though. Picking that up means I have had to put down Hilary Mantel's A Memoir of my Former Self, but that will be next on the list.
John G on Monica Chin's review of the Surface Laptop Go 3
Daring Fireball: Monica Chin on the Microsoft Surface Laptop Go 3: ‘Why Does This Exist?':
A $999 laptop that maxes out at 256 GB of storage and has a 1536 × 1024 display — yeah, I’m wondering why this exists in 2023, too. And I’m no longer wondering why Panos Panay left Microsoft for Amazon.
The $999 MacBook Air has 256GB of storage, 8GB of RAM, and a three-year-old processor. I’m kind of wondering why that exists in 2023, too.
Not to say that the Surface Laptop Go 3 is any good – it isn’t – but Microsoft isn’t the only company that has some bizarre pricing at the “low” end of its laptop range.
What a 36-year-old video can tell us about the future of publishing
The future is arriving a little later than expected...
I have had the best life. Back in 1989, I left polytechnic with my first class honours degree in humanities (philosophy and astronomy) and walked into the kind of job which graduates back in the 80s just didn't get: a year-long internship with Apple Computer UK, working in the Information Systems and Technology team – the mighty IS&T.
It paid a lot better than my friends were getting working in record shops. And although it was only temporary – I was heading back into higher education to do a PhD in philosophy, working on AI – it suited me. Without it, I wouldn't have had my later career in technology journalism. The ability to take apart pretty much any Mac you cared to name became very useful later on.
Apple treated new interns the same as every other new employee, which meant that there was an off-site induction for a couple of days when we were told about the past, present, and future of Apple. The only part of the induction that I remember is the future because that was when I first saw the Knowledge Navigator video.
If you haven't seen Knowledge Navigator, you should watch it now.
Why is a 36-year-old concept video relevant now, and what does it have to do with publishing? The vision of how humans and computers interact which Knowledge Navigator puts forward is finally on the cusp of coming true. And that has profound implications for how we find information, which in turn affects publishers.
There are three elements of the way Knowledge Navigator works which, I think, are most interesting: conversational interaction; querying information, not directing to pages; and the AI as proactive assistant. I'm going to look at the first one: interaction as conversation, and how close we are to it.
Interaction as conversation

The interaction model in Knowledge Navigator is conversational. Our lecturer talks to the AI as if it were a real person, and the interaction between them is two-way.
Lecturer: “Let me see the lecture notes from last semester… Mhmm… no, that's not enough. I need to review the more recent literature. Pull up all the new articles I haven't read.”
Knowledge Navigator: "Journal articles only?”
Lecturer: "uhh… fine.”
Note one big difference with the current state of the art in large language models: Knowledge Navigator is proactive, while our current models are largely reactive. Bing Chat responds to questions, but it doesn't ask me to clarify my queries if it isn't certain about what I'm asking for… yet.
That aside, the way conversation happens between our lecturer and his intelligent agent is remarkably similar to what you can do with Bing Chat or Bard now. The “lecture notes from last semester” is a query about local data, which both Microsoft and Google are focused on for their business software, Microsoft 365 and Google Workspace. The external search for journal articles is the equivalent of interrogating Bing or Bard about a topic.
In fact, Bing already does a pretty good job here. I formed a similar question to our lecturer's about deforestation in the Amazon, to see how it would do:

Not bad, eh?
The publishing model of information – the one which makes publishers all their money – is largely not interactive. The interaction comes at Google's end, not the publisher's. Our current model looks like this:
A person interacts with Google, making a query.
They click through to a result on the page which (hopefully) gives them an answer
If they want to refine their query, they go back to Google and repeat the process – potentially going to another page
Interaction as conversation changes this dynamic completely, as an “intelligent” search engine gives the person the answer and then allows them to refine and converse about that query immediately – without going to another page.
Have a look at this conversation with Bard, where I am asking for a recommendation for a 14in laptop:

OK, that sounds good. Now let's drill down a little more. I want one which is light and has a good battery life:

That ZenBook sounds good: so who is offering a good deal?

By contrast, a standard article of the kind which publishers have been pumping out to capitalise on affiliate revenue (keyword: “best 14in laptop”) is a much worse experience for users.
And at the end of that conversation with Bard, I'm going to go direct to one of those retailers, with no publisher involvement required.
If that isn't making you worry about your affiliate revenue, it should be.
The model of finding information which search uses, based on queries and a list of suggested results, is pretty well-embedded in the way people use the internet. That's particularly true for those who grew up with the web, aged between 30 and 60. It may take time for this group to move away from wanting pages to wanting AI-driven conversations which lead to answers. But sooner or later, they will move. And younger demographics will move faster.
That, of course, assumes that Google will leave the choice to users. Google may instead decide it wants to have more time with “its” users and put more AI-derived answers directly at the top of searches, in the same way that Microsoft has with Bing. Do a keyword search on Bing, and you are already getting a prompt to have a conversation with an AI at the top of your results:

Once again, the best option for publishers is to begin the switch from a content strategy which relies on Google search and focuses on the kinds of keywords which are susceptible to replacement by AI (focused on answers) to content strategies which build direct audience and a long-term brand relationship.
Treat search traffic as a cash cow, to be milked for as long as possible before it eventually collapses. In the world of the Knowledge Navigator, there's not going to be much room for simple web pages built around a single answer.
On Steven Sinofsky's post on regulating AI
Regulating AI by Executive Order is the Real AI Risk:
The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture at a time when the world would be best served by optimism and innovation
Sinofsky’s response is fairly typical of the AI boosters, and as always, it fails to understand the point of regulation. And in particular it fails to understand why an executive order is entirely the correct approach at this point.
Regulation exists so that we gain the benefits of something while ameliorating the risks. To use an area that probably makes sense to Americans, we regulate guns, so we get the benefits of them (personal protection, national security) while avoiding the dangers (everyone having a gun tends to lead to lots and lots of gun deaths).
AI is the same: we should regulate AI to ameliorate the dangers of it. Now, those dangers aren’t Terminators stomping around the world with machine guns. They are, instead, things like racial discrimination because of an intrinsic bias of algorithms. It’s looking at the implications for privacy of generative AI being able to perfectly impersonate a person. It’s the legal questions of accountability – if an AI makes a major error which leads to death, for example, who exactly is responsible?
But hey, I guess tech ethics is the enemy, right?
So why an EO? In part, I think the AI boosters only have themselves to blame. You can’t go around saying that AI is the most transformative technology since the invention of the PC, stoking the stock markets by claiming the impact will all be in the next couple of years, and then act surprised when a government uses the tools it has to act expeditiously. Silicon Valley types constantly laugh at the slowness of the Federal government. Complaining when it does something quickly seems a bit rich. “Move fast and break stuff” sure – but not when it’s their gigantic wealth that might be the thing that gets broken.
Sinofsky also highlights the nay-sayers of the past, including posting some pictures of books which drew attention to the dangers of computers. The problem is some of those books are turning out to be correct: David Burnham’s The Rise of the Computer State looks pretty prescient in a world of ubiquitous surveillance where governments are encouraging police forces to make more use of facial recognition software, even though it discriminates against minorities because it finds it hard to recognise black faces. Arthur R. Miller may have been on to something, too, when he titled his book The Assault on Privacy.
Sinofsky gets to the heart of what ails him in a single paragraph:
Section I of the EO says it all right up front. This is not a document about innovation. It is about stifling innovation. It is not about fostering competition or free markets but about controlling them a priori. It is not about regulating known problems but preventing problems that don’t yet exist from existing.
To which I would respond: “great! It’s about time!”
There is a myth in Silicon Valley that innovation is somehow an unalloyed good which must always be protected and should never be regulated, lest we stop some world-shaking discovery. It doesn’t take 20 seconds of thinking – or even any understanding of history – to see that’s not true. Yes, experimentation is how we learn, how we discover new things which benefit us all. But there are no spheres of knowledge outside possibly the humanities where that is completely unregulated. If you want to do nuclear research, good luck with getting a permit to run your experimental reactor in the middle of a city. If you would like to do experimental chemistry, you’re going to be on the wrong side of the law if you do it in your garage.
All of those things “stifle innovation”. All of them are entirely justified. Given the world-changing hype – created by technology business people – around AI, they really should get used to a little stifling too.
As for the idea that this is “preventing problems that don’t exist from existing”… that is precisely what we pay our taxes to do. We spend billions on defence to prevent the problem of someone dropping big bombs on our cities. We pay for education, so we won’t have the problem of a stupid population which votes in a charlatan in the future (why do you think the far right hates education?)
Good business leaders talk all the time about how proactive action prevents costly issues in the future. They scan horizons, and act decisively and early to make sure their businesses survive. The idea that the government should only react, especially when that’s usually too late, is just bizarre.
At one point, Sinofsky sings the praises of science fiction:
The best, enduring, and most thoughtful writers who most eloquently expressed the fragility and risks of technology also saw technology as the answer to forward progress. They did not seek to pre-regulate the problems but to innovate our way out of problems. In all cases, we would not have gotten to the problems on display without the optimism of innovation. There would be no problem with an onboard computer if the ship had already not traveled the far reaches of the universe.
It’s a mark of the Silicon Valley mind-set that he appears to forget that this was all made-up stuff. 2001 wasn’t real. Star Trek was not real.
Sinofsky then spends some time arguing that the government isn’t “compelled” to act, as AI is actually not moving that quickly:
No matter how fast you believe AI is advancing, it is not advancing at the exponential rates we saw in microprocessors as we all know today as Moore’s Law or the growth of data storage that made database technology possible, or the number of connected nodes on the internet starting in 1994 due to the WWW and browser.
All well and good, but not true: a Stanford study from 2019 found that AI computational power was advancing faster than processor development, and that was before the massive boost from the current AI frenzy. Intel has noted the speed at which AI programs can “train” themselves doubles every four months, compared to the 24 months that Moore’s Law predicted for processor speed.
Towards the end, of course, Sinofsky lapses into Andreessen-style gibberish:
The Order is about restricting the “We” to the government and constraining the “We” that is the people. Let that sink in.
Making “the people” synonymous with “extremely rich billionaires and their companies” is, of course, one of the tricks that the rich play again and again and again. AI is being created to enrich the already rich. It requires resources in computing power, which means my only option for accessing it is to rent time on someone else’s computer. It reinforces technofeudalism. Of course, Silicon Valley, which wants to make sure all of us pay a tithe to them, loves it.
It’s time that we have some assertion of democratic control over the forces that shape our lives. The Silicon Valley fat cats don’t like it. That, on its own, tells me that regulating AI is probably a good thing.
Weeknote, Sunday 5th November
Time passes. The highlight of this week was at the Royal Festival Hall on Monday, when we drove into London (more on that in a moment) to see Brian Eno and the Baltic Sea Philharmonic perform Eno’s album The Ship along with a handful of other songs.
It was of course brilliant, and incredibly moving. One of the additional songs that they performed was Bone Bomb from 2005’s Another Day on Earth, which is a song rooted in the testimonies of two people: a teenage Palestinian girl suicide bomber, and an Israeli doctor who talked about how he had learned to pull fragments of the bones of suicide bombers from the bodies of their victims. It was incredibly affecting: I cried.
Eno is one of the artistic anchor points of my life. I first ran into his work in the early 80s, when in my teens I bought a second-hand copy of Another Green World and instantly knew that I wanted to be able to make art like that. I never quite succeeded in that aim – whatever my writing is, it’s not like Eno’s! But he’s always been an inspiration in the way that he has had the fearlessness to do what he wanted to do without worrying too much about either immediate ability or the artificial boundaries which people set between the different creative domains.
Once a year, I reread his A Year with Swollen Appendices, which I think should be on the required reading list for any course in any creative field. No matter what you do creatively, you will get something out of it. You might not like Eno more at the end of it, but I think that’s actually to Eno’s credit.
As I mentioned, we drove rather than getting the train. Environmentally of course that is a poor decision. But it’s also literally half the price of travelling when there are two people, even including the parking and ULEZ charge. We drive to Woolwich and get the Elizabeth line from there; the fuel costs about £10, compared to about £80 for two people to get the train. I remember reading that the line to Canterbury is, per mile, the most expensive passenger railway in the world, and I can believe it.
However, this journey turned into a rather more expensive one, because on the way back we blew a tyre on the M2 and had to call recovery to pick us and the car up. Unfortunately, our breakdown cover had also run out, which meant that the total cost of recovery was just north of £200. Plus, of course, the tyre needed replacing (another £80). The God of Nature got their revenge.
A couple of weeks ago I started a Substack. I wanted to create a series of posts which look at how technology is impacting on publishing, and I have started with a focus on how the main sources of traffic for publishers – Google, Facebook – are going to fade in importance over the next few years as they begin to keep more people on their own pages and produce more immediate answers to queries using large language models (LLMs, which I am trying not to call AI – because while they are definitely artificial, they are not intelligent).
I think the audience development community of which I was (am?) a part is, at least publicly, in a bit of denial about this. The reaction to the article on The Verge about SEO’s impact on the web was a good demonstration of this: a lot of SEO people were very defensive about it, which is never a good look (if you’re confident about your work, you don’t get prickly when someone doesn’t understand what you do). I think I’m going to write something about that this week.
Some people have asked about Substack and why I’m using it. The answer is mostly that it’s much easier to get an audience there than it is to create a standalone newsletter. Substack does part of the work of promoting it for you, and it does work. That said, I also understand that some people have an ethical (and practical) objection to using a platform like Substack, so I’m going to create an alternative way of signing up this week somewhere else (probably Buttondown). It means more work for me of course, but that’s fine: and it also gives me a backup for when Substack inevitably starts to enshittify (which will be the moment you’re no longer able to export your subscriber list to move to another service).
Three things that caught my attention this week
- I feel like I end up recommending whatever Cory has written every week, but this week’s article on big tech’s “attention rents” really did knock it out of the park.
- The Guardian’s interview with Naomi Alderman was also brilliant. But that’s because Naomi is brilliant. We have only met once, but I have absolutely admired her ever since. Amongst the many clever and warm-hearted people I know, she’s pretty much top of the list.
- It’s been interesting to see how little reaction there has been to Sam Bankman-Fried’s inevitable guilty verdict from the Silicon Valley rich dude posse, but it makes sense: they want to portray him as simply a fraudster who got caught. The trouble is, he’s one of their creations who got caught.
Importing Apple Notes into Obsidian is now easy
Apple Notes doesn’t have an export option. Instead, as Obsidian’s blog post on the Importer plugin update explains, it stores your notes in a local SQLite database. The format isn’t documented, but the developers of the plugin were able to reverse-engineer it to allow users to move notes and their attachments out of Notes and into two folders: one with Markdown versions of your notes and the other with the files attached to your notes. The folder with your notes includes subfolders that match any folders you set up in Notes, too.
This is just outstanding work from the Obsidian team. There are a couple of limitations, mostly that it can’t import password-protected notes (obviously), but I’ve tested it and it worked well.
Related: undocumented SQLite databases should not be the way that a multi-gazillion dollar corporation is storing valuable data.
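If you’re curious just how undocumented it is, you can peek at the database yourself. Here’s a minimal sketch in Python: the path below is where recent versions of macOS appear to keep the Notes store (your terminal will need Full Disk Access to read it), and since none of this is documented, both the location and the table names could change with any macOS update.

```python
import sqlite3
from pathlib import Path

# Where recent versions of macOS appear to keep the Apple Notes database.
# This is undocumented and may move in a future release.
notes_db = Path.home() / "Library/Group Containers/group.com.apple.notes/NoteStore.sqlite"

# Open read-only so we can't accidentally touch the live database.
conn = sqlite3.connect(f"file:{notes_db}?mode=ro", uri=True)

# List the internal tables. The Core Data-style names (ZICNOTEDATA and
# friends, at least on my machine) are what the plugin had to make sense of.
for (name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
):
    print(name)

conn.close()
```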
Who would have thought Amazon would behave like this?
Amazon deliberately deleted messages to hide dodgy business practices:
The FTC also alleges that Amazon tried to impede its investigation into the company’s business practices. “Amazon executives systematically and intentionally deleted internal communications using the ‘disappearing message’ feature of the Signal messaging app. Amazon prejudicially destroyed more than two years’ worth of such communications—from June 2019 to at least early 2022—despite Plaintiffs’ instructing Amazon not to do so.”
And the answer to the headline is, of course, “anyone who’s been paying attention”.
AI content: Publishers' next burning platform moment
LLMs remove a key competitive advantage of publishers. You need to find a new one.
It still surprises me that I’m old enough to have been part of the transition from print publishing to digital, but what surprises me more is that publishers are again making some of the same mistakes they made in that early internet era. But this time, it’s about the use of large language models to generate content, and it’s even being made by digital natives.
A little bit of history is probably useful here. Back in the mid to late 1990s, many publishers saw online content in terms of its ability to reduce their costs. Paper, printing and distribution of physical magazines were expensive. Publishing content online, though, was basically free. This, the theory went, would allow publishers to cut those costs and make more money.
What most publishers didn’t understand was that the high costs of production associated with print were their main advantage, because they acted as a barrier to entry for new competitors. Starting a magazine was hard: you not only needed enough capital to print and distribute the thing, you needed access to news-stand distribution, which in the UK meant working with big distributors who had to be persuaded to stock you. You needed a sales team to sell enough advertising to support it, and they needed contact books thick enough to get their feet in the doors. Magazine publishing was expensive, and only large publishers were able to do it at scale.
By the early 2000s, though, anyone could publish online, and all those competitive advantages disappeared within a couple of years. You could publish easily using platforms like Blogger, WordPress, or even Myspace. You could get ad revenue from systems like Google Ads without a sales team of any sort. Not only that, but you could get your content seen via Google search and social platforms.
It took publishers a long time to realise that the old barriers to entry no longer protected them. Some publishers still act like they think they do, and so appear consistently dazzled when a new platform comes along and makes individuals who take advantage of it into millionaires. TikTok is the latest, but it’s by no means the first. Online was a burning platform moment for publishers, and some of them took far too long to see it.
The next burning platform
The ability of large language models (LLMs) like ChatGPT to create content is, of course, being seized on by publishers who see it as a method of creating editorial content without having to pay anyone to do it – or, at least, by paying fewer people to do it (and probably cheaper ones too – that was another outcome of the move from print to digital). If you’re a publisher reading that and shaking your head, thinking “well, that’s not what we’re doing”, I am going to give you a small monkey side-eye, because we all know that if you’re not thinking that way, your CFO probably is:

There’s nothing wrong with using new technology to reduce costs, as long as you retain your competitive advantage. And here’s where things get difficult for publishers, because what LLMs do is similar to what happened with web publishing in the 1990s: they remove publishers’ competitive advantage in the creation of content, just as the web removed their advantage in publishing and distributing it. It is the next step in the democratisation of publishing.
In the early internet publishing era, anyone could create any content and put it online, but to be successful they needed the expertise to write the content in the first place. That’s why niches like technology publishing were impacted early and heavily: there was plenty of expertise out there, and suddenly those experts could create content directly, without an intermediary.
Now, thanks to LLMs, anyone in the proverbial bedroom can create swathes of “good enough” content on any topic they want. They can churn out hundreds of average pieces about anything, just by taking a list of the most popular search queries in that topic as their starting point. They’re not flawless, but they’re good enough, particularly to answer the kinds of search queries which publishers have used to generate traffic at scale.
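To make that concrete, here is a back-of-the-envelope sketch of the kind of pipeline I mean, using the OpenAI Python client. The query list, prompt and model name are all illustrative – the point is how little effort this takes, not that you should do it:

```python
# A sketch of the "bedroom content farm": feed popular search queries to an
# LLM and collect "good enough" articles. Assumes the openai package is
# installed and an API key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

popular_queries = [
    "how to descale a kettle",
    "best budget noise cancelling headphones",
    "is it cheaper to drive or take the train",
]

for query in popular_queries:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any general-purpose chat model will do
        messages=[
            {
                "role": "user",
                "content": f"Write a 600-word article answering: {query}",
            }
        ],
    )
    article = response.choices[0].message.content
    # In a real content farm this would be pushed straight into a CMS.
    print(f"--- {query} ---\n{article}\n")
```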
This is why, for publishers, AI content creation is another burning platform moment. Combine it with the move towards providing more answers directly on search pages, and you have a one-two punch to publisher traffic which Mike Tyson would be proud of.
Of course, publishers can use LLMs too. But, as with early internet publishing, their size means they can neither move fast enough nor operate with low enough fixed costs to make it work. If a proverbial 16-year-old can create an article with ChatGPT on “10 things you didn’t know about Mila Kunis” at the same speed as a celebrity magazine, and at the same quality, the magazine loses even if it has used technology to eliminate roles and cut its costs. Because, unlike our 16-year-old, it has big fixed costs: offices, equipment, pensions, you name it. And it has margins to protect, because the stock market expects to see revenue growth every year.
Regaining competitive advantage
So what can publishers do to retain their competitive advantage? There really is no point in trying to pretend that the AI genie doesn’t exist, in the same way that publishers couldn’t pretend in the 90s that people would just carry on buying huge volumes of print.
Nor will legal claims aimed at the likes of OpenAI, Google and Microsoft succeed. Yes, your content has been scraped to create the language models in the first place. But given the result in Authors Guild v. Google, I expect courts to hold that this kind of use is transformative, and therefore fair use. Either way, it will be tied up in the legal system for far too long to make a difference.
Some have suggested that the way forward will be private large language models built solely using the corpus of text publishers hold. There are a few issues with this, but the biggest one is simply that the horse has bolted. OpenAI, Google and others have already trained their models on everything you have published online to date. They probably even have access to content which you no longer have. How many redirects of old, outdated content do you have in place where the original no longer exists? How many of your articles now only exist in the Wayback Machine?
Instead, the only option for publishers is to focus on creating content of a higher quality than any current LLM. You cannot gain competitive advantage at the cheap, low-cost end of the market. Trying to do so will not only make you vulnerable to anyone else with the same tools (at $20 a month) but also devalue your brand over the long term.
Creating higher quality content means employing people, which is why that urge to use LLMs to replace your editorial teams will actually undermine the ability of publishers to survive. Putting that cost saving towards your bottom line today is a guarantee that you will be out-competed and lose revenue in the future.
So what can you do with LLMs? The most important thing is that they can be used as a tool to amplify the creativity and ability of editorial teams – what Steve Jobs used to call “a bicycle for the mind”. An LLM can give you a starting point, suggest an outline on any topic, or rewrite a headline 100 times using the word “crow”, and it never gets tired of doing so.
If you’re a publisher, you probably still have decades’ worth of experience, context, contacts and knowledge of your audiences in your editorial teams. Train them on how to use LLMs to amplify their creativity (and if you want some help with that, email me!).
You’re going to have to change your content strategy to adapt to the new world of falling Google traffic anyway. LLMs should be seen as a chance to exit the market for low-quality, high-volume content.
Weeknote, Sunday 29th October 2023
An abbreviated weeknote this time, as I've not long got back from Orford Ness.
Orford Ness is a strange and interesting place. Used by the air force as a bombing range and a research centre, it has been partly rewilded, but with the unmistakable detritus of military and industrial use. In some ways it reminded me of parts of the industrial edges of Derby combined with the brutal flat farmland of southern Derbyshire. But with shingle. Lots and lots of shingle. It's stark and beautiful, and I recommend a visit.
I started scribbling down a story based in part there last night, which I want to outline further. I'm going to crack into writing a short novel in November and see where I get with it: I would like to get a draft completed, although as I don't yet have a plot outline I'm a bit behind already. That will be this week's work.
Also on the agenda for this week is the second weekly post on my Substack, which focuses on the intersection between technology and the publishing business. Last week I posted about the impact of AI-driven changes in search on the ability of publishers to get traffic, the short version of which is “oh bugger”. There's no doubt in my mind that Google and Facebook really are intent on answering more queries without sending traffic to anyone else. That raises some huge problems, but there are ways out.
This week is all about using AI to create content, and the threat that poses to publishers. “Threat,” you're saying, “isn't it an opportunity?” Well, no – and tomorrow I'll explain why.
The three things which most caught my attention
- How tiny Qatar hosts the leaders of Hamas. In among the entirely correct condemnation of Hamas, what's being ignored is the role of “friendly” countries in “hosting” the Hamas leadership. Qatar, a country which has a track record of human rights abuses as long as your arm, gets rewarded with hosting World Cups and much more while it materially supports terrorism. Why does the West ignore this? In Britain's case, perhaps because of the £40bn or so of “investment” the country makes – which mostly means buying and inflating property prices, benefiting our Tory masters.
- Many people jumped on the story that Spotify made higher-than-expected profits, citing the top-line figure of around €1bn in earnings. What they didn't cite was the actual profit: just €32m. Bearing in mind that this came after a quarter of crackdowns on password sharing, big increases in subscribers, and price rises, it's hard to see how Spotify will ever be a seriously profitable business.
- All the Whole Earth Catalog is now available online. Nostalgia in a bucket load.

DOJ probing Tesla’s EV range cheating
DOJ probing Tesla’s EV range after reports of exaggerated numbers - The Verge:
The US Department of Justice (DOJ) is investigating the range of Tesla’s electric vehicles after reports surfaced that the company was relying on exaggerated numbers.
In documents filed with the Securities and Exchange Commission, Tesla said that it had “received requests for information, including subpoenas from the DOJ, regarding certain matters associated with personal benefits, related parties, vehicle range and personnel decisions.”
This follows on from a Reuters report earlier this year, which found Tesla was getting so many complaints about range that it was cancelling service centre appointments for customers with the problem:
According to Reuters, there was nothing actually wrong with the vehicle’s battery. Rather, Tesla had allegedly created software to rig its driving range estimates to show a rosier picture. This led to thousands of customers seeking service appointments to figure out what was wrong with their vehicles. But because the vehicle was working as intended, Tesla’s diversion team simply canceled all the appointments.
So Tesla created software which gave a falsely rosy reading of battery range, and when people noticed, it simply cancelled their service appointments.
It’s worth noting that when VW was caught cheating on its emissions tests by using a device to check when it was being tested and artificially improving results, it ended up being fined tens of billions of dollars.
This isn’t on quite that scale, but regulators tend to take a very dim view of cheating customers. It’s quite possible this will cost Tesla billions.
Anyone willing to bet that it will turn out this was done at Elon Musk’s insistence? And will that be the final nail in the coffin of his reputation?
SEO will be over for publishers. You need to adapt.
Position one for a query is no longer close to enough
I don't know of a single person in publishing who doesn't believe that large language models (LLMs) are going to have a profound impact on the industry. But most of the attention has been on using them to create content, something which many publishers see as a way of increasing efficiency (by which they usually mean reducing expensive headcount).
Whether that is actually possible or desirable is a topic for another time, but what I want to focus on is the other side of AI: what its adoption by Google is going to do to the traffic to publisher sites, and how we should be changing our content strategies to respond.
Google's large language models
It's worth starting by being clear about how Google is using LLMs. The company has two products which use large language models to deliver results for users. The first, and probably the best known, is Bard, which is similar to ChatGPT in that it uses a conversational interface: users ask questions or give prompts in natural language, and the programme responds.
The second – and the one which, I think, should be most concerning to publishers – is Search Generative Experience (SGE). SGE is currently in the experimental stage, but will ultimately deliver answers directly into the top of Google, generated by its large language model.

As you can see from the example, SGE takes up a lot of real estate in the query result, and delivers a complete answer based on what Google “knows”. Although it gives citations, there is no need to click on them if all you want is the answer to a query.
How this affects publishers
Obviously, anything which sits at the top of search results is going to impact on the amount of traffic which clicks through to publisher sites underneath. And this is potentially worse than anything we have seen before: if the answer to the query is given on Google's page, why would anyone bother to scroll down and click through?
This means the much-fought-over positions one to three will be much less effective than ever before, and there will be a big decline in publisher traffic.
The impact on different kinds of content
It is likely that some kinds of content will be impacted more than others. Answers to questions are an obvious example: in 2017 they accounted for 8% of searches, a share that has probably grown already and will grow still further as users get used to being able to ask machines questions and get good-quality, tailored answers.
But in its article on SGE, Google highlights a second area where publishers are likely to see a major impact: shopping. Many publishers have put significant effort into creating content focused on affiliate revenue, with some seeing affiliate overtaking advertising as a source of revenue. Affiliate content is almost always designed to capture traffic via search, for the simple reason that buying products usually starts with a Google search. An SGE-driven shopping search experience will ultimately bypass publishers and drive traffic direct to the retailer, with the AI making individually tailored recommendations on what to buy.
This threatens to be disastrous for publishers. Effectively, SGE delivers a one-two punch of reduced traffic as more search queries are answered on the results page, plus reduced traffic to and revenue from affiliate pages.
What publishers should do
SGE is currently in the experimental stage, which means publishers shouldn't see any significant impact for now. But there is a clear direction here: more answers to search queries will be delivered without any click-through to publishers. And product shopping queries are going to become something which Google channels to retailers (who, by complete coincidence, are also advertisers) rather than publishers (who, by and large, are not).
I estimate that publishers have a window of between three and five years to change content strategies to adapt to this new world, depending on the speed of user adoption. It could be faster: much will depend on how quickly Google's LLM work starts to move from an experiment to delivering real results.
The long-term answer for publishers is to reduce exposure to Google as a source of traffic. That's going to be tough: almost every site I have worked on relied on Google for between 60% and 90% of its traffic. And the more the site was focused on affiliate revenue and e-commerce, the higher that percentage was.
The answer is to focus on increasing your level of direct traffic, making your site a destination for content rather than something users hit once and bounce away from. Learn lessons from marketing: treat every piece of content you create as an opportunity to deepen your relationship with your audience.
There are five things I would recommend publishers start doing today:
- Refocus your KPIs and OKRs to be about deepening relationships, not just traffic. Focus on repeat visits and sign-ups. Look to increase the number of qualified email addresses you have (and whatever you do, don't succumb to the temptation to capture more data – if you deliver value, you will capture more over time, but all you need now is a person's email address).
- Reevaluate your search strategy and focus on topics with complexity. The more complex the content and the higher its quality, the less likely it is that an LLM can deliver a good version of it. Expertise and depth will be essential, and complex topic areas might be the “last person standing” when it comes to Google searches which work for publishers.
- If you have three-to-five-year revenue forecasts, ramp affiliate revenue down over time rather than predicting growth. The era of affiliate revenue as a major contributor will be over. Use the revenue you are getting from it now to bootstrap other areas.
- Invest heavily in newsletters. And whatever you do, don't treat them as a place for advertising. Nothing creeps users out more than thinking they are signing up for interesting content only to find it chock-full of ads or sponsored content.
- Don't think that AI-generated content is going to “save” you. Many publishers are looking at content created by LLMs as a way of lowering costs. It will. But it will also put you out of business. Remember that any content you can create with an LLM can be done better by Google at the top of its results pages. What publishers have in their favour is human talent, creativity and expertise. The more you lose that by trying to use LLMs to cut costs, the smaller your competitive advantage becomes.
Next week I will return to that last topic, and look at the mirage of LLM content and why it's a death-trap for publishers.