
    Sky falls in, Walt Mossberg may be wrong

    Over on Threads, Walt Mossberg has commented on the Apple/DOJ case. First up, if you do not respect Walt's opinions, you're a fool. Walt is one of my tech journalism heroes. That said, I think he's missing a couple of points here.

    Walt is correct that the vertically integrated model has been Apple's since the start. But what is permissible when you're a small company or in a nascent market is no longer permissible when you are in a position of market power. And no one doubts that Apple is in a position of significant market power, not least Apple itself.

    Second, like most people, Walt is being tripped up by the word "monopoly". The DOJ definition makes absolutely no mention of a percentage: it talks only of "market power". That's why the DOJ's filing is careful to refer to Apple having both market power AND significant market share.

    Back to market power. Does Apple have it? Well, Apple has certainly -- publicly -- said so. Remember Apple very happily talking about the $1.1 trillion in developer billings and sales made through the App Store in 2022? As Apple owns that App Store, it's almost a dictionary definition of market power being exerted.

    And this is another thing which people will get tripped up over: market power doesn't just mean power over the market where you sell goods: it's also about your control of markets adjacent to that.

    For example: Microsoft had no market power in PC manufacturing. It didn't make PCs. But it did have huge market power in operating system software, which it could (and did) leverage to shape the PC market in its favour. OEMs didn't have to sign the contract which MS put in front of them, in theory. In practice, they did, even though MS wasn't a part of their market, because of the power Microsoft had in operating systems.

    This was one of the novel aspects of the Microsoft/DOJ case which, outside academic circles, didn't get noted: it was the first major case to focus on what the network effects of monopoly power in one market could mean for other markets. Although the Clayton Act made tying one product to another illegal, that generally applied within the same market, not across adjacent ones.

    If you are keen to know more about how the Microsoft/DOJ case redefined some aspects of antitrust, this paper from 2010 is a good read.

    Walt notes that he doesn't think there's a right for other companies to use Apple's IP, which is proprietary to them. And he's right, up to now. But in a situation where monopoly power exists, the game changes. Remember that IBM's 1956 consent decree forced it to not only publish technical data freely but also license its patents under fair terms (including some which had to be royalty-free).

    This is another case of "what was legal becoming illegal". IBM developed those patents: its monopoly power meant that it was prevented from using them in ways which would otherwise have been legal. And, of course, the IBM 1956 consent decree basically created the PC industry as we know it.

    All of this is going to trip many people up. How can a company have market power when it doesn't have an 80%+ share of a market? Why are we talking about third-party developers when Apple doesn't compete (much) against them? Isn't Apple just trying to protect their customers?

    I would urge everyone to read The Verge's coverage, because they really know their stuff when it comes to legals. There are going to be a lot of opinions (including from me, sorry). But it will be fun: writing tech news while Microsoft/DOJ and Microsoft/European Commission were on was some of the best fun I had as a reporter. Settle in for a long ride.

    One final point: currently, we only have a filing. This is not the point where the DOJ starts talking about/showing its evidence. Does the DOJ have a case? Until we (or rather, the judge) see the evidence, anyone who says they absolutely know is wrong.

    Disclosure of the evidence is where things REALLY get interesting. And it's also where things get dangerous for Apple because it has carefully cultivated a brand which focuses on being pro-consumer. Documents which show collusion, patterns of bad behaviour, and more could damage that brand. Get the popcorn ready.

    Ten blue links for 3rd Feb, 2024

    I mentioned a while ago on Mastodon that I had such a backlog of stuff I had saved to read and could potentially write about that I was going to have to steal Cory's approach and do a weekly linkblog post. That idea got put on the back burner for a couple of weeks as I had both a feature (forthcoming in PC Pro magazine) and a short training session (end of February, details to follow once it's advertised).

    And of course, I needed a concept -- what we call in journalism a franchise. Putting in the work of creating something weekly is a lot easier if you can force it into some kind of theme. But I have been scratching my head trying to think of something.

    Of course, as soon as you think of something, it's obvious: hence Ten Blue Links. Like the old-school Google we knew and loved (and which was useful), I'm going to create a page every Saturday which just lists ten things which have amused/entertained/informed me, and that I think are worth your time reading. There's no topic theme -- I read a lot, so that wouldn't make sense -- although every now and then, if something big has happened, I might theme an edition around it.

    Some words about my tools and process

    I'm far too online, and I hop about between tools far too often. But there are two online services which have stuck with me for quite a while now: Raindrop, and Readwise Reader.

    Raindrop is a bookmarking service, like Pinboard but a lot better. I use it to dump in links which I know will be useful to me in the future, but which aren't in-depth reading. How-tos, tips, that kind of thing, all of which I categorise and tag so they form even more useful collections.

    Readwise Reader, on the other hand, is a read-it-later service like Pocket -- but it's the Olympic gold medal-winning version, the Pelé, Maradona and Messi of reading apps combined into one. It's perfectly happy ingesting feeds, emails or PDFs as well as simply saving articles, and it integrates with Readwise (of course), which I use to funnel all kinds of stuff into my Obsidian notes. It costs money, but it's a service that is well worth it. I would imagine that most of what I write about in Ten Blue Links is going to come from Readwise.

    This one has ended up long, but I promise I'll make it shorter next time...

    The ten blue links for this week

    1. Apple's culture shaped its DMA response

    I wrote at length about my feelings over Apple's response to the EU DMA -- childish is the kindest way of putting it -- but I really enjoyed Manton Reece's short post about it. Manton's focus is Apple's culture, how that has been shaped, and how that has really influenced their response:

    Because of their decades of truly great products, Apple thinks they are more clever than anyone else. Because of their focus on privacy, Apple thinks they are righteous. Because of their financial success, Apple thinks they are more powerful than governments. The DMA will test whether they’re right.

    2. Return to office = failure of management

    Apple, of course, is one of the companies that has mandated its workers return to the office -- and they are not alone. But new research has found that RTO mandates not only fail to improve productivity and damage worker engagement, they actually stem from a simple need for control among managers. Simply put: bad leadership:

    "Results of our determinant analyses are consistent with managers using RTO mandates to reassert control over employees and blame employees as a scapegoat for bad performance".

    3. Everyone is a sellout now

    The creative industries are having a bad decade. Journalism, in particular, is in a horrible place, with jobs lost left, right and centre. Rebecca Jennings wrote a great article about how everyone now has to be a pitch person: if you're a writer, for example, you're now expected to be able to market your work as well -- and you won't get hired if you can't. And of course, this has a direct, and negative, impact on your actual work:

    “Next thing you know, it’s been three years and you’ve spent almost no time on your art,” he tells me. “You’re getting worse at it, but you’re becoming a great marketer for a product which is less and less good.”

    4. Fertile fallacies

    Sam Freedman's article on fertile fallacies and policy bubbles was specifically about politics, but I think it's equally applicable to many areas of life. Sam's point is that sometimes bad ideas work at first, up until the point where they don't. This is because they often have a kernel of truth about them, or are a reaction to something which has pushed too far.

    A policy belief that initially began with an important truth – governments need to have control over state spending and some process to maintain it – has ended up distorted into an absurd farce whereby Treasury officials are frantically changing their policy proposals for the Chancellor based on daily fluctuations in projected borrowing for 2029.

    But you can apply this idea everywhere. Consider tech: app stores were a reaction to the absolute hell that was mobile apps in the pre-iPhone era, coupled with the opportunity to make something a little more secure for users than the PC. This has inflated to the point where you'll find people who genuinely believe that no one should have the right to install software on a device they own, and that developers owe a tithe to whoever made the platform they're using.

    5. The evolution of the Conservative mind

    There is a connection here, I think, with Simon Wren-Lewis' piece on the evolution of the British Conservative party from neoliberalism as economic doctrine to social conservatism which solely acts in the interests of the wealthy. As with the policy fallacies that Freedman focuses on, the central doctrinal fallacy of neoliberalism has inflated into a bubble that goes well beyond its original intent.

    In the UK, the inflationary force in this bubble was Brexit:

    The key moment in this transformation in the UK was of course Brexit. Although it is just about possible to rationalise Brexit in neoliberal terms, if we think about power, Brexit was far from neoliberal. The overwhelming majority of businesses and corporations selling to and from the UK suffered serious damage at the hands of newspaper owners and a few very wealthy individuals. This kind of capture of a neoliberal party by monied interests is not really surprising, because once a politician sees themselves as representing the interests of corporations and businesses generally rather than society, it is a small step to start representing the interests of particular and potentially unrepresentative corporations and businesses (and their ‘think tanks’), especially if those businesses happen to be newspapers or party donors or future employers. Corruption inevitably follows.

    Of course, this doesn't explain the similar process that has happened in the US and across the world, but there were, no doubt, similar processes at work.

    6. AI Agents are the future of computing

    One piece that I have read and reread a few times now is Bill Gates' article on how AI agents are the future of computing. I wrote about this a while ago, too, focusing on Apple's 1987 Knowledge Navigator concept. Conversational interfaces change everything, and Gates thinks it effectively means the end of applications as we know them:

    In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.

    I think Bill is right, and he also raises a lot of challenges -- mainly that doing this properly requires the agent to know a lot of your personal information, and managing that in a way which preserves privacy is going to be tough.

    7. A long interview with Satya Nadella

    Another article that I have been rereading is the interview by Axel Springer CEO Mathias Döpfner with Satya Nadella. I find Nadella fascinating: equal parts MBA-bland and tough as old boots, and someone who has done that rarest of things: taken an established business and remade it. While the DNA of the old Microsoft is still there, he's turned the company into something quite different.

    There's a tonne of interesting stuff in there -- Obsidian tells me I have nearly 1500 words of quotes saved from it -- covering AI, China, and leadership. But I found this quote pretty interesting, on the relationship between AI and publishers:

    After all, with synthetic data training, I think that the incentive is that we create more synthetic data. And if you're training on synthetic data, where you don't have stable attribution to likeness, that becomes a hard thing. So, there is some technological disruption we will have to be mindful of. The fact is, that no publisher will allow you to crawl their content if there isn't a value exchange, and the value exchange has to come in two forms. One is traffic, and the other is revenue share.

    8. Technology as rent-seeking process

    This article by Wendy Liu from 2019 -- which feels a lot longer ago than just five years -- looks at the business models of technology service companies as effectively being tax collection, and boy, was she right. Every company you can think of, from Apple to Meta to Google and beyond, seems to be hitching its "growth" wagon to collecting tithes of one sort or another. It's no longer enough to simply make products and sell them: you have to have ongoing revenue from users to grow.

    What if we thought of some of the most lucrative tech companies as essentially tax collectors, but privately-run (and thus not democratically accountable)? Economists call this rent-seeking, and what we’re seeing with a lot of tech companies is that their telos is little more than “rent-seeking as a service”. It’s basically baked in to their business model. Once you’ve fully developed the technology underpinning your service - be it coordinating food delivery, or processing payments, or displaying intrusive ads to people who just want to read a goddamn page on the Internet without being entreated to buy stuff - then your whole schtick then becomes collecting taxes on a whole ecosystem of economic activity.

    9. Elon Musk continues lying

    Joan Westenberg -- who you must read every time -- notes that Elon Musk is, of course, a liar and yet gets a free pass on his lies every single time from credulous journalists. This time it's Neuralink, and its claim to have implanted some kind of brain chip into a human:

    Despite providing no evidence of this milestone, and without any 3rd party verification, the claim was quickly republished by major news outlets without scrutiny or confirmation. Journalists (or, more charitably, their editors) have once again eagerly provided publicity to Musk in the pursuit of advertising traffic to their sites, failing in their basic journalistic responsibility to fact-check. To question. To scrutinise. To ask for the truth.

    And journalists wonder why journalism is in trouble.

    10. More universal public services, please

    Jason Hickel is someone you should be reading, generally, and I could have linked to any one of about 10 articles of his I've read recently. But one place to start is his essay on how universal public services help to eliminate the artificial scarcity that capitalism -- and particularly rent seekers -- profit most from:

    By universal services here I mean not only healthcare and education, but also housing, transit, nutritious food, energy, water, and communications. In other words, a decommodification of the core social sector — the means of everyday survival. And I mean attractive, high-quality, democratically managed, properly universal services, not the purposefully shitty last-resort systems we see in the US and other neoliberal countries. What does this look like? How do we get there?

    I would add a few technology platforms to this list too… but that's another story.

    AI is about people, not just machines

    It would be a little remiss of me if I didn’t mention the launch this week of Microsoft’s consumer and small business AI play. Microsoft Copilot Pro integrates with Word, Outlook, PowerPoint, and OneNote, and offers suggestions, corrections, and insights based on the context and purpose of the document. It’s available now for anyone with a Microsoft 365 Personal or Family plan, at – by complete coincidence, I’m sure – almost the same monthly price as ChatGPT Plus.

    Microsoft's approach with Copilot Pro and other AI services is primarily aimed at enhancing practical business and personal productivity, rather than implementing radical changes in the ways people work. 

    For me, the real short-term win from Large Language Models lies in their ability to clear away yawn-inducing office tasks. Copilot, especially, is a superstar at this. It helps you tackle all the routine stuff, and leaves people to get on with the creative work. 

    The people factor

    AI tools like Copilot are not magic bullets that can solve all our problems, and they don’t magically do things on their own. And that highlights something that I suspect is getting neglected: long-term training and support for users in businesses.

    One thing I have noticed in the creative industries time and again: technology often gets side-lined in learning and development. Sure, publishing companies have massively improved when it comes to fostering skills in leadership, coaching and other business areas. Yet, when it comes to embracing and learning new tech, training tends to be pretty old-fashioned, rolled out in a “one and done” approach. Updates are relegated to the odd email (which no one reads). 

    In the old days, that worked, because the pace of change of technology was comparatively slow. A new version of QuarkXPress (yes, I am that old) would come out every couple of years; you would do an update session and that was it.

    But for cloud technologies this is not enough, and when there is a complete paradigm shift in tech – as we’re experiencing with AI – it risks putting you well behind more agile businesses.  

    According to a report by Oliver Wyman Forum, there is a significant gap between the skills employees believe they need training in, such as AI and big data, creative thinking, and leadership, and the training that employers are currently offering. 57% of employees think the training they are getting isn’t sufficient. And I think they’re right. 

    Of course, you can implement short-term fixes. But this is also a good opportunity to set up the way you train and the way your people learn for the long term. The next three to five years are going to see the pace of change accelerate, and you need to adapt the systems which allow your people to learn.

    Continuous structured learning

    Integrating AI tools into your team's workflow isn't a one-time event, but rather a journey of continuous learning. Begin by setting up a framework for ongoing training and support. This could mean anything from setting up regular training sessions to providing access to online courses, interactive tutorials, and detailed manuals. It's not just about the initial learning curve; it's about keeping the knowledge fresh and relevant.

    To foster a culture of continuous learning, encourage your team to see AI as an evolving toolset, one that offers new opportunities for growth and innovation. Promote an environment where experimentation is the norm, and learning from mistakes is valued. This approach helps to maintain a level of curiosity and enthusiasm for what AI can bring to the table.

    Remember, the key to continuous learning is collaboration and knowledge sharing. By encouraging your team members to share their experiences and insights gained from using AI tools, you create a knowledge-rich environment. Regular team discussions, workshops, or even informal chat sessions can be great platforms for this exchange of ideas.

    Not everyone is going to want to get on board. To get tech-hesitant people excited about AI, relate it to their interests and show how it simplifies work or hobbies. Demystify AI with jargon-free explanations and introduce them to easy-to-use tools through hands-on sessions. Sharing success stories of others who've overcome similar fears can motivate them. Ensure support is available for any questions, making their AI journey smooth and approachable, while focusing on its practical, real-world applications.

    To put this into action, consider scheduling a monthly 'AI day' where team members can share new findings, discuss challenges, and brainstorm on how to better integrate AI into your workflows. Think about establishing a mentorship program where more experienced team members can guide others through learning about AI. And finally, make sure you are making use of your best communicators, not just the people who are really enthusiastic about AI.

    What the Epic vs Google case means for content

    A games company and a search giant tussling over Fortnite doesn't sound like it has much to do with publishing. But it begins a process which could see publishers get more revenue.

    There has been a lot of controversy over Google and Facebook “stealing” content from publishers. Publishers have been making this accusation against Google for a long time, claiming that its search engine and news aggregator infringe their copyrights and deprive them of revenue, particularly in a world where more and more answers appear on Google’s results pages rather than pushing traffic to publishers.

    The argument is that Google is using publisher content without paying them or asking for their permission, and that this is unfair and illegal. They also complain that Google is dominating the online advertising market and squeezing out competition. Some publishers want Google to pay them for displaying snippets of their articles or linking to their websites, or to stop using their content altogether.

    Google, of course, argues that it is providing a valuable service to both publishers and users. It says that it is helping publishers reach a wider audience and drive more traffic to their websites, while also giving users quick and easy access to relevant and high-quality information. Google also points out that it respects the choices of publishers and allows them to opt out of its platforms or customize how their content is displayed.

    Gannett, the publisher of USA Today, is not convinced. It has sued Google for allegedly monopolizing the advertising technology market, arguing that Google's broad control of the ad tech market has hurt the news industry, as online readership has grown while the proportion of online ad spending taken by publishers has decreased.

    Gannett isn’t the only one. The Daily Mail alleges that Google has too much control over the online advertising market, which has resulted in newspapers seeing little of the revenue that their content produces. The lawsuit claims that Google controls the tools used to sell ad inventory, the space on publishers' pages where ads can be placed, and the exchange that decides where ads will be placed. The Daily Mail argues that this lack of competition depresses prices and reduces the amount and quality of news available to readers. The lawsuit also alleges that Google “punished” publishers who “do not submit to its practices”.

    A lot of this hinges on copyright, and what amounts to effectively extending its provisions to cover something that’s previously been considered fair use: showing snippets of publisher content to let users decide if they want to click through to the content.

    Similarly to Cory Doctorow, I would argue that extending copyright like this is foolish, and ultimately won’t benefit publishers and the rest of the creative industries. What will benefit us is removing the choke points that big tech companies like Google and Facebook own, and that world took a significant step forward this week.

    Yesterday, a federal court jury ruled in favour of Epic Games in its lawsuit against Google, marking a significant victory for the game developer. The jury found that Google's Android app store, Google Play, uses anticompetitive practices that harm consumers and software developers.

    Epic Games had accused Google of abusing its dominant position as the developer of Android to strike deals with handset makers and collect excess fees from consumers. Google collects between 15% and 30% on all digital purchases made through its shopfront. Epic tried to bypass those fees by charging users directly for purchases in the popular game Fortnite; Google then removed the game from its store, which led to the lawsuit.

    The jury agreed with Epic on every question it was asked to consider, including that Google has monopoly power in the Android app distribution markets and in-app billing services markets, that Google did anticompetitive things in those markets, and that Epic was injured by that behaviour. What the remedy will be is yet to be determined. But even if it is confined solely to Epic, it’s a step towards breaking the stranglehold that Google and Apple have on the app stores and making the 30/15% tithe they take a thing of the past.

    Why does this matter to publishers? Because it opens the possibility of publishers getting more control over the way their content is distributed to smartphones, and ultimately taking more revenue by, in effect, running their own app stores just for content.

    Being able to have an app store just for your publications would mean an immediate increase in revenue. Currently, between 15% and 30% of every in-app subscription or micropayment goes to the tech giants. If publishers could bypass this fee, they would retain a larger portion of their sales, potentially leading to significant financial gains.
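    To make the arithmetic concrete, here's a minimal sketch of what that cut costs each month. The price and subscriber numbers below are invented for illustration; only the 15% and 30% rates come from the figures above:

        # Hypothetical numbers: a £4.99/month in-app subscription with
        # 10,000 subscribers. Only the 15% and 30% platform rates are real.
        monthly_price = 4.99
        subscribers = 10_000

        for platform_cut in (0.15, 0.30):
            gross = monthly_price * subscribers
            retained = gross * (1 - platform_cut)
            print(f"At a {platform_cut:.0%} cut: gross £{gross:,.0f}, "
                  f"publisher keeps £{retained:,.0f}, "
                  f"platform takes £{gross - retained:,.0f}")

    On those made-up numbers, somewhere between £7,485 and £14,970 of a £49,900 month never reaches the publisher.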

    This, as Doctorow highlighted, is where Google and Apple have been taking money from publishers – not by “stealing content”, but by stealing revenue. And it has the added bonus of allowing publishers to get a closer relationship with their customers, something that the app store intermediary model removes. Customers won’t be buying from Apple or Google, with the platform passing on the money to you: they will be buying directly from the people who make the content.

    Of course, there are also challenges and risks associated with running an app store. These include the technical and financial resources required to develop and maintain it, and the need to ensure security and privacy for users. Publishers would also have to convince users to switch from established platforms like Google Play and the Apple App Store, which takes more investment in marketing.

    There's another downside: publishers will need to spend more on discovery, because it's unlikely that either Apple or Google would ever promote a competing app store. But that's a small price to pay for restoring the direct relationship you have with customers.

    How to get a glimpse of the post-Google future of search

    What does the search engine of the future look like? Forget 10 blue links...

    You can break down the creative process into three big chunks: research, creation and revision. What happens in each depends largely on the kinds of content you're creating, the platforms you are making the content for, and many other factors.

    Like every journalist, I spent a lot of time using search on the web to help with that research phase. I was quick off the mark with it, and I learned to adapt my queries to the kinds of phrases which delivered high-quality results in Google. My Google-fu was second to none.

    But that was the biggest point: like all nascent technologies, I had to adapt to it rather than the other way around. Google was great compared to what came before it, but it was still a dumb computer which required human knowledge to get the most out of it.

    And Google was dumb in another way too: apart from spelling mistakes, it didn't really help you refine what you were looking for based on the results you got. If you typed in “property law” you would get a mishmash of results for developers, office managers and homeowners. You would have to do another search, say for “property law homeowners”, to get an entirely different set of results that were better tailored to you.

    Google got better at using other information it knows about you (your IP address, your Google profile) to refine what it shows you. But it didn't help you form the right query: it never asked you “hey, what aspects of property law are you interested in?” and offered a list of more specific topics.

    What's more, what it “knew” about you was pretty useless. You couldn't, at any point, tell it something which would really help it give you the kinds of results you wanted. You couldn't, for example, tell it "I'm a technology journalist with a lot of experience, and I favour sources which come from established sites which mostly cover tech. I also like to get results from people who work for the companies that the query is about, so make sure you show those to me too. Oh, and I'm in the UK, so take that into account."

    Google isn't even that good any more. Partly that's down to the web itself being a much worse source of information. But that feels like a huge cop-out from a company whose mission is to “organise the world’s information and make it universally accessible and useful”. It sounds like what it is: a shrug, a way of saying that the company's technology isn't good enough to find "the good stuff".

    The search engine of the future should:

    • Be able to parse a natural language query and understand all its nuances. Remember how in the Knowledge Navigator video, our professor could ask just for “recent papers”?

    • Know not just the kind of information about you that's useful for the targeting of ads (yes Google, this is you) but also the nuances of who you are and be able to base its results on what you're likely to need.

    • Reply in natural language, including links to any sources it has used to give you answers.

    • If it's not sure about the kind of information you require, ask you for clarification: search should be a conversation (see the sketch below).
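    Here's a toy sketch, in Python, of that conversational loop. Everything in it is hypothetical: no current engine exposes an interface like this, and answer_or_clarify is a stand-in for an LLM sitting on top of a search index:

        # A toy model of search-as-a-conversation. The profile, the
        # clarifying question and the answer are all invented; a real
        # engine would generate them with an LLM over a search index.
        USER_PROFILE = ("UK-based technology journalist; prefers established "
                        "tech sites and first-party sources.")

        def answer_or_clarify(context: list[str]) -> dict:
            # Stand-in for the engine: ask one clarifying question first,
            # then answer with sources once the query has been refined.
            if not any(c.startswith("Clarification:") for c in context):
                return {"type": "clarification",
                        "question": "Which aspect of property law: homeowners, "
                                    "developers, or commercial tenants?"}
            return {"type": "answer",
                    "answer": "A natural-language summary tailored to the "
                              "refined query would go here.",
                    "sources": ["https://example.com/source-1",
                                "https://example.com/source-2"]}

        def search_conversation(query: str) -> None:
            context = [f"Profile: {USER_PROFILE}", f"Query: {query}"]
            while True:
                reply = answer_or_clarify(context)
                if reply["type"] == "clarification":
                    # The engine asks, the user refines: a conversation.
                    context.append("Clarification: " + input(reply["question"] + " "))
                else:
                    print(reply["answer"])
                    for source in reply["sources"]:  # cite everything used
                        print(" -", source)
                    return

        search_conversation("property law")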

    For the past few weeks, I've been using Perplexity as my main search engine. And it comes about as close as is currently possible to that ideal search engine. If you create content of any kind, you should take a look at it.

    Perplexity AI allows users to pose questions directly and receive concise, accurate answers backed up by a curated set of sources. It's an “answer engine” powered by large language models (including both OpenAI's GPT-4 and Anthropic's Claude 2). The technology behind Perplexity AI involves an internal web browser that performs the user's query in the background using Bing, then feeds the obtained information to the AI model to generate a response.

    Basically, it uses an LLM-based model to create a prompt for a conventional search engine, does the search, finds answers and summarises what it's found in natural language, with links back to sources. But it also has a system it calls (confusingly) Copilot, which provides a more interactive and personalised search experience. It leverages OpenAI's GPT-4 model to guide users through their search process with interactive inputs, leading to more accurate and comprehensive responses.
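    Stripped of branding, the pipeline described above has a simple shape, sketched below. The function names are mine and every step is stubbed out, since Perplexity hasn't published its implementation; this is just the architecture as described:

        # The three-step "answer engine" shape: rewrite the question,
        # run a conventional search, then summarise with citations.
        def rewrite_as_search_query(question: str) -> str:
            # Step 1: an LLM turns a natural-language question into a
            # query a conventional search engine handles well. Stubbed.
            return question

        def web_search(query: str) -> list[dict]:
            # Step 2: run the query against a search index (Bing, in
            # Perplexity's case). Canned results stand in for real ones.
            return [{"url": "https://example.com/a", "snippet": "..."},
                    {"url": "https://example.com/b", "snippet": "..."}]

        def summarise_with_citations(question: str, results: list[dict]) -> str:
            # Step 3: feed the snippets back to the LLM and ask for a
            # natural-language answer with links to its sources. Stubbed.
            sources = ", ".join(r["url"] for r in results)
            return f"Answer to {question!r}, synthesised from: {sources}"

        def answer_engine(question: str) -> str:
            return summarise_with_citations(
                question, web_search(rewrite_as_search_query(question)))

        print(answer_engine("How is deforestation in the Amazon changing?"))

    Copilot, in effect, wraps a clarification loop around step one before the search is run.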

    Copilot is particularly useful for researching complex topics. It can go back and forth on the specific information users need before curating answers with links to websites and Wolfram Alpha data. It also has a strong summarisation ability and can sift through large texts to find the right answers to a user's question.

    This kind of back-and-forth is obviously costly (especially as Copilot queries use GPT-4 rather than the cheaper GPT-3.5). To manage demand and the cost of accessing the advanced GPT-4 model, Perplexity AI limits users to five Copilot queries every four hours, or 600 a day for paying “Pro” users.

    If you're not using Perplexity for research, I would strongly recommend giving it a go. And if you work for Google, get on the phone to Larry and tell him your company might need to spend a lot of money to buy Perplexity.

    Latenote, Monday 27th November 2023

    Between getting ridiculously excited about the goings-on at OpenAI, I didn't get a lot of writing done this week. There are definitely times when too much is going on in the tech world, and my old habits die hard: I have to keep up with it all.

    I wrote a post on Substack with my take on it, from the perspective of the longer-term impact on creative professionals. And, given how fast things were moving, I ended up rewriting it three times. That was a good reminder not to cover breaking news in that newsletter!

    In case you're interested, the focus of that newsletter is the three-to-five year perspective on how technology will impact on what we occasionally call “the creative industries”. That includes magazine publishing, of course, but also writing and creativity more broadly. Hopefully, it should be interesting.

    On Sunday, we went out with the wonderful and super-clever Deb Chachra, who has just published her book How infrastructure works (and there's a great review of it here if you are interested). We tempted Deb out of London on a trip to Dungeness, which has both Derek Jarman's cottage and Dungeness A and B nuclear reactors. What's not to like about art and infrastructure?

    And more art on Sunday night, as we went down to Folkestone for a talk by the brilliant and wise Jeremy Deller. If you don't know Deller's work, honestly, where have you been for the last 20 years? This is the third time we have done something Deller-related this year, having seen him before in London and also seen Acid Brass. 2023: Year of the Deller.

    The three things which most caught my attention

    1. Commiserations to my old comrades in SEO, who are dealing with some pretty turbulent times. I promise that I didn't sabotage Google.
    2. Bill Gates wrote a long post about the way AI is going to change the way you use computers. Gates is right – large language models are just the precursor to what might look from some angles like the end of application software altogether.
    3. Bloomberg looked at the way Elon Musk has been radicalised by social media, adopting a world-view that's completely in thrall to what we would have called the alt-right not that long ago.

    Things I have been writing

    There were three… no, actually four drafts of my post about what was going on at OpenAI and why you should care. I am never writing essays about breaking news again.

    To give myself a break from all things Orford, I picked up a short story that I had left to one side, about a very strange doctor. Might finish that this week.

    What the heck is going on at OpenAI (and why should I care?)

    Confused? You should be. I'm deliberately not looking at Techmeme so I don't have to update this post for the fifth time.

    Twenty-four hours ago, this was a thoroughly different post. Heck, twelve hours ago, it was a different post.

    One of the things I told myself when making this Substack was that I wouldn’t focus on current events. My focus is on the longer term: the three-to-five-year time frame, for publishers, communications professionals and other assorted nerds.

    But the shenanigans at OpenAI over the weekend suckered me in, and now I have had to rewrite this post three times (and whatever I write will probably be wrong tomorrow). Still, the drama goes on.

    The drama that’s been happening at OpenAI does matter and might be a turning point in how AI and large language models develop over the coming years. It has some implications for Google – which means it is relevant for publisher traffic – and Microsoft – which means it is significant for the business processes which keep everything flowing.

    What’s happened at OpenAI?

    If you’ve not been following the story, here’s a timeline created by Perplexity (about which I will have more to say in the future). But the basics are that OpenAI’s board dismissed Sam Altman, its co-founder and CEO, alleging he had been less than truthful with them. Half the company then decided they wanted to leave. Microsoft’s Satya Nadella then claimed Altman would be joining his company, only to walk that back later in the day. Now Altman is going back to OpenAI as CEO, but not on the board, and there will be an “independent investigation” into what went on, something that might not totally exonerate Altman.

    Confused? You should be. Everyone else is. Partly this drama comes down to the unusual structure of OpenAI, which at its heart is a non-profit company that doesn’t really give two hoots about growth or profits or any of the things most companies do. Partly it’s down to Altman basically pushing ahead as if this wasn’t true, then realising too late that it was.

    What’s the long-term impact on future AI development?

    OpenAI has been at the forefront of developing the kind of conversational large language models which everyone now thinks of as “AI”. It’s fair to say that before the June 2020 launch of GPT-3, LLMs were mostly of interest to academic researchers rather than publishers.

    And a huge number of tools have been built on top of OpenAI’s technology. By 2021 there were over 300 tools using GPT, and that number has almost certainly gone up an order of magnitude since. And of course, Microsoft is building OpenAI tech into everything across its whole stack, from developer tools to business apps to data analysis.

    If there’s one company that you don’t want to start acting like a rogue chatbot having a hallucination, it’s OpenAI.

    And yet, because of Microsoft’s investment in the company and commitment to AI, it probably matters a lot less than it would have if this schism had happened three or four years ago. The $13bn it has put in since 2019, for an estimated 49% stake in the company, and the fact it is integrating OpenAI tech into everything it does, mean it has a lot to lose (and Satya Nadella does not like losing).

    Because of this, I think the past few days won’t have much impact on the longer-term future of AI. In fact, it could end up being a good thing, as it has shown that Microsoft will step in should OpenAI start to slip.

    The greatest challenge for Microsoft was that, although it had perpetual licenses to OpenAI’s models and code, it didn’t own the tech outright, and it didn’t have the people in house. And, when you’re betting your company’s future on a technology, you’re always in a better position if you own what you need (something that publishers should take note of).

    Partners are great, but if you’re locked into a single partner, and they have what you require, you’re never going to be the driver of your fate. Now, though, if Altman and the gang join, Microsoft effectively owns all it needs to do whatever it wants. It has the team. It has the intellectual property. Everything runs, and will continue to run, on Azure, and it has the financial muscle to invest in the huge amount of hardware required to make it available to more businesses.

    The big question for me is how all this impacts on Google over the next few years. If Altman and half of OpenAI end up joining Microsoft, I think it weakens Google substantially: at that point, Microsoft owns everything it needs to power ahead with AI in all its products, and the more Microsoft integrates AI, the stronger a competitor it will be.

    If, on the other hand, Altman goes back to OpenAI with more of a free hand to push the technology further and harder, Microsoft still benefits through its partnership, but to a lesser degree.

    If I was running Google, I would be calling Aravind Srinivas and asking how much it would take to buy Perplexity. But that’s another story, maybe for next week.

    "Journalism is picking up the phone"

    Remembering the craft and process of original reporting can help build a loyal audience.

    So far this week, I have looked at a couple of strategies for creating stand-out content over the coming years: hands-on reviews and real-life stories. There is a third area, and in a sense it’s about going back to the future and focusing on something that never truly went out of fashion: original reporting.

    Back in 2008, my reserve arch enemy Danny O’Brien and I were debating what the difference was between blogging and “proper” journalism, and Danny ended up liking one of the ways I put it: that “journalism is when you pick up the phone”. Even then, that didn’t mean a literal phone – email was the hot communications thing. But it meant, as Danny put it, “journalism requires some actual original research, rather than just randomly googling or getting emailed something and writing it up as news.”

    That’s the core of original reporting, and as Danny also pointed out, a great deal of what passes as editorial doesn’t meet that standard (opinion columnists of the UK media, stop looking so shifty).  

    Original reporting in any topic area is about uncovering truths, providing context, and delivering stories that matter to audiences. AI, while adept at aggregating and rephrasing existing information, lacks the ability to conduct investigative journalism, engage in ethical decision-making, and provide the human empathy that is often central to impactful storytelling. I would consider myself broadly an optimist about the developing capabilities of AI, and even I don’t think it’s likely to be able to do this in my lifetime.  

    And “picking up the phone” is definitely having something of a renaissance. Take, for example, the series that The Verge is currently working on under the label of “we only get one planet”. Digging into how Apple and others add to the mountain of e-waste while claiming to be on top of their environmental efforts takes a lot of work, and importantly, original research and interviews. The Verge might not be physically picking up the phone, but they’re more than living up to the spirit.  

    Obviously, investing in original reporting is expensive, and it can’t just be a moral imperative. It has to be a sound business strategy, too. First, audiences appreciate its value. According to a 2019 Pew Research survey, “about seven-in-ten U.S. adults (71%) say it is very important for journalists to do their own reporting, rather than relying on sources that do not do their own reporting, such as aggregators or social media. Another 22% say this is somewhat important, while just 6% say it is not too or not at all important.”

    Original reporting can elevate a publisher's brand reputation and recognition, which can be a key to unlocking more direct traffic. In a saturated market, having a distinct journalistic voice and a reputation for in-depth reporting can be a significant differentiator.

    Publications like The New York Times and The Guardian have successfully leveraged their reputations for quality journalism to build robust subscription or contribution-based revenue models, with The Guardian hitting record annual revenue this year. And, importantly for its long-term profitability, nearly half its traffic is direct (and its biggest search terms are branded ones).

    One thing that’s worth noting: The Guardian’s strategy was a three-year plan. Do you have a three-year plan to diversify revenue, have a more direct relationship with your audience, and leave yourself less vulnerable to the whims of Google or Facebook?


    Telling human stories: where AI ends and people begin

    The second area where humans can do a better job than an LLM: real life storytelling

    One of the best parts of my last year working at Bauer was getting to know the team which works on real life content. Real life, sometimes called true life or reader stories, focuses on stories derived from ordinary people caught up in extraordinary events – usually not the national news, but their own personal dramas.

    There are many magazines whose focus is entirely real life, and you will have seen them on many supermarket shelves with multiple cover lines, often focused on shocking stories. But the key part about them, and the thing which differentiates them from tabloids, is that the stories are those told by the people involved in the drama. It's not third-person reporting: it is focused on first-person experience.

    And now a confession: before I worked with that team, and I suspect like many journalists, my view of real life wasn't all that positive. I considered it to be cheap, and pretty low-end.

    How wrong I was.

    I worked with the team creating the content to implement a new planning system, which needed to capture every part of their story creation process. What I learned was how thorough their process is, and how much human care and attention they had to take when telling what were sometimes traumatic stories, working directly with the subject.

    I don't think I have ever worked with a team that had a more thorough legal and fact-checking process, and I came away a bit awed by them. I ended up thinking that if all journalists operated with their level of professionalism and standards, the industry would be in a much better place.

    Bringing the human into the story

    Where does AI come into this? I talked earlier this week about how injecting more of a human, emotional element into reviews was a way to make them stand out in a field that AI is going to disrupt. Real life is a perfect example of a topic where it's difficult to ever see a large language model (LLM) being able to create the story.

    An LLM can't do an interview, and because of the incredible sensitivity of the stories being told, I wouldn't trust a machine to write even a first draft of it. But there are aspects of the way that real life content is created which, I believe, can give lessons to every kind of journalism.

    First, whatever your topic area, telling the human story is always going to be something that humans do better than machines. Build emotion and empathy into telling a personal story, rather than relating just the facts. That doesn’t just mean technique: yes, use emotional arcs, and yes, show don’t tell, but technique alone won’t bring across the way that the subject felt when going through whatever event they are describing.

    On a three-to-five-year timescale, I would be looking to shift human journalists into telling more of these kinds of stories, regardless of your topic area. Remember that humans are empathic storytellers, and focus on the emotion of the story. So, think about how you can change your content strategy to be more focused on the human story.

    The process is the practice

    Don't, though, be tempted to work on these kinds of stories with an ad hoc process. Process is important in journalism – but it is crucial if you want to do real life stories well.

    To do this well, make sure you codify and document the process to a high level. Journalists often push back on documenting processes because it's seen as stifling creativity, but that's not true at all. In fact, a documented process frees up time to focus on creative tasks, rather than reinventing the wheel with every story.

    And that is where you can start to think about how to use LLMs to streamline your processes and make them move faster. But this is a business process problem, rather than a creative one.

    For example, if your pitching process involves creating a summary of a story, an LLM can write the summary – there's no need to waste a human's time to do it. Can you write a specialist GPT to check if a story has been used before? Can you use an LLM to query your content management system for similar stories you may have run in the past?
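    As a sketch of what those two tasks might look like in practice, assuming a generic chat-completion function and an embedding model (both stubbed here, since nothing above implies a particular vendor or CMS):

        # Two editorial process tasks: pitch summaries, and "have we run
        # this before?" checks. llm() and embed() are stand-ins for
        # whatever model API you actually use.
        def llm(prompt: str) -> str:
            return "..."  # stub: call your chat-completion API here

        def embed(text: str) -> list[float]:
            return [0.0]  # stub: call your embedding API here

        def summarise_pitch(draft: str) -> str:
            # Let the model draft the pitch summary instead of a human.
            return llm("Summarise this story in three sentences for a "
                       "pitch meeting:\n" + draft)

        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            norms = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
            return dot / norms if norms else 0.0

        def similar_past_stories(draft: str, archive: dict[str, list[float]],
                                 top_n: int = 5) -> list[str]:
            # Compare the new story against pre-embedded archive headlines
            # to flag anything similar you may have run before.
            query = embed(draft)
            return sorted(archive, key=lambda h: cosine(query, archive[h]),
                          reverse=True)[:top_n]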

    If you are thinking about how to be a successful publisher in three to five years, you need to be looking at the process. If it's not documented – in detail – then make sure that's done. That can't be a one-off because a process is never a single entity fixed for all time. New technologies and new more efficient practices will come along, and someone needs to be responsible within your organisation for it.

    So, ask yourself some questions:

    • Who, in my company, is directly responsible for documenting and maintaining our editorial and audience development processes?

    • Where are they documented?

    • How often are they maintained?

    • Are they transparent? Does everyone know where they are?

    Once you have a fully documented process, you can start to interrogate it for points where AI can be used to speed things up, where using natural language queries to a specialist model can improve the work. That way, you can leave humans to do the work they're best at: emotion, and storytelling.

    What kinds of content can humans do better than AI?

    Sometimes, you just need the human touch...

    What kinds of content can humans do better than AI? The last few posts here have, I have to admit, been a bit of doom and gloom. I’ve looked at how conversational AI is going to squeeze search traffic to publisher sites, and at how adopting AI for content generation will remove the key competitive advantage of publishers. 

    But there are areas of content creation where publishers can use their ability to do things at scale and the talent they have to make great work that audiences will love.

    I’ve broken this post out into three parts, covering three different kinds of content. Today, I’m going to look at one which is close to my heart: reviews. Tomorrow and Thursday I’ll look at two other examples where humans can win.

    Doing reviews right

    One of the points that I made last week was that affiliate content, in particular, was susceptible to the shift to conversational ways of working with computers. However, that doesn’t mean that reviews are going to disappear. Certain types of article are likely to remain an area where humans will continue to produce better content for other humans for the foreseeable future.

    For many sites, creating content for affiliate purposes has involved a lot of round-up articles, often created at least in part with what gets called “desk-based research”. You are not reviewing a product you have in your hand; you are researching everything about it that a consumer could possibly need to know, and summarising it helpfully.

    I’ve sometimes argued this was OK in certain circumstances, as long as you flag it and the amount of work that goes into the article is high. Just casting around for whatever is top-rated on Amazon doesn’t cut it because a reader can do that quickly themselves. But if you’re saving someone hours of time in research, you’re still performing a valuable service for them.

    That kind of content isn’t going to survive the increased use of conversational AI because one thing that LLMs will be excellent at is ingesting lots of data and combining it into a cogent recommendation. LLMs can read every piece of Amazon feedback, every spec sheet and every piece of manufacturer data faster and more accurately than any human can. If your content is just research, it’s not going to be viable in the world of AI.

    What will work is direct first-person experience of the product, written to focus on the less tangible things about it. An LLM can read a car spec sheet and tell you about its torque, but it can't tell you how it feels to accelerate hard out of a corner. An LLM can look at a spec sheet for a laptop, but it can't tell you how good the keyboard is to type on for extended periods.

    If your editorial teams are focused on what I used to call “speeds, feeds and data” then part of your approach should be to shake up the way they write to get them closer to a more personal perspective. One way to do this is to change style.

    Back when we launched Alphr at Dennis, one of the first changes I made to editorial style was to stop using the traditional UK tech plural in reviews (“we tested this and found blah”) and shift to first person (“I tested this and found blah”). Shifting into first person forces the writer into a more subjectively human perspective on the product they're looking at. It frees the writer from an overly objective point of view into a more personal experience, and that is something which will survive the world of LLMs. Don’t just say what the specs are: say what it feels like, as a human being, to use this product.

    Tomorrow, I’m going to look at the second area I think is a clear “win” for human-generated content: the often maligned area of real life stories.

    What a 36 year old video can tell us about the future of publishing

    The future is arriving a little later than expected...

    I have had the best life. Back in 1989, I left polytechnic with my first class honours degree in humanities (philosophy and astronomy) and walked into the kind of job which graduates back in the 80s just didn't get: a year-long internship with Apple Computer UK, working in the Information Systems and Technology team – the mighty IS&T.

    It paid a lot better than my friends were getting working in record shops. And although it was only temporary – I was heading back into higher education to do a PhD in philosophy, working on AI – it suited me. Without it, I wouldn't have had my later career in technology journalism. The ability to take apart pretty much any Mac you cared to name became very useful later on.

    Apple treated new interns the same as every other new employee, which meant that there was an off-site induction for a couple of days when we were told about the past, present, and future of Apple. The only part of the induction that I remember is the future because that was when I first saw the Knowledge Navigator video.

    If you haven't seen Knowledge Navigator, you should watch it now.

    Why is a 36-year-old concept video relevant now, and what does it have to do with publishing? The vision of how humans and computers interact which Knowledge Navigator puts forward is finally on the cusp of coming true. And that has profound implications for how we find information, which in turn affects publishers.

    There are three elements of the way Knowledge Navigator works which, I think, are most interesting: conversational interaction; querying information, not directing to pages; and the AI as proactive assistant. I'm going to look at the first one: interaction as conversation, and how close we are to it.

    Interaction as conversation

    The interaction model in Knowledge Navigator is conversational. Our lecturer talks to the AI as if it were a real person, and the interaction between them is two-way.

    Lecturer: “Let me see the lecture notes from last semester… Mhmm, no, that's not enough. I need to review the more recent literature. Pull up all the new articles I haven't read.”

    Knowledge Navigator: "Journal articles only?”

    Lecturer: "uhh… fine.”

    Note one big difference with the current state of the art in large language models: Knowledge Navigator is proactive, while our current models are largely reactive. Bing Chat responds to questions, but it doesn't ask me to clarify my queries if it isn't certain about what I'm asking for… yet.

    That aside, the way conversation happens between our lecturer and his intelligent agent is remarkably similar to what you can do with Bing Chat or Bard now. The “lecture notes from last semester” is a query about local data, which both Microsoft and Google are focused on for their business software, Microsoft 365 and Google Workspace. The external search for journal articles is the equivalent of interrogating Bing or Bard about a topic.

    In fact, Bing already does a pretty good job here. I put a question similar to our lecturer's, about deforestation in the Amazon, to see how it would do:

    Not bad, eh?

    The publishing model of information – the one which makes publishers all their money – is largely not interactive. The interaction happens at Google's end, not the publisher's. Our current model looks like this:

    1. A person interacts with Google, making a query.

    2. They click through to a result on the page which (hopefully) gives them an answer.

    3. If they want to refine their query, they go back to Google and repeat the process – potentially going to another page.

    Interaction as conversation changes this dynamic completely, as an “intelligent” search engine gives the person the answer and then allows them to refine and converse about that query immediately – without going to another page.

    Have a look at this conversation with Bard, where I am asking for a recommendation for a 14in laptop:

    OK, that sounds good. Now let's drill down a little more. I want one which is light and has a good battery life:

    That ZenBook sounds good: so who is offering a good deal?

    By contrast, a standard article of the kind which publishers have been pumping out to capitalise on affiliate revenue (keyword: “best 14in laptop”) is a much worse experience for users.

    And at the end of that conversation with Bard, I'm going to go direct to one of those retailers, with no publisher involvement required.

    If that isn't making you worry about your affiliate revenue, it should be.

    The model of finding information which search uses, based on queries and a list of suggested results, is pretty well embedded in the way people use the internet. That's particularly true for those who grew up with the web, now aged between 30 and 60. It may take time for this group to move away from wanting pages to wanting AI-driven conversations which lead to answers. But sooner or later, they will move. And younger demographics will move faster.

    That, of course, assumes that Google will leave the choice to users. Google may instead decide it wants more time with “its” users and put more AI-derived answers directly at the top of searches, in the same way that Microsoft has with Bing. Do a keyword search on Bing, and you already get a prompt to have a conversation with an AI at the top of your results.

    Once again, the best option for publishers is to begin the switch away from a content strategy which relies on Google search and on the kinds of answer-focused keywords most susceptible to replacement by AI, and towards content strategies which build a direct audience and a long-term brand relationship.

    Treat search traffic as a cash cow, to be milked for as long as possible before it eventually collapses. In the world of the Knowledge Navigator, there's not going to be much room for simple web pages built around a single answer.

    AI content: Publishers' next burning platform moment

    LLMs remove a key competitive advantage of publishers. You need to find a new one.

    It still surprises me that I’m old enough to have been part of the transition from print publishing to digital, but what surprises me more is that publishers are again making some of the same mistakes they made in that early internet era. This time it’s about the use of large language models to generate content, and the mistakes are even being made by digital natives.

    A little bit of history is probably useful here. Back in the mid to late 1990s, many publishers saw online content in terms of its ability to reduce their costs. Paper, printing and distribution of physical magazines were expensive. Publishing content online, though, was basically free. This, the theory went, would allow publishers to cut those costs and make more money.

    What most publishers didn’t understand was that the high costs of production associated with print were their main advantage because they acted as a barrier to entry for new competitors. Starting a magazine was hard: you not only had to have enough capital to print and distribute the thing, you also needed access to news-stand distribution, which in the UK meant working with big distributors who had to be persuaded to stock you. You needed a sales team to sell enough advertising to support it, and they needed contact books thick enough to get their feet in the doors. Magazine publishing was expensive, and only large publishers were able to get it done at scale.

    Within a few years, though, anyone could publish online, and all those competitive advantages disappeared. You could publish easily using platforms like Blogger, WordPress, or even Myspace. You could get ad revenue from systems like Google Ads, without a sales team of any sort. Not only that, but you could get your content seen via Google search and social platforms.

    It took publishers a long time to realise that the old barriers to entry no longer protected them. Some publishers still act like they think they do, and so appear consistently dazzled when a new platform comes along and makes individuals who take advantage of it into millionaires. TikTok is the latest, but it’s by no means the first. Online was a burning platform moment for publishers, and some of them took far too long to see it.

    The next burning platform

    The ability of large language models (LLMs) like ChatGPT to create content is, of course, being seized on by publishers who see it as a method of creating editorial content without having to pay anyone to do it – or, at least, by paying fewer people to do it (and probably cheaper ones too – that was another outcome of the move from print to digital). If you’re a publisher reading that and shaking your head, thinking “well, that’s not what we’re doing”, I am going to give you a small monkey side-eye, because we all know that if you’re not thinking that way, your CFO probably is.

    There’s nothing wrong with using new technology to reduce costs, as long as you retain your competitive advantage. And here’s where things are difficult for publishers, because what LLMs do is similar to what happened with web publishing in the 1990s: they remove the competitive advantage of publishers in the creation of content, just as the web removed their advantage in publishing and distributing it. It is the next step in the democratisation of publishing.

    In the early internet publishing era, anyone could create any content and put it online, but to be successful they needed the expertise to write the content in the first place. That’s why niches like technology publishing were impacted early and heavily: there was plenty of expertise out there, and suddenly those experts could create content directly, without an intermediary.

    Now, thanks to LLMs, anyone in the proverbial bedroom can create swathes of “good enough” content on any topic they want. They can churn out hundreds of average pieces about anything, just by taking a list of the most popular search queries in that topic as their starting point. They’re not flawless, but they’re good enough, particularly to answer the kinds of search queries which publishers have used to generate traffic at scale.

    This is why, for publishers, AI content creation is another burning platform moment. Combine it with the move towards providing more answers directly on search pages, and you have a one-two punch to publisher traffic which Mike Tyson would be proud of.

    Of course, publishers can use LLMs too. But, as with early internet publishing, their size means they can neither move fast nor with low enough fixed costs to make it work. If a proverbial 16-year-old can create an article with ChatGPT on “10 things you didn’t know about Mila Kunis” at the same speed as a celebrity magazine, at the same quality, the magazine loses even if it has used technology to eliminate roles and cut its costs. Because, unlike our 16-year-old, it has big fixed costs: offices, equipment, pensions, you name it. And it has margins to protect because the stock market expects to see revenue growth every year.

    Regaining competitive advantage

    So what can publishers do to retain their competitive advantage? There really is no point in trying to pretend that the AI genie doesn’t exist, in the same way that publishers couldn’t pretend in the 90s that people would just carry on buying huge volumes of print.

    Nor will legal claims aimed at the likes of OpenAI, Google and Microsoft succeed. Yes, your content has been scraped to create the language models in the first place. But given the result in Authors Guild v. Google, I expect courts to hold that this kind of use is transformative, and therefore fair use. Either way, it will be tied up in the legal system for far too long to make a difference.

    Some have suggested that the way forward will be private large language models built solely using the corpus of text publishers hold. There are a few issues with this, but the biggest one is simply that the horse has bolted. OpenAI, Google and others have already trained their models on everything you have published online to date. They probably even have access to content which you no longer have. How many redirects of old, outdated content do you have in place where the original no longer exists? How many of your articles now only exist in the Wayback Machine?

    Instead, the only option for publishers is to focus on creating content of a higher quality than any current LLM. You cannot gain competitive advantage at the cheap, low-cost end of the market. Trying to do so will not only make you vulnerable to anyone else with the same tools (at $20 a month) but also devalue your brand over the long term.

    Creating higher quality content means employing people, which is why that urge to use LLMs to replace your editorial teams will actually undermine the ability of publishers to survive. Putting that cost saving towards your bottom line today is a guarantee that you will be out-competed and lose revenue in the future.

    So what can you do with LLMs? The most important thing is that LLMs can be used as a tool to amplify the creativity and ability of editorial teams. They are most useful as what Steve Jobs used to call “a bicycle for the mind”: an amplifier for human creativity. An LLM can give you a starting point, suggest an outline on any topic, or rewrite a headline 100 times using the word “crow” – and it never gets tired of doing so.
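
    To make that concrete, here is a toy sketch of the headline example, assuming the same OpenAI-style chat client as before – the model name, prompt and sample headline are all mine, purely illustrative:

        from openai import OpenAI

        client = OpenAI()

        def headline_variants(headline: str, word: str, n: int = 10) -> list[str]:
            """Ask the model for n rewrites of a headline using a given word."""
            response = client.chat.completions.create(
                model="gpt-4o",
                temperature=1.0,  # higher temperature, more varied suggestions
                messages=[{
                    "role": "user",
                    "content": f"Rewrite this headline {n} times, one per line, "
                               f"using the word '{word}': {headline}",
                }],
            )
            return response.choices[0].message.content.splitlines()

        # The editor, not the model, picks the winner: the tool amplifies
        # judgement rather than replacing it.
        for variant in headline_variants("Local farmers fight back", "crow"):
            print(variant)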

    If you’re a publisher, you probably still have decades worth of experience, context, contacts and knowledge of audiences in your editorial teams. Train them on how to use LLMs to amplify their creativity (and if you want some help with that, email me!)

    You’re going to have to change your content strategy to adapt to the new world of falling Google traffic anyway. LLMs should be seen as a chance to exit the market for low-quality, high-volume content.

    SEO will be over for publishers. You need to adapt.

    Position one for a query is no longer close to enough

    I don't know of a single person in publishing who doesn't believe that large language models (LLMs) are going to have a profound impact on the industry. But most of the attention has been on using them to create content, something which many publishers see as a way of increasing efficiency (by which they usually mean reducing expensive headcount).

    Whether that is actually possible or desirable is a topic for another time, but what I want to focus on is the other side of AI: what its adoption by Google is going to do to the traffic to publisher sites, and how we should be changing our content strategies to respond.

    Google's large language models

    It's worth starting by being clear about how Google is using LLMs. The company has two products which use large language models to deliver results for users. The first, and probably the best known, is Bard, which is similar to ChatGPT in that it uses a conversational interface where users ask questions or give prompts in natural language, and the program responds.

    The second – and the one which, I think, should be most concerning to publishers – is Search Generative Experience (SGE). SGE is currently in the experimental stage, but will ultimately deliver answers, generated by Google's large language model, directly at the top of search results.

    As you can see from the example, SGE takes up a lot of real estate in the query result, and delivers a complete answer based on what Google “knows”. Although it gives citations, there is no need to click on them if all you want is the answer to a query.

    How this affects publishers

    Obviously, anything which sits at the top of search results is going to reduce the amount of traffic which clicks through to publisher sites underneath. And this is potentially worse than anything we have seen before: if the answer to the query is given on Google's page, why would anyone bother to scroll down and click through?

    This means the much-fought-over positions one to three will be much less effective than ever before, and there will be a big decline in publisher traffic.

    The impact on different kinds of content

    It is likely that some kinds of content will be impacted more than others. Answers to questions are an obvious category: in 2017 they accounted for 8% of searches. That share is likely to have grown already, and to grow still further as users get used to being able to ask machines questions and get good-quality, tailored answers.

    But in its article on SGE, Google highlights a second area where publishers are likely to see a major impact: shopping. Many publishers have put significant effort into creating content focused on affiliate revenue, with some seeing affiliate overtaking advertising as a source of revenue. Affiliate content is almost always designed to capture traffic via search, for the simple reason that buying products usually starts with a Google search. An SGE-driven shopping search experience will ultimately bypass publishers and drive traffic direct to the retailer, with the AI making individually tailored recommendations on what to buy.

    This threatens to be disastrous for publishers. Effectively, SGE delivers a one-two punch of reduced traffic as more search queries are answered on the results page, plus reduced traffic to and revenue from affiliate pages.

    What publishers should do

    SGE is currently in the experimental stage, which means publishers shouldn't see any significant impact for now. But there is a clear direction here: more answers to search queries will be delivered without any click-through to publishers. And product shopping queries are going to become something which Google channels to retailers (who, by complete coincidence, are also advertisers) rather than publishers (who, by and large, are not).

    I estimate that publishers have a window of between three and five years to change content strategies to adapt to this new world, depending on the speed of user adoption. It could be faster: much will depend on how quickly Google's LLM work starts to move from an experiment to delivering real results.

    The long-term answer for publishers is to reduce exposure to Google as a source of traffic. That's going to be tough: almost every site I have worked on relied on Google for between 60% and 90% of its traffic. And the more the site was focused on affiliate revenue and e-commerce, the higher that percentage was.

    The answer is to focus on increasing your level of direct traffic, making your site a destination for content rather than something users hit once and bounce away from. Learn lessons from marketing: treat every piece of content you create as an opportunity to deepen your relationship with your audience.

    There are five things I would recommend publishers start doing today:

    1. Refocus your KPIs and OKRs to be about deepening relationships, not just traffic. Focus on repeat visits and sign-ups. Look to increase the number of qualified email addresses you have (and whatever you do, don't succumb to the temptation to capture more data. If you deliver value, you will capture more over time -- but all you need now is a person's email address).

    2. Reevaluate your search strategy and focus on topics with complexity. The more complex the content, the higher its quality, the less likely it is that an LLM can deliver a good quality version of it. Expertise and depth will be essential, and complex topic areas might be the “last person standing” when it comes to Google searches which work for publishers.

    3. If you have three to five year revenue forecasts, ramp affiliate revenue down over time rather than predicting growth. The era of affiliate revenue as a major contributor will be over. Use the revenue you are getting from it to bootstrap other areas.

    4. Heavily invest in newsletters. And whatever you do, don't consider them to be a place for advertising. Nothing creeps users out more than thinking they are signing up for interesting content only to find it chock-full of ads or sponsored content.

    5. Don't think that AI-generated content is going to “save” you. Many publishers are looking at content created by LLMs as a way of lowering costs. It will. But it will also put you out of business. Remember that any content you can create with an LLM can be done better by Google at the top of its results pages. What publishers have in their favour is human talent, creativity and expertise. The more you lose that by trying to use LLMs to cut costs, the smaller your competitive advantage.

    Next week I will return to that last topic, and look at the mirage of LLM content and why it's a death-trap for publishers.

    Weeknote, Sunday 22 October 2023

    It's been a while...

    It’s been a while. I have missed the last couple of weeks not because I was too busy to write, but almost the opposite: I have felt like nothing much has happened.

    Of course, that isn’t true. It’s never really true that nothing is happening in your life, but when you’re not working, what tends to happen is that the days elide into each other. The rhythm of most people’s life is work, or child-rearing, or the climbing frame of domesticity which they have erected around their time.

    I haven’t really yet cultivated that. I have had no work to do other than to make myself get up and write something every day. We have no children to depend on our timekeeping. And keeping house has never been a routine for either of us.

    This weekend marked three months since I last had to get up in the morning, do eight hours of work, and sign off from Teams. I can't say I haven't enjoyed it. Having nothing to do, no one relying on your input to get on with their lives, is something I can recommend to anyone who wants to avoid waking up one day and asking “what the hell happened to me?” It provides that thing we most lack as we dance busily through life: perspective.

    So, what new perspective on my life have I found? First, that I have a kind of pastoral radicalism, a communism-not-Marxism which believes in the collective good. That sounds abstract, but I think it’s important. It’s a deep and abiding value, and we live in an age when values are used as a debased common currency, but in actuality are as ephemeral and short-lived as muons, decaying quickly into more stable and entrenched positions.

    The second thing I have come to understand is how deeply rooted impostor syndrome is in my life. I have always spent time denying my role in what I have achieved (at one point, one of my managers made “blowing my trumpet” a goal for the year because of my habit of deflecting praise). Because of this, I am not kind to myself in any meaningful way. Being forced to just stop has allowed me to start the process of letting some of this go.

    The act of writing can be both an antidote to and a trigger for impostor syndrome. Writers crave the validation of an audience because it’s the one moment when the feelings of fraudulence are pushed into the shadows. But the fear of not living up to expectations, of having no originality, of creating nothing of value, is also right there, all the time.

    I have thought a lot about this over the past couple of days. We were away, first in Hastings (Kim was teaching a life drawing class there) and then Eastbourne, seeing the Turner Prize show. If you get the chance, go: Rory Pilgrim's Rafts made me cry, as did Barbara Walker's work. It reminded me that art is emotion, and that I really do have to tap into my emotions to make mine work. More of that, I suspect, over the coming months.

    Meanwhile, at some point I will have to actually get some kind of income or other. I have a few more months when I don’t need to work, but at some point money will once again become a thing of concern, rather than an abstraction which I can deal with later. One learning about money: I need much less of it than I would have thought a few months ago. Debt, it turns out, robs you of your freedom quite effectively, because you have to earn more than you need in order to pay back someone for the time when you couldn’t earn all that you required. I’m free of debt now, and that feels like an unshackling.

    Things I have been reading this week

    I finished Gary Gibson’s Europa Deep in two gluttonous sittings. It’s a neat, tidy and highly enjoyable hard SF story, and it reminded me how much of the SF genre is currently playing with the tropes of thrillers and crime drama. I need to think a bit more about this because somewhere in the race to make SF adhere to the structures, tropes and pacing of the thriller, something – quite a lot – is lost.

    Reading Hilary Mantel’s A memoir of my former self feels like a delightful indulgence. It’s a collection of Mantel’s extensive back-catalogue of non-fiction, created because she developed the habit early in her career of writing for newspapers, periodicals, and magazines as well as books. It wasn’t really for fun: it was a survival mechanism because writing fiction (then as now) was not really enough to live on, at least until you become the kind of celebrated and storied writer Mantel grew to be.

    I’m glad she had to do it because she applied her mind to it and the results are spectacular. In the first piece, “On the one hand”, she writes about the difference between fiction and journalism:

    Fiction isn't made by scraping the bones of topicality for the last shreds and sinews, to be processed into mechanically recovered prose. Like journalism, it deals in ideas as well as facts, but also in metaphors, symbols and myths. It multiplies ambiguity. It's about the particular, which suggests the general: about inner meaning, seen with the inner eye, always glimpsed, always vanishing, always more or less baffling, and scuffled on to the page hesitantly, furtively, transgressively, by night and with the wrong hand.

    It’s great. You should read it.


    Weeknote, Sunday 24th September 2023

    I spent Monday and Tuesday working on a short story submission. The workshop on horror that I went to last week at the Barbican was the last in a series run by Good Bad Books over the summer -- I hadn't known about them until the last one, otherwise I would have gone to all of them -- and they were taking submissions from attendees for a chapbook of work.

    However, the submission date was Wednesday, which basically gave me two days to write and edit something. I could have simply pulled an idea off the shelf, or even taken a pre-existing piece. But I wanted to do something based on the exercises in the workshop, so I was essentially working from a fragment which was never really intended to be a full story.

    I got it done. Submissions had to be less than 1000 words, and mine was about 750. It had a beginning, a middle and an end. And it got accepted, so I'll get a couple of copies at the event they're holding this week (tickets still available!)

    It's a horror story about a man on a train, a small child, and some plastic dinosaurs. You might enjoy it.

    Yesterday we went over to the Isle of Sheppey -- which, it turns out, was named by the Romans, who called it "Island of Sheep" -- for Flood III, a walking tour combined with a writing workshop. It was part of a series of workshops run by the Fieldnotes group across southern England aiming to explore creative practice situated in place, and there is definitely something interesting and inspiring about moving from location to location while exploring prompts for creative work.

    We ended in the best possible fashion: a cup of tea and a slice of cake at the Criterion Bluetown Heritage Centre. This is a brilliant small museum and music hall which is doing a huge amount to preserve the history of Sheppey, and of Bluetown in particular. Once a cramped working-class district created to serve the adjacent docks -- whose workers were required to live within a mile -- Bluetown housed thousands of people. Now only about 200 people live there. It's fascinating -- and outside the island (and even on it) a lot of this history is invisible.

    This week also saw the arrival of two new bits of technology. The first was a 2TB internal SSD, which I fitted into my ThinkPad X1 Carbon – which means it now has 32GB RAM and enough storage to last quite a while. It's mainly a Linux machine these days which means it is massively over-specced, but the performance is really good and I like using it. That keyboard!

    The second arrival was a Keyboard Folio for my Remarkable 2 tablet. I recently started using this again after a long hiatus (I'll write something about this on Technovia soon), but I'm really enjoying it and the Keyboard Folio means I can use it as a little distraction-free device for getting words written in draft.

    I'm considering writing a monthly old-school tech column. Not business focused (lord knows there's enough of that). But something more in line with Jerry Pournelle's Byte stuff, which was mostly just about the tech travails he had encountered that month. I've actually got enough for one this month, so might kick it off this week.

    The three things which most caught my attention

    1. Rupert Murdoch "retired" (hint: he's not retired) and Mic Wright wrote the best thing you will read about him. Includes the line "When Murdoch is finally pronounced dead — perhaps for tax reasons…"
    2. Apple publicly states it's all in favour of right to repair, while undermining it through whatever technical and legal means it has to hand. This company really does not deserve your money. It sure as heck isn't getting any more of mine.
    3. This one came via Cory too, and it's a beaut: the B612 font, which is used in Airbus cockpits and designed for legibility, is actually open source and free to use. Mmmmm, fonts. You can download it. It's nice.

    Things I have been writing

    After finishing off the story for the workshop I did some more work fleshing out the world of the wolves that I mentioned last week. I think there is something in this.

    Things I have been reading

    My pile of books grows ever larger. Arriving this week were new novels from Gary Gibson (Europa Deep) and Stephen Baxter (Creation Node), and I haven't even finished Neal Asher's War Bodies, which is working really hard not to keep me reading.

    All that's on top of a bunch of non-fiction: Danny Cipriani's autobiography and Tiago Forte's The PARA Method. I have much reading to do.

    Weeknote, 2nd April 2023

    I bought a new Mac. There was a bonus from work and it was exactly the amount that a new M2 MacBook Air cost, which I took as a sign from the fates that it was time to replace my 16in Intel-based MacBook Pro. I can also sell some machines which I have lying around and no longer really use to effectively cover the entire cost.

    Of course I still have (and use) the ThinkPad running Linux but I can’t get out of the old technology journalist habit of having one machine for each of the main operating systems, because you never know when someone might commission me to write something. No one actually commissions me to write anything these days, mainly because I don’t actually have the time to write other than stuff for work, the odd blog post, and my creative writing. One day I will get everything down to one computer, but that day isn’t today.

    Early impressions though are positive and it makes me realise quite how compromised the machines which Apple made during the 2010s were. Prior to 2015, I used a MacBook Air – first an 11in version, then a 13in one – and then in 2015 got the new 12in ultra-thin and light MacBook. That computer was Apple’s first to use the much-loathed butterfly keyboard, which was forgivable on a laptop which was designed to be incredibly thin. But using it on the rest of the range was one of Apple’s worst mistakes in its history, because it made their best-selling computers horrible to type on.

    The 12in MacBook got replaced by a 13in i5-based MacBook Air (horrible keyboard, underpowered) and then a 16in Intel MacBook Pro (expensive, improved but still crap keyboard, underpowered because Intel was at a low point).

    The M2 Air replaces that MacBook Pro and it’s like night and day. Literally, because I bought the “Midnight” version, which is a delicious shade of almost-black blue. It’s also far snappier than the MacBook Pro, completely silent and – IMPORTANT – has a keyboard which you can type on. I can’t overstate how much of an improvement this keyboard is, and when you spend much of your life typing that really does matter a lot.

    This MacBook Air is the first design that I’ve loved since the mid-00s MacBooks, with their chunky polycarbonate (who didn’t love the black MacBook?). It’s almost as if Apple has remembered to make the Mac a combination of loveable and functional, after a fallow decade when it really lost its way.

    Meanwhile, we went to a local village fete yesterday, where Kim was the judge for the cake category:

    The fruit cakes were, apparently, all of high standard. While Kim was judging I spent some time working on some fiction I’ve been playing with for a while, re-plotting and outlining a story which hasn’t been quite hanging together. I’ve been using Aeon Timeline to do the outline, because it features some nice capabilities around navigating the complexities of multiple timelines while integrating with Scrivener. It’s a complex piece of software and I feel like I’m only just getting my head around it, despite using it for over a year.

    This week I have been writing…

    I wrote a short piece about the limitations of current AI, which takes me back about 25 years. Before I joined MacUser and became a journalist, I did a PhD in philosophy, looking at the implications of Kant’s philosophy of mind for artificial intelligence. At that time, cognitive science – a blend of computer science, philosophy and psychology – was the hot thing in AI, but in the past 10 years or so there seems to have been a return to the AI of the earlier years, which attempts to subdivide “intelligence” into a set of discrete functions capable of being developed in parallel.

    My old thesis basically said the opposite: consciousness is a necessary part of what we mean when we talk about intelligence as it’s instantiated in humans, and consciousness is unitary (there are many functions in the brain, but only one “I”, no matter if you’re a human, a monkey, or a lizard). No amount of bolting a vision system on to an abstract reasoning processor on to a large language model will get you to unitary consciousness.

    Was I right? I think the fact that despite the best efforts of very smart people, we are no closer to creating animal-like consciousness probably means I was. Large language models are impressive, but they “know” nothing – saying they do is a category mistake, as Gilbert Ryle would have put it.

    This week I have been watching and reading…

    We binge-watched What we do in the shadows this week and I haven’t watched anything which made me laugh out loud so much for a while. We’ve been wandering round the house randomly shouting “BAT!” in a Matt Berry voice, then collapsing into laughter. With Ted Lasso and The Mandalorian on at the same time, and The Power also now out, there’s plenty to watch.

    Two new books on the virtual book pile: Ken MacLeod’s Beyond the reach of Earth and Katherine May’s Enchantment. MacLeod’s book is the second in a series, the first of which I enjoyed quite a bit, and I greatly enjoyed May’s Wintering too (pretty much a lockdown book), so I’m looking forward to both. But first I need to finish Becky Chambers’ Record of a spaceborn few, which I have been dithering over for a while.

    AI, like Jon Snow, knows nothing

    This is a great illustration of how AIs don’t “know” anything – they generate an answer one word at a time, predicting the most likely next word based on a huge corpus of text and on what has come before in the answer.
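
    If you want to see the mechanism stripped to its bones, here is a toy next-word predictor in a few lines of Python. A real LLM replaces the bigram counts with a neural network over billions of parameters, but the generation loop is the same shape – and nothing in it “knows” anything:

        import random
        from collections import Counter, defaultdict

        corpus = ("boris johnson went to eton . boris johnson was prime minister . "
                  "the prime minister went to eton .").split()

        # The model's entire "knowledge": counts of which word follows which.
        follows = defaultdict(Counter)
        for current_word, next_word in zip(corpus, corpus[1:]):
            follows[current_word][next_word] += 1

        def generate(start: str, length: int = 8) -> str:
            words = [start]
            for _ in range(length):
                candidates = follows[words[-1]]
                if not candidates:
                    break
                # Pick the next word in proportion to how often it appeared.
                # No facts, no world model - just frequencies in the corpus.
                choices, weights = zip(*candidates.items())
                words.append(random.choices(choices, weights=weights)[0])
            return " ".join(words)

        print(generate("boris"))  # e.g. "boris johnson went to eton . the prime"

    Scale that loop up to billions of parameters and a corpus the size of the web and you get fluent prose – but its “facts” are still just a reflection of what was most often written.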

    Even though Bing “knows” that Sunak is PM, as you can see from the second question, it can’t use that in an answer about public school members of the cabinet because the corpus of training data trends towards talking about Johnson’s cabinet (for a good reason – his percentage of public schoolers was much higher than that of Truss, so many people wrote about it).

    Google’s Bard has even less accuracy:

    Almost every fact in this response is wrong. Johnson went to Eton, but is no longer PM; Sunak is no longer chancellor and went to Winchester, not Eton; and Truss is no longer in the cabinet and went to a state school.

    The counterpoint to this is the idea that AI is only at the start of its journey, and all this will be ironed out “eventually”. My view is the opposite: I don’t think that, as currently constituted, large language model-based AI is capable of much improvement. Like almost every kind of AI research in the last 30 years, it’s a one-trick pony rather than a generalised system. And the story of AI research since its foundation is littered with one-trick ponies which can’t be grafted onto a more generalised intelligence.

    Animal-style intelligence is a set of emergent properties that evolved in parallel, not separately. Our capacities for vision and the other senses, abstract reasoning, and communication – which cover most of what we think of as intelligence – continually interacted with and reinforced each other over millions of years. We didn’t evolve any of those capabilities in isolation.

    And that’s why all machine learning efforts that solve one thing at a time will fail to produce truly intelligent systems. You can’t just “solve the vision problem” then graft on a large language model, then crowbar in an abstract game-playing system and have something intelligent. It’s like putting together a jigsaw by ignoring the shapes and just cutting off bits of the pieces till they “fit” – you lose the complete picture.

    Weeknote, Sunday 19th March 2023

    Where exactly is the year going? This is week twelve, which means we are almost a quarter of the way through 2023. I've been talking about 2024 as if it's laughably far away, but it's right around the corner.

    This, of course, is part of what it means to get old. Our perception of time is inherently linked to how much time has passed for us, which means this feeling of the years rushing by will only worsen. I sit here with perhaps a third of my life left, assuming the Tories don't manage to dump the entire country into poverty and destroy the NHS and welfare system.

    I'm writing this sitting in my sister's house, on a visit to their home in Suffolk. Both my siblings are older than me -- my sister is just inching up to 70, while my brother is a couple of years younger. One of the joys of that shrinking of the perception of years has been that the mental distance that an 11-13 year age gap created has vanished. I am still very much the little brother: but now, our concerns, interests and thoughts are those of people of almost the same generation rather than entirely different ones.

    The more negative part of this temporal senescence is that putting anything off becomes much more deadly to the prospect of doing something. You decide to delay getting something done to your house, and the next time you think about it, a year has passed, and nothing has happened. You think you need to do some preventative maintenance on your roof, and then the next time you consider it, your joists are failing. This is why old people's lives slide into ruin: the "someday" that you say you will get around to doing something passes in the blink of an eye.

    There are a lot of "somedays" surrounding our house at the moment. I'll get on to that one of these days.

    At the opposite end of the age spectrum, young people are being sold the lie that life is short, and if you're not a "success" by age 25, you might as well give up on life. It's one thing being taught some tips about making an effective to-do list and thinking about your priorities. It's another thing being bombarded with toxic masculinity, which defines you as a failure unless you have a lambo.

    The right talks about "groomers", but if preying upon the anxieties of young people to indoctrinate them into a system where they can only fail isn't a form of grooming, then I don't know what is.

    Of course, all this is an attempt to tap into the alienation that capitalism causes, persuading its victims that it's all their fault and that if they just did the right things, they too could be rich, successful and forever young. There's a certain element of gamer culture about this: if you hone your skills or know the right cheat codes, you can win the game. The problem is that this isn't a game created for our amusement. It's a game where the designers will never let you win and will change the rules if you start to do too well or if there's a chance they won't win.

    I would much rather be old right now than young.

    Weeknote, Sunday 12th February

    Quiet week.

    I went along to give blood on Wednesday, only to be turned down because my iron levels were too low. Nothing, apparently, to worry about -- they were 128g/L against a minimum of 135g/L -- but something I'm going to keep an eye on anyway.

    Today we headed over to Folkestone with our friend Edward. While Kim and Edward met up with Judith, their old drawing teacher, for a bit of cake and a chat, I went into the newly opened Fond for a cup of coffee and a good read.

    Reading and watching

    I've been rereading John Sculley's book Odyssey: From Pepsi to Apple. I got a copy given to me when I started at Apple in 1989, and I remember reading it and learning a lot. It's a blend of Sculley's story, including the period when he booted Steve Jobs out of the company, and business advice, which holds up well.

    It's often forgotten that Sculley took Apple from a $1bn company to a $10bn one, a major feat of growth. He also made a mistake at the start of the Mac's history, pumping up its price by $500 per machine to pay for a massive advertising campaign, and he kept Apple's margins high.

    In some ways, Apple is still the company Sculley built: high margins, well-designed products, and proprietary technology. After Sculley left, Mike Spindler and then Gil Amelio attempted to take the company in different directions, towards more generic hardware and licensing MacOS. When he came back, Steve Jobs returned Apple to the model which Sculley had set -- which was ironic given he had been "exited" by Sculley.
