Ten blue links, Folsom Prison Blues Edition
1. Oh, WordPress!
Not content with its CEO getting into a stupid public spat with a user and apparently revealing information about them which should have remained private, WordPress announced it was doing deals to give "selected AI partners" access to customers' posts and content for training (although not, at all, content from people hosting their own version of WordPress, so please shut that rumour down). The most charitable interpretation of this is that the company messed up its comms. The least charitable is that it has started down an enshittification spiral, which will ultimately lead to it becoming the same kind of terrible service as everyone else's. Related: I'm pondering whether I should start self-hosting again.
2. Good bus services or a tunnel that sets your skin on fire? Who knows which one is best for America
The Boring Company was always a joke, producing precisely one usable tunnel with money that should have been spent improving public transport infrastructure. Now its one tunnel is causing maintenance workers to get chemical burns from toxic waste. Go Elon. Where's my pitchfork?
3. Apple gets stuck in traffic
Apple's car project was a legacy of the era when Jony Ive ruled the roost and had decided he could design better products than anyone else in the world. I'm actually quite surprised that it lasted as long as it did after he left the company. And now, apparently, it's finally dead. From a business perspective, it never made any sense: historically, car manufacturer margins have been far smaller than the 30-40% that Apple wants. Tesla had higher margins than everyone else mainly because it bilked the US government out of massive subsidies, cut corners in its manufacturing, and did everything possible to avoid providing any kind of proper customer service. While I'm sure Apple would have loved some of those sweet, sweet corporate welfare cheques, the rest of the Tesla Method of Business™ is probably not where it wants to be.
4. How publishing is losing its soul
There have always been publishers whose relationship with advertisers was a little too cosy. Even back in the days when selling ads was like shooting fish in a barrel with a bazooka, ad salespeople would try a little “friendly chat” with a journalist to "check in and see how product X is doing". Most journalists would tell them, in a friendly way, where to get off. But as times get tougher and things get more desperate, it's natural that executives are going to lean on journalists to "do the right thing for the company" rather than for their audience. This piece, from a year ago, is about CNET, but I guarantee it is not the only publisher doing the same. Private equity companies only care about getting a return on their investment as soon as possible. They aren't concerned about the long-term viability of a brand — and they definitely aren't concerned about the people reading the content. Of course, once they have fired all the journalists and replaced them with “prompt engineers”, that will be problem solved, because there will be no one left to complain.
5. Prison laptops are a thing?
I didn't know they were, until I read this Twitter thread. Amazing.
6. Desperate times make desperate publishers
It's not that long since publishers were wary of getting into a legal tangle with the likes of Google because they wanted to keep the provider of most of their traffic onside, but these days things are different. I don't know whether this will be successful or not — despite my 'O' level in law, I am not a lawyer — but I'm absolutely certain that Google's 50,000lb-gorilla presence in the adtech market distorts it in a variety of ways. Let's not even start on Facebook. Yet. The one caveat to all this is that $2.3bn is about 1% of the amount the company made from advertising last year, so it's another case of what sounds like a high number to publishers actually being the cost of doing business to Google.
7. At last, the worst use of AI (special government edition)
I'm not against the use of LLMs for summarisation. In fact, it's sometimes one of the best uses for them. LLMs can be really good at picking out the salient points of an email, for example, and if you have ever worked in a corporate environment you know just how often long emails are used to bury important points. But using them to create drafts of routine responses and to summarise reports for ministers is a recipe for worse government. Why? Because good ministers get into the details of this stuff. Yes, they have many decisions to make, but not getting into the details of your brief leads to awful, hand-waving, big-picture-details-are-for-losers government. Most ministers don't know enough about the topic area when they start — this will only encourage them not to immerse themselves in it.
8. How not to do layoff communications
One of the things I studied on my leadership masters was how to manage layoffs, and later on, I saw at first hand how excellent leaders and managers do it. I've seen the seriousness and discipline it takes to do redundancies in a way which is humane, deeply considers how the communications will work, and also looks hard at the effects of redundancies on the remaining part of the business. It's always horrible, but it doesn't have to be either deliberately cruel or handled ineptly. So, I wonder, what is it about tech companies that makes them so awful at it? My gut feeling is that it's partly down to the culture of the hero founder/CEO: basically, leaders who are not prepared to listen to anyone with actual experience of doing this stuff professionally.
9. Green trade rules are "biased"
When Piyush Goyal, India's trade minister, told the FT that rules inserted into trade agreements with his country designed to reduce carbon emissions were "biased" he got a lot of stick, and a lot of it was classic “greedy Indians” racist nonsense from people who should know better. The fact is that he's mostly right: the West is expecting India (17% of the world's population, 3% of its carbon emissions) to stop lifting people out of poverty while the US (4% of the world's population, 15% of its emissions) doesn't reduce its emissions anywhere near fast enough. As Goyal put it, "all the environmental damage that has been done in the past has still not been made up for. What about that? Before we add new environmental issues, let’s first sort out who is responsible for the environmental degradation. Certain promises were made in Paris. They have to be delivered upon.” Just as it would like everyone to forget quite how much of its wealth came from colonialism, the West would love people to ignore how much it has benefited from pumping vast amounts of carbon into the atmosphere we all share. Climate colonialism is alive and well and living… well, here.
10. How the Tories radicalised me
I often note how the government's Prevent programme, designed to stop radicalisation, ought to look at the role of the Tory party. Recently, Tory party membership has been a bigger marker of someone being against traditional British values like free speech and the right to protest than anything else. Lewis Goodall wrote a very good piece about the radicalisation of the Tory party, and how it's now more or less in thrall to conspiracy theories. Certainly, 14 years of Tories has radicalised me: I've gone from soft left to full-on "end capitalism now", which is an unexpected return to my politics of 40 years ago. I should, at least, thank the Tories for opening my eyes again.
The end of the line for Google
“Personally, I don’t want the perception in a few years to be, ‘Those old school web ranking types just got steamrolled and somehow never saw it comin’…’”
Google engineer Eric Lehman, from an internal email in 2018, titled “AI is a serious risk to our business”
I should, of course, have put a question mark at the end of the title of this, but I very much do not want to fall foul of my own law. And, of course, talking about the end of the line for Google as a company is like talking about “the end of the line for IBM” in 2000, or “the end of the line for Microsoft” in 2008. Whatever happens, Google has so much impetus behind it, so much revenue, that a quick collapse is about as likely as my beloved Derby County winning League One, Championship and Premier League in three consecutive years. It isn’t happening, much as I might dream.
This is one of the reasons I quipped that Google could see the $2.3 billion that Axel Springer and other European media groups want from it over its alleged monopolisation of digital advertising as “just the cost of doing business.” It’s the equivalent of someone having to pay a £250 fine for speeding: annoying, but not the end of the world, and not actually that likely to keep you under 70mph in the future.
Google’s problems, though, do run deep. Other than, as my friend Cory Doctorow has noted, the 1.5 good products it invented itself (“a search engine and a Hotmail clone”), the most successful Google products are acquisitions. Android? Acquired. YouTube? Acquired. Adtech? Acquired. Even Chrome, which dominates web browsing in a way which many people (including me) find more than a little scary, was based on Apple’s WebKit rendering engine – which was, in turn, based on the open source KHTML.
The fact is, Google is incredibly bad at successfully bringing products to market, to such a degree that no one trusts them to do it and stick with it for long. It continually enters markets with fanfare, only to exit not long after.
Take social networking. You probably remember Google+ (2011–2019). You may even remember Orkut (2004–2014). Perhaps you know about Google Buzz (2010–2011). But do you remember Jaiku, an early Twitter competitor which Google bought – and buried? The resources of Google could have been used to accelerate Jaiku’s development and – perhaps – win the battle against Twitter and the nascent Facebook. Instead, the company took two years rebuilding Jaiku on top of Google’s App Engine, with no new features or marketing spend to support the product. Two years later, they killed it.
What Google is pretty good at is producing research. Its 2017 paper on transformers directly led to many of the large language model breakthroughs which OpenAI used to create ChatGPT. Failing to spot the potential for its research isn’t unknown in technology history, but really great companies don’t allow others to turn themselves into competitors worth $80 billion on the back of it.
And particularly not when those other companies create technology which directly threatens core businesses, in this case, Google’s “one good product” – search. The bad news for Google is that even in the middle of last year, research showed that people using ChatGPT for search tasks performed just as well as those using a traditional search engine, with one exception — fact-checking tasks. That, of course, is a big exception, but ordinary people use search engines for a lot more than just checking facts.
What’s also notable about the same research is that ChatGPT levelled the playing field between different educational levels, giving better access to information to those with lower educational achievement. That strikes at the heart of Google’s mission statement, which states its goal of “organis[ing] the world’s information and making it universally accessible and useful” (my italics). Search, as good as it is, has always required the user to adapt to it. Conversational interaction models, which ChatGPT follows (the clue is in the name), change that profoundly.
In The Innovator’s Dilemma, Clayton Christensen talks about the difficulties that successful companies have in responding to disruption. Established businesses, he notes, are excellent at optimising their existing products and processes to serve current customers (this is called “sustaining innovation”). However, they often struggle when faced with a “disruptive innovation” – a new technology or business model that creates a whole new market and customer segment.
One of the potential solutions to this which Christensen looks at is structural: creating smaller, independent units or spin-offs tasked with exploring the disruptive technology can allow them to operate outside the constraints of the main company. This, of course, is probably what Google intended to do when it changed its structure to create Alphabet, a group of companies of which Google itself is just one part.
The biggest problem with this putative solution is that even if you do it well, innovation doesn’t necessarily flow back to where it is most needed. Google’s search product needed to seize on the research published in 2017 and integrate it. It didn’t, and – worse still – no one saw this as a potential disruption of the core business. The blinkers were too firmly on.
Perhaps that’s changing. Notably, last year Google moved all its AI efforts into a single group, Google DeepMind. The “Google” in its name is significant: previously DeepMind was a separate business within Alphabet (and, in true Google style, it was acquired rather than built in-house). Now, on the surface, it looks likely to focus more on continuing Google’s mission, which means disrupting the traditional ten blue links.
Can it succeed? I’m not optimistic (publishers, take note). What we have here is a company which is good at research, but not at exploiting it; whose history is of one good product and a good Hotmail clone; that has a terrible record of announcing, releasing, and killing products, often multiple efforts in different categories, all of which fail; and which has failed to keep its core product – search – up to date.
Perhaps the real question isn’t whether Google has reached the end of the line, but how exactly it made it this far?
Ten Blue Links, "Gloom and doom" edition
It's been a gloomy week. Sorry.
1. Surprise! Apple’s sync stuff is entirely cryptic
The magnificent Howard Oakley, who knows more about the technology in the Mac than any man has a right to know, has been digging into the way iCloud sync works, and found it imposes some completely invisible quotas. This is the flip side of Apple’s “it just works” philosophy – it works, but Apple is not going to make it easy for you to troubleshoot if it ever doesn’t.
2. Google, the most disappointing monopoly
Google, on the other hand, loves being open. It publishes papers about AI, takes part in academic malarkey, and generally is open and lovely and cuddly. Except for one area: search, where its openness definitely has some limits. Cory Doctorow, as always, is on the money with his criticism of how big tech companies are enshittifying their products. Search is just the latest, and it won’t be the last.
3. Free money rots your morals, say people who have rotten morals
Some weirdo Republicans in the US are trying to proactively prevent anyone from implementing universal basic income (UBI), because they want wage slaves to stay in their place or something. Like the four day week, UBI is one of those things where no amount of actual tests and data will convince right wingers that it’s a good thing.
4. It shouldn’t need saying, but it does need saying
Corporations are not to be loved. Even the good ones.
5. Measles infected kids can skip quarantine in Florida
I could make this post into ten things which demonstrate that Republicans are stupid, ignorant, and liable to get people killed. They really are the worst.
6. And speaking of not loving corporations…
One of the things that amateur commentators about the EU DMA Apple shenanigans don’t appear to have understood is that the EU aren’t going to start investigating whether Apple is complying until after the deadline on 7th March. The pundits currently doing victory dances about how the EU can’t write clear laws or about how Apple has done an end run around them are going to end up amending a few blog posts.
7. Leadership is in the details
On a very different topic, this article from Gaël Clichy on Pep Guardiola’s leadership style is well worth a read. What often gets lost in leadership theory is the role that attention to detail plays. Pep gets it.
8. Federation, uh huh. Federation
Bluesky is finally federatable. This is a big deal: federated services are the future, and I always had some doubts over whether Bluesky would, in fact, ever release it. I take my hat off to them, and possibly eat it too.
9. Google to Apple: "hold my beer"
Google, which generates 30 percent of its sales from Europe, the Middle East, and Africa, views the DMA as disrespecting its expertise in what users want.
I am so looking forward to the flurry of investigations which start after the 7th March deadline for DMA compliance. I wonder if the big tech companies are just thinking that if they're all shady about the way they comply, the EU just won't have enough people to investigate them all?
10. VICE pivots to… not posting
Remember VICE? The company that pivoted to a strategy of giving its senior executives big bonuses days before it entered Chapter 11 bankruptcy? Well, its latest pivot is to not posting anything on its website, and becoming a "content studio" (whatever that is) which licenses its content to other publishers. No, I have no idea what that means either, other than it undoubtedly means more layoffs in an industry that has already seen quite a lot this year. And it's only February.
HouseFresh and the challenges of affiliate content
You might have noticed a post from HouseFresh doing the rounds, especially if you have anything to do with creating content intended to generate affiliate revenue. It’s caused quite a stir, particularly among publishers.
My background is in product testing. My first job in publishing was in MacUser's testing labs, where we would regularly have 10–20 products in and – literally in some cases – take them apart to decide which one was best. Next door was the PC Pro labs, which did the same thing, on an even bigger scale. On a visit to New York a few years later, I went to the testing labs of a US publisher: even bigger, with people who looked like they should be wearing lab coats picking over the bones of machines. The product testers were real experts, often devising unique tests designed to stretch the products in ways which matched the real-world pounding they would take.
But those were proper group tests. What HouseFresh is writing about is not those. Their focus is the “best” article, written specifically to deliver affiliate clicks and sales, and designed to hit a specific keyword.
The HouseFresh article rips the lid off some of the worst aspects of content written to deliver sales through affiliate links (I refuse point-blank to call it “comtent”, which has to be one of the worst words ever invented). Their biggest complaint is that a lot of the pages you will see which rank highly on Google for affiliate-led keywords are written by people who have never had the products in their hands, let alone tested them. They may have done desk research, which involves, at best, scouring spec sheets for hidden details and, at worst, just scouring the user reviews on Amazon. But that doesn't tell you all that much about a product and whether it's any good or not.
Of course, this is really Google's fault because it is rewarding low-quality content by ranking it highly. This content, which is far cheaper to produce than a real group test, can be churned out quickly. A quick writer can do one or two a day, while a group test might take two weeks to organise, test and write. Use an LLM and you can probably make that process even faster. Just make sure to write your prompt so that it includes phrases like “our lab tests” and “our experts said”, to satisfy Google's pretty surface-deep view of how content based on real-world experience works.
HouseFresh’s hope is that Google will improve its algorithms and start rewarding content which is of higher quality, but I have my doubts. I suspect that the company’s focus is on creating “answers engines” like Gemini, rather than the traditional ten blue links. And even if it can improve its algorithms to prioritise in-depth reviews, gaming the SEO system will often look like a better option to many of the kinds of publishers HouseFresh is attacking: the ones who have bought well-known brands but now use them to churn out lower quality content.
There are, and will be, exceptions, mostly from publishers who have a heritage in creating brands, rather than the ones that buy brands just for their heritage. But the sheer volume of content created by others could drown them out—especially as LLMs make it easier to generate entire sites within days.
As I have pointed out, I believe businesses based on this kind of affiliate-led content will also be disrupted over the coming few years by conversational AI. Once people have the option of having a conversation with a smart recommendation engine to tailor buying advice to exactly their needs, “best XXX” articles based on desk research or mining Amazon reviews just won’t be good enough.
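To make that concrete, here’s a minimal sketch of what that kind of conversational buying adviser could look like. It uses the OpenAI Python client purely as an example backend; the system prompt, the model name and the whole flow are illustrative assumptions of mine, not a description of any real product:

```python
from openai import OpenAI  # assumes the openai package and an OPENAI_API_KEY in the environment

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a buying adviser. Ask short follow-up questions about budget, "
    "use case and constraints, then recommend two or three specific products "
    "and explain the trade-offs."
)

def advise() -> None:
    # Keep the whole conversation so each answer is tailored to what the
    # shopper has already said, rather than a one-size-fits-all "best X" list.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    print("What are you shopping for? (blank line to quit)")
    while True:
        user_input = input("> ").strip()
        if not user_input:
            break
        messages.append({"role": "user", "content": user_input})
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder: any chat-capable model would do
            messages=messages,
        )
        answer = response.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(answer)

if __name__ == "__main__":
    advise()
```

The point isn’t these few lines of code: it’s that the whole interaction happens without a “best air purifier” page ever being loaded.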
Google is a choke point for the affiliate content business, but it’s not the only one. The second is Amazon, from which most publishers derive a large chunk of their affiliate revenue. Although reliable numbers are difficult to find, Datanyze estimates Amazon has around 48% of the market share in affiliate networks, and anecdotally, I suspect the share of revenue it brings in for publishers is higher still. Every publisher I know has sought to reduce their exposure to Amazon, especially after the effective demise of its Onsite Associates programme (known internally as OSP). Changes in policy from Amazon would have a massive effect on publishers. The reality is that if Amazon turned off the taps, or even reduced the flow, publishers with big investments in affiliate content production would be in trouble.
Would Amazon do this? It depends if you believe that Cory Doctorow’s enshittification cycle applies to it:
First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves.
Are we at the point where Amazon starts to claw back the revenue it shares with its “business customers” – affiliate partners? Currently, probably not. But it’s worth thinking about the longer term, too. Already, 61% of US shoppers begin their buying journey on Amazon regularly. That’s traffic which Amazon has to pay no extra commission on, and so it’s something that it would love to do more of.
Plus, of course, Amazon has hundreds of millions of reviews of its own that it could tap to automatically create recommendations for users, including by layering conversational AI on top of them to allow users to get “intelligent” recommendations. The potential is there for Amazon to be a trusted source of reviews, as well as the online retailer of choice.
The “good” news is that, presently, Amazon is struggling with its own grey goo of content in the form of fake reviews generated by AI. It’s responded with more AI to try to weed them out. But the key question is really what happens at the point it decides that the money it spends on delivering affiliate revenue would be better spent on ads, or on-site AI, or whatever else.
Ironically, the kind of content mills which HouseFresh is railing against would be less bothered if Amazon ever does scale back its focus on affiliates: they are, most likely, pretty aware that the brands they are using are near the end of their life, and if the affiliate cash cow moves on, the private equity companies will have long since made a sizeable return.
The relationship between Amazon and publishers, like that of Google and publishers, is some kind of symbiosis. Amazon gains revenue from the clicks that publishers drive their way. Publishers get a slice of that money, enough for them to survive and grow. But the key question is whether that symbiosis is obligate – where each depends on the other for survival – or facultative, where each benefits but could survive alone. If it’s the former, affiliate content has a long and profitable future. If it’s the latter, then eventually, publishers who go all-in on it may have a problem.
Weeknote, Sunday 19th February 2024
The good thing about not writing a weeknote for a week is you have plenty of things to write about. The bad thing is that you have plenty of things to write about.
We’ve managed to fit in two movies in the past fortnight: All of Us Strangers and The Zone of Interest. What a pair of absolute crackers. Go and see them in the cinema, don’t wait till you can stream them or whatever. But then I would say that, because I love the cinema, something I have only recently rediscovered.
On a trip to that there London, we managed to squeeze in both exhibitions on at the Courtauld – Cute, and Frank Auerbach – as well as a wander around the newly renovated National Portrait Gallery. Cute was a little disappointing: lots of great objects, but the curation didn’t really tell a story that had any narrative to it. It was more “here’s a thing, here’s a thing, oh and another set of things”. Auerbach is a brilliant artist, but not totally my cup of tea – but he is Kim’s, so that’s fine.
The NPG was a place that I was very familiar with. When I worked at Redwood, we were just across the road in a building which is now a hotel, so I often dropped into the NPG at lunch time for a sit and think. The renovation is a huge improvement, not simply for the fabric of the building but also for the way it’s curated. The Victorians, which used to be a gallery of Dead White Men(TM) now actually tells a story of colonialism and empire through almost exactly the same pictures. Also: whoever decided to put Radclyffe Hall in between Churchill and George VI is a genius.
We also headed over to Oxford for an overnight trip, seeing our lovely friends and their lovely children and also William Kentridge doing the fifth of the Slade Lectures Hilary 2024. I wish we had been able to go to the whole series – Kentridge is a brilliant lecturer as well as an artist I greatly admire. Seeing things like that makes me wish I lived in an academic city, instead of in a city which just happens to have two universities bolted on to it. There is a profound difference, and it’s one of the things that I most dislike about Canterbury.
At the Ashmolean, we saw Colour Revolution, which will have closed when you read this. I liked it: in particular I liked the bust of Maharajah Duleep Singh, heir to the Punjab who was forced into exile in England when we stole his land. The bust on display has his actual skin tone. Queen Victoria insisted on a classic, plain white version for herself. If that isn’t a nod to how Indians – even noble ones – were seen by the Victorians, I don’t know what is.
While there, we also caught Monica Sjöö’s The Great Cosmic Mother at Modern Art Oxford. I was not impressed. There is something about the retreat into mysticism that the radical politics of the 60s and 70s succumbed to which irritates the heck out of me. It’s particularly true of second-wave feminism: as Michael Moorcock said in The Retreat from Liberty, “being Mother of the Universe cannot offer much consolation while Father is always in evidence somewhere, even if he spends most evenings at the pub.”
Coincidentally, Moorcock was also critical of the Greenham protests, which he saw as faux radical, with zero chance of actually changing anything and little or no consequences if you got arrested. It’s not popular to say so now, but he was right – Greenham changed nothing, and the energy which went into it would have been far better spent campaigning, say, for the police to take rape seriously (which they very much didn’t at the time).
This week I have been reading…
Having finished the 500+ page Babel, I dived into the 500+ page The Whispering Swarm by Michael Moorcock, and finished it. It's the closest thing Moorcock is likely to write to an autobiography, but of course it includes huge strands of fiction. Weirdly, he includes many real names of people, but disguises others -- perhaps to make it clear that this is a fictional "real" Moorcock too (it's not down to actually needing to disguise people for legal or other reasons -- changing Ballard to Allard isn't going to fool anyone, and with JGB dead there's no issue of libel anyway).
I also finished Zoe Schiffer’s Extremely Hardcore, which is the story of Elon Musk’s takeover of Twitter. If you haven't been obsessively following the saga, this book is an excellent romp through all that's happened. But if you have, there's probably not a lot in here that will surprise you.
Musk is, of course, the main character. But until the day when he tells his own story, he's a main character who is almost entirely absent. That allows the reader to paint in their own feelings about him, but it doesn't really answer the question of why he is like this, why he takes these dreadful decisions. Nor does it really tell us much about how he manages to get away with it, although having more money than is right for any human being is probably part of the answer.
The Emperor’s New Clothes definitely applies to those around him, and one of the more interesting parts is the accounts of those who attempted to play along with the Musk regime at Twitter, mollifying him and trying to find ways to do what he wanted without destroying their own values in the process. There will no doubt be a few more of those stories coming out over the next few years, and I hope there is ultimately a revised version which tells them too.
This week I have been writing…
Last week’s Ten Blue Links was really a collection of bad things that are happening in tech at the moment. It really is quite grim: between Apple deciding it’s more likely to achieve growth through rentier capitalism than by making high-quality products that ordinary people can afford, VCs turning out to be utter morons, and Sam Altman being, well, what we all know he is (but aren’t really saying), I don’t think there has been a more depressing landscape in the tech industry.
As I said at the end of that piece, it’s best to sup with a long spoon.
Meanwhile, I made some progress on Orford. Not as much as I would have liked, but I solved a knotty problem in the plot by moving the introduction of a character much earlier in the work.
Michael Tsai - iOS 17.4 Changes PWAs to Shortcuts in EU
Michael Tsai - Blog - iOS 17.4 Changes PWAs to Shortcuts in EU:
Apple had two years or so to prepare for the DMA, but they “had to” remove the feature entirely (and throw away user data) rather than give the third-party API parity with what Safari can do. I find the privacy argument totally unconvincing because the alternative they chose is to put all the sites in the same browser. If you’re concerned about buggy data isolation or permissions, isn’t this even worse?
Michael neatly collects together the responses to Apple’s frankly pathetic removal of proper PWA support in the EU, but I think his own quote above hits the nail on the head. The company has had years to prepare for this. If it got blindsided, that’s a management failure. If it’s being petulant, that’s a management failure. If it can’t devote the resources to make this work, that’s a management failure. And if this is an attempt to enforce using native APIs and the App Store rather than PWAs… well, that too is a management failure.
Apple’s whole response to the DMA ruling has been nothing but disastrous for its credibility amongst developers, but unfortunately the company seems to have forgotten that without developers, its platforms are nothing but pretty user interfaces for copying files around.
Daring Fireball: The European Commission Had Nothing to Do With Apple’s Reversal on Supporting RCS
Daring Fireball: The European Commission Had Nothing to Do With Apple’s Reversal on Supporting RCS:
China, unlike the EU, seemingly knows how to draft effective regulations to achieve specific goals.
China, unlike the EU, is a repressive regime with a chokehold over Apple’s business. I don’t think Apple caving in to it has much to do with the quality of how China drafts its laws.
Ten Blue Links, 12 Feb 2024: the work from home edition
I like links. You like links. Everyone likes links!
1. The dirty fight of "return to office"
I've written before that if you can't lead teams remotely, that's your problem, not your team's. Of course, getting some face-to-face time is useful and valuable, but mandating a set number of days per week isn't actually that useful. What's interesting is that the evidence suggests there's no basis to the idea of increased productivity in the office. So what is causing the race to bring people in?
2. The AI data centre boom must be stopped
The amount of energy required by AI is absolutely eye-watering, and it's fuelling (sic) what might turn into an energy crisis.
3. The story of Mitchell Cole
I hadn't heard of Mitchell Cole before, but I'm glad I read this. Cole was a 27-year-old footballer who was forced to quit the game owing to a heart condition, and died while having a kickabout with his mates.
4. Betteridge's Law strikes again
5. Apple confirms no more web apps for naughty Europeans
Last week I mentioned I wasn't going to assume malice by Apple breaking web apps on iOS in a new beta. This week, they have confirmed that's precisely what it is. When Apple says it can't "securely" provide both web app support and alternate rendering engines, it's using the meaning of "securely" which refers to securing their income streams, not actual computer security. Pathetic stuff from Cupertino.
6. The iMessage Halo Effect
I think John Siracusa is exactly right: it's the iPhone which gives iMessage its cachet, not the other way around. It's not too late for the company to develop iMessage for other platforms.
7. AI companies lose value after Microsoft and Google quarterly earnings
Losing $190bn in stock market value seems a little careless to me. But the point really is that we are right at the start of learning to use LLMs in creative ways. Replacing cheap copywriters is not where the real action is, but currently that's all everyone is fixating on.
8. Citizen Musk
We all know that Elon is an idiot, but this article shows just how much he's been drawn into a universe of misinformation (or, as we used to call them, "lies"). I'm not sure if he's stupid or venal, or both. Probably both.
9. How the government captured the BBC
Alan Rusbridger points out how the BBC editorial standards committee now has just one member who is both uninvolved in daily decision-making and has a background in news, and that's Robbie Gibb, who also happens to be Theresa May's former director of communications. The BBC isn't alone in this (Reach plc has no one on its board with any newspaper/online journalism experience), but it's remarkable how much the Tories have worked to subvert the public service bodies of this country.
10. Amazon to customers: have a worse service, and we're putting up the price
Amazon has been doing enshittification since before it was fashionable, but this is definitely their boldest move yet.
Ten Blue Links, "Tech is Bad Right Now" Edition
I remain a technology optimist, but weeks like this give even me an “are we the baddies” moment or two. On to the links.
1. Sam wants more money
Sam Altman wants $7 trillion. Not to transition the planet to a carbon-free economy, end poverty, or provide universal healthcare to every person on the planet — all of which could be done with that kind of money — but to build AI chips. When I mentioned this on Threads, some dude popped up to helpfully educate me that AI would enable us to do all those things. Mate, we don't need AI for any of that. We just need to end capitalism.
2. Apple broke web apps
I'm not going to rush in and say that breaking support for progressive web apps — one of the few ways to distribute apps on the iPhone without giving Apple its tithe — was deliberate. While I'm not inclined to assume malice about bugs in beta software, I would very definitely assume malice if this made it into the release version.
3. Remember when VCs were supposed to be smart?
The savaging that Chris Dixon's silly little book defending crypto has taken is entirely justified. Pushing said book on to the New York Times "bestseller" list by bulk ordering, when you know that bulk ordering gets publicly noted, shows either the kind of “I don't care you're not the boss of me” attitude of a 14-year-old boy, or just stupidity. Fair play to Penguin Random House, though: they must have known that whatever they paid Dixon for this laugh-a-minute publication would be easily recouped by copies bought for his worshippers at the firms his company has thrown money into.
4. Enshittification, FT-style
I was literally about to mail Cory saying “ha ha you got the attention of the FT” when I spotted he'd actually written the piece himself. As he rightly says, we are in the enshittocene.
5. My printer hates me
It's taken a while, but it looks like mainstream publications — OK, The Atlantic — are taking note that just because you buy a product doesn't mean you own it. Printers are just one example, and not even the most egregious. Every large corporate has spotted that charging rents is easier than making good products and competing in a free(ish) market. The surprising thing, to me, is how many people think this is a good idea for ordinary people.
6. App stores keep us safe, Redux
I've spent far too much time over the past couple of weeks arguing with people who believe that Apple is entirely correct to face off against the big bad tech-hating European Union about app stores. Without Daddy Apple keeping us all safe, someone might download a malicious app! The problem, of course, is that app stores don't really keep you safe, something we saw again this week. What they do is make you believe that it's someone else's responsibility to keep you safe, lulling you into a false sense of security. Oh, and of course, they keep developers paying rent to platform owners.
7. You will be wanting to buy this book
Kara Swisher has written a book. You want to buy this book because if it's anything like the extract, it's going to be a doozy. And in the spirit of this article, you should pre-order it from Bookshop.org rather than Amazon.
8. Comic Sans is a good font
Yes. It is. I will not be taking questions at this time.
9. British Universities are a mess
Gaby Hinsliff's article on the problems of UK universities is well worth a read, but I don't entirely agree with it. I have heard too many horror stories from academics who have been “encouraged” to ensure that foreign students (who pay a lot of money) pass courses. Like much of the Tory legacy that Labour will inherit, it will take decades to undo the horrific damage this government has done to higher education. The entire system of funding both institutions and students needs dismantling and rebuilding.
10. Sup with a long spoon
Reach has reached (ahem) a deal with Amazon to give away its crown jewels to make a few pecks of corn. I had thought that publishers might have learned that collaborating with big tech platforms never means they get a good deal, but here we are. Fool me once, shame on you. Fool me twice, shame on me. Fool me ten times, I'm probably a publisher.
Weeknote, Sunday 4th February 2024
Quite a busy week, all told. I finished off a feature for PC Pro magazine, which will be the first freelance bit of tech journalism I’ve done for quite some time (I think it’s a good five years since the last one). I also took a trip into London to see one of my former colleagues, and it was great to hear what they have been up to. That includes a project I had encouraged them to do involving offering more work experience placements for young people wanting to get into automotive journalism, and it sounds like it’s been a success.
I’m excellent at encouraging other people. Encouraging myself is a bit harder. But even that’s been pretty good this week. I’m still arsing around with technology too much, and thinking too much (and too hard) about platforms and systems and all that jazz. But I also feel like I’m getting somewhere – finally – with the personal projects I have wanted to work on.
One thing I have been arsing around with (for professional purposes) is AI image generation, and it’s absolutely hilarious. Can you guess what the prompt was which produced the image at the top?
Things I have been reading
I’ve been reading Babel by R F Kuang this week, and I’m entranced. There’s a lot of wonderful writing in it – I will probably have to write a post just about it when I’m finished – but there are two things which hit hard for me: the scene at the start, where Robin is leaving Canton and believes that he will never see it again, and the continual careful subtext of the seduction by empire of its best and brightest subjects. For me – a grandchild of the Empire, whose mother left Imperial India as a small child – there are so many elements where both those points hit hard. It’s a terrific book, and if you haven’t read it, you really should.
Things I have been writing
I wrote something about Apple Vision Pro, in bullet point form. The confounding thing about Apple is that I think they have, to a degree, lost their soul, and in some ways the Vision Pro is emblematic of that. Vision Pro feels like a device that’s not going to encourage people to be more creative – except in the context of creating things for others to view on Vision Pro.
I’ve been meaning to do a regular link blog post for a while, and now I have an idea for it. So every Friday, I’m going to do a ten blue links post with ten things that have found their way into my inbox. I don’t think the format of the first one is quite right because it is too long (and took too long to write), but I’ll see how it develops. I’m pondering how to make this an email too.
Speaking of newsletters, this week’s was about adapting to the new reality of search. Quality content is going to win, ultimately. Includes a passing reference to Roland Barthes. What do you expect from a philosophy graduate?
I’ve also been working on Orford, the piece of fiction which may well be a novel. I have the beginning. Not only that, but I have the end. How I get from one to the other is the tricky bit. I think I need to redo the outline because that bit just isn’t hanging together well. And then I need to write about another 50,000 words. Go me.
Some thoughts on Apple Vision Pro (and VR/AR in general)
- As many people have noted, the ultimate platform for augmented reality is something that is both portable (can be worn all the time) and invisible (not a huge set of goggles which get in the way of your interactions with the world). We are so far away from this in terms of technology that I would be surprised if we even have it in my lifetime (see also: fully autonomous vehicles that you can ride in).
- The price of Apple Vision is not unreasonable given the technology in it. They are not selling this at a loss, and I would expect the margins on it are similar to other Apple products, but Apple Vision is not something that can currently be made at under $1000, which is probably the sweet spot for this kind of tech.
- As with the Apple Watch, the company has a set of use cases in mind. As with the Apple Watch, these will almost certainly not be the uses that customers actually find most compelling. Expect the marketing to shift in response to what actually resonates with people.
- This represents a minor potential issue for Apple. Apple Watch was priced low enough to have quite a wide spread of customers, especially once the cheaper hardware options appeared after a year or two. Apple Vision is priced too high to get a wide range of customer types. The danger is that it will skew too heavily towards highly affluent customers, and the kinds of uses they make of devices, for Apple to get much insight into what the real uses of Apple Vision are. Apple doesn’t do much testing with real users (even under NDA) before products are released. That means real-world feedback is vital.
- The criticisms that people have made about the battery life are really not that relevant. No one is going to use this wandering around. You’re going to mostly have your behind in a chair. I’ve done a lot of VR demos while moving, and nothing breaks the “reality” of the app you’re using more than trying to do much in the physical world. Yes, the passthrough video means you can do this. But trust me, you won’t.
- It’s a shame that you can’t have multiple Mac “monitors” open at the same time. But you can have multiple apps, so I would guess quite a few of the things you want to keep open on multiple monitors will devolve to native apps.
- It’s a bigger shame Apple has chosen to only have an App Store model for software. The lack of hackability of the platform won’t matter to most people, but it does matter to me. This isn’t a market of customers who need the same level of “protection” as on a smartphone, so the justification that all apps need to be checked for malware doesn’t exist on this platform. This was a chance for Apple to break with the past. It’s chosen not to do so.
- I wonder if, strategically, Apple has ended up “skating to where the puck was” rather than where it’s going to be. It’s taken so long to get Apple Vision out – by some reports, perhaps ten years – that the interest in and relevance of VR and AR has died down. VR’s use cases have mostly boiled down to games. AR is still not really a possibility, at least not in its ultimate form.
Ten blue links for 3rd Feb, 2024
I mentioned a while ago on Mastodon that I had such a backlog of stuff I had saved to read and could potentially write about that I was going to have to steal Cory's approach and do a weekly linkblog post. That idea got put on the back burner for a couple of weeks as I had both a feature (forthcoming for PC Pro magazine) and a short training session (end of February, details to follow once it's advertised).
And of course, I needed a concept -- what we call in journalism a franchise. Putting in the work of creating something weekly is a lot easier if you can force it into some kind of theme. But I have been scratching my head trying to think of something.
Of course, as soon as you think of something it's obvious: hence Ten Blue Links. Like the old-school Google we knew and loved (and was useful) I'm going to create a page every Saturday which just lists ten things which have amused/entertained/informed me, and that I think are worth your time reading. There's no topic theme -- I read a lot, so that wouldn't make sense -- although every now and then if something big has happened I might make one.
Some words about my tools and process
I'm far too online, and I hop about between tools far too often. But there are two online services which have stuck with me for quite a while now: Raindrop, and Readwise Reader.
Raindrop is a bookmarking service, like Pinboard but a lot better. I use it to dump in links which I know will be useful to me in the future, but which aren't in-depth reading. How-to's, tips, that kind of thing, all of which I categorise and tag so they form even more useful collections.
Readwise Reader, on the other hand, is a read-it-later service like Pocket -- but it's the Olympic gold medal-winning version, the Pelé and Maradona and Messi of reading things combined into one. It's perfectly happy ingesting feeds, or emails, or PDFs, as well as simply saved articles, and it integrates with Readwise (of course), which I use to funnel all kinds of stuff into my Obsidian notes. It costs money, but it's a service that is well worth it. I would imagine that most of what I write about in Ten Blue Links is going to come from Readwise.
This one has ended up long, but I promise I'll make it shorter next time...
The ten blue links for this week
1. Apple's culture shaped its DMA response
I wrote at length about my feelings over Apple's response to the EU DMA -- childish is the kindest way of putting it -- but I really enjoyed Manton Reece's short post about it. Manton's focus is Apple's culture, how that has been shaped, and how that has really influenced their response:
Because of their decades of truly great products, Apple thinks they are more clever than anyone else. Because of their focus on privacy, Apple thinks they are righteous. Because of their financial success, Apple thinks they are more powerful than governments. The DMA will test whether they’re right.
2. Return to office = failure of management
Apple, of course, is one of the companies that has mandated its workers return to the office -- and it is not alone. But some new research has found that RTO not only fails to improve productivity and damages worker engagement, it actually stems from a simple need for control by managers. Simply put: bad leadership:
"Results of our determinant analyses are consistent with managers using RTO mandates to reassert control over employees and blame employees as a scapegoat for bad performance".
3. Everyone is a sellout now
The creative industries are having a bad decade. Journalism, in particular, is in a horrible place with jobs lost left, right and centre. Rebecca Jennings wrote a great article about how everyone now has to be a pitch person, and how basically if, for example, you're a writer you're now expected to also be able to market your work -- and won't get employment if you don't. And of course, this has a direct, and negative, impact on your actual work:
“Next thing you know, it’s been three years and you’ve spent almost no time on your art,” he tells me. “You’re getting worse at it, but you’re becoming a great marketer for a product which is less and less good.”
4. Fertile fallacies
Sam Freedman's article on fertile fallacies and policy bubbles was specifically about politics, but I think it's equally applicable to many areas of life. Sam's point is that sometimes bad ideas work at first, up until the point where they don't. This is because they often have a kernel of truth about them, or are a reaction to something which has been pushed too far.
A policy belief that initially began with an important truth – governments need to have control over state spending and some process to maintain it – has ended up distorted into an absurd farce whereby Treasury officials are frantically changing their policy proposals for the Chancellor based on daily fluctuations in projected borrowing for 2029.
But you can apply this idea everywhere. Consider tech: app stores were a reaction to the absolute hell that was mobile apps in the pre-iPhone era, coupled with the opportunity to make something that was a little more secure for users than the PC. This has inflated to the point where you'll find people who genuinely believe that no one should have the right to install software on a device they own, and that developers owe a tithe to whoever made the platform they're using.
5. The evolution of the Conservative mind
There is a connection here, I think, with Simon Wren-Lewis' piece on the evolution of the British Conservative party from neoliberalism as economic doctrine to social conservatism which solely acts in the interests of the wealthy. As with the policy fallacies that Freedman focuses on, the central doctrinal fallacy of neoliberalism has inflated into a bubble that goes well beyond its original intent.
In the UK, the inflationary force in this bubble was Brexit:
The key moment in this transformation in the UK was of course Brexit. Although it is just about possible to rationalise Brexit in neoliberal terms, if we think about power, Brexit was far from neoliberal. The overwhelming majority of businesses and corporations selling to and from the UK suffered serious damage at the hands of newspaper owners and a few very wealthy individuals. This kind of capture of a neoliberal party by monied interests is not really surprising, because once a politician sees themselves as representing the interests of corporations and businesses generally rather than society, it is a small step to start representing the interests of particular and potentially unrepresentative corporations and businesses (and their ‘think tanks’), especially if those businesses happen to be newspapers or party donors or future employers. Corruption inevitably follows.
Of course, this doesn't explain the similar process that has happened in the US and across the world, but there were, no doubt, similar processes at work.
6. AI Agents are the future of computing
One piece that I have read and reread a few times now is Bill Gates' article on how AI agents are the future of computing. I wrote about this a while ago, too, focusing on Apple's 1987 Knowledge Navigator concept. Conversational interfaces change everything, and Gates thinks it effectively means the end of applications as we know them:
In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
I think Bill is right, and he also raises a lot of challenges -- mainly, that to do this stuff properly requires a lot of your personal information to be known by the agent, and managing that in a way which preserves privacy is going to be a tough thing to do.
7. A long interview with Satya Nadella
Another article that I have been rereading is the interview by Axel Springer CEO Mathias Döpfner with Satya Nadella. I find Nadella fascinating: equal parts MBA-bland and tough as old boots, and someone who has done that rarest of things: taken an established business and remade it. While the DNA of the old Microsoft is still there, he's turned the company into something quite different.
There's a tonne of interesting stuff in there -- Obsidian tells me I have nearly 1500 words of quotes saved from it -- covering AI, China, and leadership. But I found this quote pretty interesting, on the relationship between AI and publishers:
After all, with synthetic data training, I think that the incentive is that we create more synthetic data. And if you're training on synthetic data, where you don't have stable attribution to likeness, that becomes a hard thing. So, there is some technological disruption we will have to be mindful of. The fact is, that no publisher will allow you to crawl their content if there isn't a value exchange, and the value exchange has to come in two forms. One is traffic, and the other is revenue share.
8. Technology as rent-seeking process
This article by Wendy Liu from 2019 -- which feels a lot longer ago than just five years -- looks at the business models of technology service companies as effectively being tax collection, and boy, was she right. Every company you can think of now, from Apple to Meta to Google and beyond, seems to be hitching its "growth" wagon to collecting tithes of one sort or another. It's no longer enough to simply make products and sell them: you have to have ongoing revenue from users to grow.
What if we thought of some of the most lucrative tech companies as essentially tax collectors, but privately-run (and thus not democratically accountable)? Economists call this rent-seeking, and what we’re seeing with a lot of tech companies is that their telos is little more than “rent-seeking as a service”. It’s basically baked in to their business model. Once you’ve fully developed the technology underpinning your service - be it coordinating food delivery, or processing payments, or displaying intrusive ads to people who just want to read a goddamn page on the Internet without being entreated to buy stuff - then your whole schtick then becomes collecting taxes on a whole ecosystem of economic activity.
9. Elon Musk continues lying
Joan Westenberg -- who you must read every time -- notes that Elon Musk is, of course, a liar and yet gets a free pass on his lies every single time from credulous journalists. This time it's Neuralink, and its claim to have implanted some kind of brain chip into a human:
Despite providing no evidence of this milestone, and without any 3rd party verification, the claim was quickly republished by major news outlets without scrutiny or confirmation. Journalists (or, more charitably, their editors) have once again eagerly provided publicity to Musk in the pursuit of advertising traffic to their sites, failing in their basic journalistic responsibility to fact-check. To question. To scrutinise. To ask for the truth.
And journalists wonder why journalism is in trouble.
10. More universal public services, please
Jason Hickel is someone you should be reading, generally, and I could have linked to any one of about 10 articles of his I've read recently. But one place to start is his essay on how universal public services help to eliminate the artificial scarcity that capitalism -- and particularly rent seekers -- profit most from:
By universal services here I mean not only healthcare and education, but also housing, transit, nutritious food, energy, water, and communications. In other words, a decommodification of the core social sector — the means of everyday survival. And I mean attractive, high-quality, democratically managed, properly universal services, not the purposefully shitty last-resort systems we see in the US and other neoliberal countries. What does this look like? How do we get there?
I would add a few technology platforms to this list too… but that's another story.
Adapting to the new reality of search
It’s obvious at this point that the landscape of search traffic for publishers is rapidly changing, and not generally for the better. Every SEO I know is complaining about the same patterns: Google results getting swamped by low-quality content; the rise of quick fire-and-forget AI-generated SEO farms, which can impact heavily on short-term traffic in any topic area; and user-generated content being overvalued by Google.
Or, to summarise it: quality content is not, currently, winning the battle for attention.
And then of course there are Google’s and others’ experiments with putting more answers to search queries on the results page. I’m on record as believing that a lot of traffic, especially for pages designed to answer specific queries, is going to go away as AI gets better at answering questions. Even for affiliate content, I think consumers will find answers you can have a conversation with – a completely custom answer to, say, which laptop to buy – so appealing that publishers will see declines in traffic over the coming years.
So then, publishers are facing a few years of transition from old models – where it was possible to get a lot of traffic from terms like “when is the Super Bowl” or “how much is a Ford Fiesta?” — to a future where every single question like that can be answered on the page.
Knowing this, there is no point in setting a strategy for the coming year which doesn’t take account of this longer-term trend. But how can you do that, while also not losing large chunks of visits?
SEO strategies for the next year
The starting point is to look at keyword intent and analyse how likely it is that there is a long-term future for traffic. I follow a fairly standard intent-based split into four buckets:
- Informational: Getting specific answers, usually starting with how/why/what, and commonly answered with some kind of tutorial
- Commercial: Usually showing some kind of purchase intent, at either early or late stages in the funnel. Almost always including “best” queries, comparisons, reviews, product categories or product/service names. These are best answered by reviews and comparisons, and they are, of course, the heart of affiliate revenue.
- Transactional: All about completing the immediate action of purchase. Usually involves keywords like “buy”, “cheap”, “quote” and sometimes also location-based, such as “buy cheap tires in Canterbury”.
- Navigational: Site and brand names, typically typed in because you want to find a specific brand/product site.
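To make the bucketing concrete, here's a toy sketch of how you might sort a keyword list into those four buckets. The patterns and the brand list are illustrative assumptions on my part, not a production taxonomy – real keyword research relies on much richer signals (SERP features, search volumes, click data) than a handful of regular expressions.

```python
import re

# Rough, rule-of-thumb patterns for three of the four intent buckets.
# Navigational intent is handled separately, via a list of known brand names.
INTENT_PATTERNS = [
    ("transactional", re.compile(r"\b(buy|cheap|price|quote|deal|order|near me)\b")),
    ("commercial", re.compile(r"\b(best|reviews?|vs|comparison|top \d+|alternatives?)\b")),
    ("informational", re.compile(r"^(how|why|what|when|where|who)\b")),
]

def classify_intent(keyword: str, known_brands: set[str]) -> str:
    kw = keyword.lower().strip()
    # Brand or site names typed in to reach a specific destination.
    if any(brand in kw for brand in known_brands) and not any(p.search(kw) for _, p in INTENT_PATTERNS):
        return "navigational"
    for label, pattern in INTENT_PATTERNS:
        if pattern.search(kw):
            return label
    return "informational"  # catch-all bucket when nothing else matches

if __name__ == "__main__":
    brands = {"liverpool echo", "macuser"}  # hypothetical examples
    for kw in ["how much is a ford fiesta", "best laptop for students",
               "buy cheap tires in canterbury", "liverpool echo"]:
        print(f"{kw!r:40} -> {classify_intent(kw, brands)}")
```

Even something this crude, run over your keyword portfolio, will show you roughly how much of your traffic sits in the informational bucket that on-page AI answers are most likely to eat.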
As SEMrush noted last year, transactional and commercial keywords are on the rise, while informational and navigational are declining. That’s good news if you’re looking to affiliate content to drive your revenue over the next year or so, but it also means that informational queries are both dropping in volume and increasingly being answered on the page through AI-driven features like Search Generative Experience (SGE).
For entertainment brands that have come to rely on informational content about, say, Love Island and have no authority at all about products, this could lead to a particularly bad short-term squeeze.
The temptation will be to try to turn entertainment brands into product focused ones, but it’s worth not going overboard with this, as over the long term it could dilute authority in other areas. To put it another way, if it doesn’t fit, don’t force it: no one really wants reviews of Love Island false eyelashes (sorry, Liverpool Echo).
Where you should be focusing across the board, though, is on quality, particularly in three areas:
- Originality
- Authorship
- Experience
For a long time, one of the dirty secrets of SEO work was the amount of time you could spend trying to steal traffic from your competition by creating “me too but better” content. Check out what keywords they were ranking for, and if you didn’t have equivalents, create them and go on an updating binge to get them to rank. This had the double whammy of both getting you traffic, and weakening your competition.
I told you it was dirty, didn’t I?
The problem was that, combined with headings targeting related keywords, this left everyone with content which was highly optimised but unoriginal. It all looked, and often read, the same. It’s no wonder this is the kind of approach which has worked for using AI to generate quick sites for profit: any content approach which can be reduced to a mechanical process will ultimately be doable by an LLM.
Unleash the quirk-en
To stand out, you are going to have to bring some originality to your approach. That doesn’t mean abandoning the basics of on-page SEO, or never looking at your rivals for ideas. But it does mean that when a rival takes one approach, finding an original way to answer the same audience need will help you stand out. And in a world of AI-generated grey goo content, you will need to stand out.
How do you do that? Well, that is a creative question for you to answer – and you do still have some creative journalists left in the building, right? My personal favourite is The Verge’s magnificent pastiche of an affiliate article, but your mileage may vary – and more importantly, the things which make your audience laugh, cry, and so on are areas that only your experts can tell you.
Why Roland Barthes would have been a terrible SEO
The second area is our old friend authorship because, far from being dead, the author is back at the centre of the universe. Unless you have been hiding under a rock, you will already have good-quality author pages which link to every single article from each author. You will also have purged your sites of those dreadful “Brand Byline” things, which indicate either a confused content strategy or content of such low quality that no one wants to put their name on it.
Now it’s time to go deeper, and that will mean using any means necessary to establish the authority of authors. Make sure that your authors are “out there” – no, not wearing tie-dye clothes and going to Grateful Dead gigs; I mean getting as many authoritative mentions on media you don’t own as possible. Guesting on podcasts, writing guest posts, being quoted by news organisations – encourage your authors to build and raise a professional profile. If one of your journalists is the go-to expert in a topic area, that will pay off over the long term as Google gives their authority increasing weight, and the sum of their authority is your authority as a brand.
There is no on-page or technical SEO fix for this. If your journalists spend all their time in the office churning out “me too” articles and never actually doing any work to raise their profile, they are never going to have enough authority. Set them free. Get them out making connections. Fly, my pretties, fly!
Are you experienced?
This brings us nicely to the last point: experience. Not everyone noticed when, at the end of 2022, Google stopped talking about EAT (expertise, authoritativeness, and trustworthiness) and added an extra E: Experience. As they put it at the time, “does content also demonstrate that it was produced with some degree of experience, such as with actual use of a product, having actually visited a place or communicating what a person experienced?”
Now, here’s another dirty little secret: quite a bit of affiliate-focused content out there is written with little or no actual experience of the product. Yes, that’s right, some people write reviews having never had the products in their hands. What, you think PRs are actually sending out products to hundreds and hundreds of big and small publishers to test?
There was a good argument for this: doing reviews based on desk research was a time-saver. Rather than consumers having to comb through spec sheets and a thousand user reviews on Amazon, one journalist could do it well and get a better result, with the application of their expertise. But… it was always a bit of a cop-out, at least for major publishers who could get the real thing in for review.
In the era of experience, desk research is dead. You need to write from first-hand experience of the product, and you need to demonstrate it as often as you can in the copy. You are using first-person, right? Not only that, but you’re not still clinging to old-fashioned “we tested this” are you? If you are, 2024 is the year you stop doing that. It matters.
Adapting to the new reality
This advice should be good for you in 2024, but it’s also vital as the foundation for the AI-driven search landscape to come.
All three factors – originality, authorship, and expertise – are things that LLMs don’t have, and importantly probably will never have. Although a human can use an LLM to achieve original results, LLMs are, essentially, unoriginal thinkers. They are also not authors in their own right (no, LedeAI, you are not a journalist), so are unlikely to be able to build a profile outside your site. And while they have ingested a lot of expertise, LLMs are really experts in nothing – and, as good as it is, no one is going to invite Copilot on to the evening news to discuss anything (sorry Microsoft).
But here’s the thing: all of these human factors are expensive. Too many executives, particularly ones with boards that lack experience of frontline journalism (and yes, they do exist, and you can do your own research to find them), think that when journalists spend time not writing, they aren’t being productive.
If your metrics are the number of articles, and not the quality of those articles, then you are going to struggle to adapt to the new reality. And the new reality really starts today.
Some thoughts on Apple's response to the EU DMA
There is always a point in every Robin Hood film where Robin stops robbing the rich to feed the poor and doffs his hat to King Richard, stepping back and allowing the monarch to take his rightful place as “protector of the realm”. In a feudal system, the lord must prevail because the lord is the peasants' only true guarantee of peace.
I have had this in mind since Apple announced its response to the EU's Digital Markets Act. The rules Apple published are constructed precisely to make alternative methods of distributing apps unattractive to customers, and both toxic and unprofitable for developers. Tonally, it's also a big "fuck you I won't do what you tell me to" to the EU, one of the most bitter and resentful public statements I have seen. It reminds me of Bill Gates' sullen deposition to regulators when Microsoft was being investigated back in the 1990s – and we all know how that case ended up.
It amazes me how quickly successful, rich companies and people turn into sulky teenagers the moment even the most minor demand is made of them. Success, it seems, breeds little character of worth and encourages a kind of childishness which most people grow out of by the age of 21. Usually, I would expect that from Elon Musk or Donald Trump, but it seems Tim Cook has had a dose of it too.
Rich people gonna rich. But what amazes me more is how many cheerleaders they have. Now Apple has always had cheerleaders — lord knows, at times I've even been one of them — but the latest wave of online criticism of those of us who would very much like Apple to allow us to use the computers we bought in the way that suits us, rather than the way that suits Apple, strikes me as different. Louder. More vocal. More focused on the idea that not only is wanting this stupid, but that it's somehow a threat to other people's security.
And as we all know, when people feel their security is threatened, they act a little weird. Moral panics, and all that.
But then I remember another characteristic of feudalism: many people are most comfortable when there is a feudal lord to protect them and make decisions for them, and so vociferously attack when anyone suggests that, perhaps, the existing social order needs to change.
That's not simply because they long for the attention of the rich and powerful and see protecting them as a way to gain favour. Feudalism survived by ensuring that the peasants were always helpless, always in need of protection, and of course always threatened. The lord protected you from anarchy. Unable to imagine another world being possible, the peasant can only support the lord's right to rule because to do otherwise would mean either a more cruel lord, or dangerous lawlessness.
As we move into technofeudalism, where instead of owning technology we rent it, those old peasant instincts are resurfacing. There is a big, bad world out there of hackers, thieves, scammers, and other ne'er-do-wells, and only feudal lord Apple can protect us from it.
"But You have to protect people"
I have some sympathy for the argument that people require protecting. We still get, on a weekly basis, scam calls on our landline from people claiming to be from Microsoft, wanting to “sell” my now-departed in-laws protection for their PC. My father-in-law had dementia, used a computer, and managed to sign up to every kind of dubious data gathering exercise known to man. We are on many lists. I have become very used to calmly asking the person at the other end of the line whether their parents know they attempt to steal old, vulnerable people's savings for a living.
Having protections available is a good thing. Having them as the default on very widely used devices like smartphones is also a good thing. But having no ability to turn them off, no matter what? Not so good.
Having protections doesn't mean everyone has to use them. Those who want to opt out should be able to do so. No one is suggesting that the App Store should be closed down, and anyone who wants to be protected by Apple should be able to carry on.
But then there's the protection argument again. If it can be turned off, the argument goes, then bad people will persuade the vulnerable to do just that.
All of which is a good argument for “parental control” systems, which allow the vulnerable to be protected by someone they know. It is not a good argument for locking down everyone's devices, unless you believe that everyone out there is stupid and needs feudal lord Apple to protect them.
Ah.
I'm not going to link to the original post or put a name on it because I know the person who wrote it means well, and they are by no means the only one making much the same argument:
I get what you’re saying and that’s fine for nerds, but the average punter isn’t able to decide that, is terrified of tech, and doesn’t even know what software is. They are the sorts of people who will tell you their password if you tell them it’s for a survey The result of them making such decisions is very predictably going to be like hyenas around a corpse
I fundamentally disagree with this view, which I find exceptionally patronising towards ordinary people, bordering on misanthropic. Back when the iPad was launched, Cory Doctorow wrote eloquently about why he wouldn't be buying one:
But with the iPad, it seems like Apple's model customer is that same stupid stereotype of a technophobic, timid, scatterbrained mother as appears in a billion renditions of "that's too complicated for my mom" (listen to the pundits extol the virtues of the iPad and time how long it takes for them to explain that here, finally, is something that isn't too complicated for their poor old mothers).
Unfortunately, it looks as though Apple has been very successful in persuading people not only that "your mom" is too stupid to understand what software is, but that this non-existent "mom" is actually the majority of people.
But this is also a view of human relationships to technology which is self-perpetuating: if you never bother to teach people how to do something, such as protecting themselves against scams, unsurprisingly they never become particularly good at doing it. Likewise, if you never let your children play outside, guess what happens?
Learned helplessness is a thing, and it always benefits the most powerful.
And, as Dan Moren points out, Apple's dire warnings of terrible consequences should you be foolish enough to allow an application to be installed from any other source than the App Store are pretty hilarious when you consider that they are implementing the same system of notarisation which keeps Mac apps free of malware. Evidently, Apple believes that someone who spends £1000 on a computer is significantly more tech-savvy and able to look after themselves than someone who spends £1100 on a smartphone.
Unless, of course, ultimately Apple believes it's for the best if Macs are as locked down as iOS.
Hmm.
20/20 Hindsight is 20/20
In retrospect, Dan Gillmor was right:
A few months ago, when Apple introduced its iPad Pro, a large tablet with a keyboard, CEO Tim Cook called it the “clearest expression of our vision of the future of personal computing.” That was an uh-oh moment for me. Among other things, in the iOS ecosystem users are obliged to get all their software from Apple’s store, and developers are obliged to sell it in the company store. This may be Apple’s definition of personal computing, but it’s not mine.
At the time, I shrugged off Dan's arguments. Wasn't there room for a computer that was powerful, but incredibly easy to use? Where there was never going to be a worry about malware? I think I saw the iPad as just a tiny step on from the Mac: the real computer for the rest of us.
I was wrong. Dan was right. As was Cory Doctorow in 2006. As was Mark Pilgrim the same year.
Apple isn't a bunch of evil geniuses wanting to rule the world. Ultimately, Apple is driven by the same forces as every public company: the demand from “the market” for continual growth. As anyone with a passing interest in compound growth will tell you, that becomes significantly harder as a company gets bigger. For Apple, 10% revenue growth in 2004 meant adding just $800 million. By 2014, that required an additional $18 billion. In 2024, that will require $38 billion.
There are no more devices as big as the iPhone to be launched, no more hardware markets worth tens of billions of dollars which Apple can magic into existence to keep their share price growing (and no, Apple Vision Pro is not it). So the only way of keeping that revenue growth rolling is to squeeze more from customers, and to ensure that not one cent of current revenue slips from the feudal lord's fingers. And that includes the tens of billions of dollars of revenue it makes from the App Store.
The only way for Apple to keep growing is to not only retain the control — and thus revenue — it has, but to tighten the screws and get more control. To ensure you can't buy an iPhone without also paying them a monthly tithe for storing photos. To ensure that no application gets sold without Apple getting a cut. The role of the feudal lord is one that Apple is choosing to play because it makes more money that way.
Turning points…
I have never been one to entirely excuse Apple its control-freakery, but I've also respected them and liked the products they make. I started writing this on an M2 MacBook Air, and it's the best laptop I have ever owned in many ways, not least the battery life. Without Apple's determination to do its own thing, to "own the whole widget" as Steve Jobs would have said, that battery life wouldn't be possible.
But: the Mac model of (relative) openness is not the one which Apple has chosen to pursue. Instead, its focus is on keeping things closed, reducing developers to digital serfs paying a tithe to the feudal lord whose land they are allowed to plough. And of course, ensuring that its customers, who pay a handsome margin to the company simply to buy its products, cannot choose what they do with those expensive devices.
Just as the release of the iPad Pro was a turning point for Dan Gillmor, Apple's response to the Digital Markets Act feels like one for me.
I started writing this on the MacBook Air, but I'm finishing it on a ThinkPad X1 Carbon running Fedora 39 Linux. I'm using the same tools to write on both: Obsidian, configured just how I like it, even down to using LanguageTool to proof it.
The MacBook Air will almost certainly be the last Apple product I buy. When the time comes to replace my iPhone, maybe towards the back end of this year, I'll look for something I can install a de-Googled version of Android on.
I've been playing with a Pixel 6 running Graphene, which even lets you install Google's apps on it, but restricts them and prevents them from doing the full range of spying on you. I like that idea: taking a dangerous but handsome animal, and ensuring you can admire its beauty while stopping it biting you.
Perhaps one day Apple might let me do the same with the device I paid them a thousand pounds for. But I'm not going to hold my breath.
And no, I am not switching fealty from the Apple feudal lord to the Google one. I love this, from Dave Megginson:
When Apple fleeced and Google spied,
Where, then, should our loyalty lie?
The answer to that is simple: to people. Not to feudal lords, no matter what colour their flag.
John Scalzi has a new Mac
As he says, it is really weird going to a 16in laptop after using a 13in one – I dusted off my 16in 2019 MacBook Pro a couple of days ago to keep it updated and make sure I had all my files on it, and it makes using the Air feel like you’re using a toy computer. It also reminds me how much better the keyboard on the Air is – the MBP was, I think, part of the last generation of Macs before Apple dropped their terrible switches.
Like John, I also find myself using USB-C to charge the Air rather than the MagSafe port. I don’t get the love for MagSafe. Sure, if you trip over the cable you have a chance with USB-C to remove your laptop from whatever surface it’s on, but on the few occasions I have kicked a cable the USB-C has come out anyway. Maybe other people’s tables are more slippery than mine? And anyway – with battery life like the M-series machines have, my Air mainly gets charged overnight. It’s really rare I bother plugging it in during the day.
And I entirely agree with him about this, too:
If you’re using your laptop for word processing or spreadsheets, with web browsing and occasional light gaming, and you want a Mac, please for the love of God get a MacBook Air, which is so much cheaper, much lighter, and more than enough for what you’re doing with your computer.
The time was when even non-pros really wanted/“needed” a MacBook Pro. Now, that’s not the case: if (like me) you spend most of your life in ordinary business applications and don’t do professional audio/video editing, an Air will be more than good enough. Even my puny little base model Air (8GB RAM, 256GB SSD) is faster than my much more beefy Intel MacBook Pro at video editing. Not that I do a lot, but on the odd occasions when I do, it works.
The Mac
Without the Mac, I wouldn’t have had a career.
The first time I encountered a Mac was in 1986. As a fresh-faced know-it-all Humanities student, I had use of the computer lab at Hatfield Polytechnic. Although I (like everyone) had an account on the VMS computer, I was much more drawn to the ten or so strange boxy all-in-one computers arranged against one wall.
The Mac, so my computer scientist hall mates told me, was something pretty special. They were also the people who, once I requested and got an account on the college Unix machine, thought it was hilarious to hack it and give me root access— but that’s another story.
Over the next couple of years, the Mac and me became firm friends. I can still do a pretty good impersonation of the sound that the floppy drive made when you ejected a disk. You heard that a lot because the computer had a single drive and so you spent a lot of time swapping disks around. I remember getting a pirated copy of WriteNow when it was released, and how amazingly fast it was compared to MacWrite.
When I graduated in 1989, I already knew I would be going back to do a PhD, but spent a year working at Apple in Information Systems and Technology, AKA IS&T, as a desktop support assistant. This was possibly the easiest technical job in the world because around 95% of all problems could be resolved in one of two ways: reinstalling the system, or replacing the motherboard. And as the spares warehouse was downstairs, motherboards were not in short supply. I have no idea what the stock control system was, but I never encountered it.
I could write a whole article about that year. About the fantastic community of nerds that you could tap into via AppleLink, the internal email system and proto-internet which also linked Apple to its dealers, and which ultimately became America Online. About dropping the only Mac Portable in the UK from a height of around two metres while the hard drive was spinning – and it surviving without a scratch (that machine was tough). About having an alpha release of System 7, which was still known as Blue, and installing it on a machine just to see what it looked like (slow, buggy and disappointing was the answer).
But the most important thing about that year was that it gave me my first Mac: a Mac Plus, with a 20Mb external hard drive, a second 20Mb external SCSI drive, and an ImageWriter II printer. I (ahem) “borrowed” some extra SIMMs to take it up to the full 4Mb of memory, and wrote half a thesis on that.
The other half got written on my next Mac: an LC 475 AKA Performa 475 AKA Quadra 605. By that point, I was teaching as well as studying, and the extra money I made paid for a much-needed new Mac. The Mac Plus, which was eight years old, really wasn’t keeping up with my wide range of pirated software. Most importantly, it couldn’t play Arkanoid in colour.
By the end of that year I needed a job, and like every Humanities graduate, I looked to the Wednesday edition of The Guardian to provide. That was when the media jobs were advertised, and I applied for a job on MacUser.
I knew nothing about journalism — I had never aspired to be a journalist, and it probably took me another two years before I could describe myself as one without embarrassment. Given that my job mostly involved calling PRs to get equipment to test in the labs, unboxing said equipment, and working with a succession of real journalists to devise ever more fiendish ways to prove that Printer X was better than Printer Y, my reluctance was probably justified.
Six years later, I was editing the magazine. A title so profitable that it built Felix Dennis several houses, and provided the money to launch Maxim, which took Felix from “rich” to “seriously rich” when he sold it. I have a story about the sale of Maxim in the US which is funny, full of swearing, shows how much luck Felix had, and is completely unrepeatable.
Being editor of MacUser was serious. You got invited to a ludicrous number of swanky events, got to be a D&AD judge, and won many awards. Apple was on its uppers, but MacUser was thriving. I sometimes think that the better MacUser did the worse Apple went, and certainly Apple’s revival under Steve Jobs coincided with the slow demise of the Mac titles. I don’t think it was his fault: at that point, this weird internet thing was beginning to gut advertising revenues. But you never know…
Since I left MacUser I’ve worked for other magazines, other publishers, clients and friends on brands as diverse as The Week, Grazia, heat (yes, it is lower case) and Motorcycle News. I made a brief return to full-time tech journalism at Dennis in the mid ‘10s, and had another ball, but technology journalism has moved on a lot and I don’t think it’s as much fun as it was (the parties… well, they’re not as spectacular. One day, I’ll write up the story of the launch of A Very Major Product which… let’s draw a curtain over that).
But: without the Mac, without that odd little box I encountered in a computer lab in 1986, I wouldn’t have had the career I have had. Who knows what I would have done? Most of my friends spent a chunk of the 90s working in record shops, while I was getting flown around the world. And I don’t think I had their level of ambition.
So, thanks, Apple. Thanks for my career. Thanks for 40 years of fun, bitching about bad designs, purring over good designs, plastic, polycarbonate, aluminium, aloooominum, titanium, and whatever quantum material the Cube was made from. Thanks for the Touch Bar on the MacBook Pro that’s currently warming my legs, and for the ridiculously long battery life on the M2 MacBook Air.
Thanks to Susan Kare for the most playful icons in the world. And thanks to Bill Atkinson, Steve Capps, Andy Hertzfeld and all the wizards who stayed up late at night to fit a GUI into 64Kb of ROM and 128Kb of RAM.
The information grey goo
I’m broadly positive about the future of LLMs and AI, but no one should pretend there will not be difficulties or that the transition to using machines isn’t going to pose plenty of challenges.
Some scenarios, though, are profoundly dangerous, not just for the publishing and creative industries, but for society as a whole.
When we discuss the threat of AI, many people imagine rampant machine intelligences with big guns hunting us all down in a post-apocalyptic wasteland (thank you, James Cameron). I doubt that’s likely. But one consequence which I can see us sleepwalking into is the informational equivalent of an apocalypse that dates back over thirty years: the “grey goo” scenario.
“Grey goo” was a concept which emerged when nanotechnology was the hot new thing. First put forward by Eric Drexler in his 1986 book Engines of Creation, this is the idea that self-replicating nanobots could go out of control and consume all the resources on Earth, turning everything into a grey mass of nanomachines.
Few people worry about a nanotech apocalypse now, but arguably we should be worried about AI having a very similar effect on the internet.
Nowhere is safe
If you have been paying attention, you will have noticed that the amount of content created by LLMs has been increasing at a vast rate. No one knows how much content is being generated, but SEOs – whose job it is to understand content on the internet – are concerned. Less ethical SEOs have used a combination of scraping and generative AI to quickly create low-quality sites with tens of thousands of pages on them, reaping rewards in traffic from Google over the short term.
The problem for Google is that creating a site like that is the work of perhaps a week – and probably a lot less if it can be automated – while it takes months for the search engine to spot that it’s a low-quality site. With more automated approaches, it will become trivial to create spammy sites far faster than Google can combat them. It’s like a game of whack-a-mole, where there are moles appearing at an exponential rate.
And Google isn’t the only platform which AI is threatening to turn to mush. Amazon has an issue with fake reviews generated by AI. And although it claims it is working on solutions, it appears to be incapable of even spotting fake AI-generated product names.
But what about human-to-human social networks? They have already been flooded with AI-generated responses. And it will only get worse, as companies create tools which let brands automatically respond to posts based on keywords using AI-generated text. Sooner or later, saying something which suggests you are in the market for a new car will get you spammed by responses from Ford, Skoda, VW, Tesla, every car dealer in your area, every private second hand seller… you get the picture. Good luck trying to find the real people.
It is obvious that anywhere content can be created will ultimately be flooded with AI-generated words and pictures. And the pace of this could accelerate over the coming years, as the tools to use LLMs programmatically become more capable.
For example, think about reviews on Amazon. It will be possible to create a program which says “Find all my products on Amazon. Where the product rating drops below 5, add unique AI-generated reviews until the rating reaches 5 again. Continue monitoring this and adding reviews.”
We are already at the point where you can use natural language to create specialist GPTs. The ability to create these kinds of programs is ultimately going to be in the hands of everyone. And this applies to every rating system, all surveys, all polls, all user reviews – and similar approaches can be created for any kind of content.
Can Google, Amazon and the rest fight back? Yes – but at great cost. And it’s not clear that even the likes of Google have the resources to effectively fight millions of AI users creating billions of low-quality pages at an accelerating scale.
Model collapse
AI-generated content is also getting better fast: a side-by-side comparison of content created from the same prompt in GPT-3.5 versus GPT-4 Turbo will show you the difference. And humans are getting better at writing prompts and giving AI models the information they need to do a better job. So surely this is just a short-term problem, and AI content will get “good enough” not to flood the internet with crap?
The issue is that there is a counterbalancing force at play. As more and more AI-generated content floods the public internet, more and more of that content will end up as training data for AI. Exacerbating this, quality publications are largely blocking AI bots, for entirely understandable reasons, which means less and less high-quality content is being used to train the next generation of models.
For example, researchers have noted that the LAION-5B dataset, used to train Stable Diffusion and many other models, already contains synthetic images created by earlier AI models. This is the equivalent of a child learning to draw solely by copying the images made by younger children – not a scenario which is likely to improve quality.
In fact, researchers already have a name for the inevitable bad outcome: “model collapse”. In this case, the content generated by AI stops improving, and starts to get worse.
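If you want a feel for the mechanism, here is a toy sketch – my own illustration, not any researcher's actual experiment. It fits a trivially simple "model" (just a normal distribution) to data, then repeatedly retrains it on small samples drawn from the previous generation's output. With so little fresh data per generation, the fitted distribution tends to drift and narrow, steadily forgetting the shape of the original data – a miniature version of what happens when models train on their own synthetic output.

```python
import random
import statistics

# Toy "model collapse" demo: each generation fits a trivially simple model
# (a normal distribution) to a small sample produced by the previous
# generation's model, rather than to the original data.
random.seed(1)

mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
for generation in range(1, 51):
    # "Synthetic data": a small sample drawn from the current model
    sample = [random.gauss(mu, sigma) for _ in range(10)]
    # Retrain: refit the model using only the synthetic sample
    mu, sigma = statistics.fmean(sample), statistics.stdev(sample)
    if generation % 10 == 0:
        print(f"generation {generation}: mean {mu:+.2f}, std {sigma:.2f}")
```

The analogy is loose, of course – real models are vastly more complicated – but it shows why training on your own output, with less and less fresh human data coming in, is a losing game.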
The Information Grey Goo
This is the AI Grey Goo scenario: an internet choked with low-quality content, which never improves, where it is almost impossible to locate reliable public sources of information because the tools we have been able to rely on in the past – Google, social media – can never keep up with the scale of new content being created. Where the volume of content created overwhelms human or algorithmic abilities to sift through it quickly and find high-quality stuff.
The social and political consequences of this are huge. We have grown so used to information abundance, the greatest gift of the internet, that having that disrupted would be a major upheaval for the whole of society.
It would be a challenge for civic participation and democracy: citizens and activists would find it far harder to access reliable online information, opinions, debates, or campaigns about social and political issues.
With reliable information locked behind paywalls, anyone unwilling or unable to pay will be faced with picking through a rubbish heap of disinformation, scams, and low-quality nonsense.
In 2022, talking about the retreat behind paywalls, Jeff Jarvis asked “when disinformation is free, how can we restrict quality information to the privileged who choose to afford it?” If the AI-driven information grey goo scenario comes to pass, things would be much, much worse.
Weeknote, Sunday 21st January 2024
So much blood this week. Fortunately, all of it was removed from me under medical supervision. On Tuesday, I had a bit taken for blood tests. I have been feeling a bit run down, and was wondering if my iron levels were falling. Turned out they were fine, but my glucose levels are a bit elevated – not to the diabetes stage, but heading in the wrong direction, so I will have to watch my diet and get more exercise. Yay.
Then on Wednesday, more blood, this time donating. I love giving blood. It's such a tiny thing to do but such a wonderful little symbol of your commitment to other people, for no reward other than an orange Club biscuit at the end.
Oh, and I finally deleted my Substack. Everyone who was subscribed to it should have been ported over to WordPress. I am, though, considering whether I should move to Ghost. Because hey, who doesn't love a bit of tech-related shenanigans?
This morning we went to see Poor Things, at a 10am showing (which feels almost naughty). If you haven't been, you should go: it's the most brilliant film I've seen in a long time, with fantastic directing and performances. I could spend a couple of days trying to unpick all the threads from it, and it still wouldn't scratch the surface. Plus, Emma Stone should be a shoo-in for the Oscar.
The three things which most caught my attention
- If you're not reading Rachel Coldicutt's occasional newsletter, you should be.
- I have many feelings about the work from home vs return to office wars, but the biggest one is this: it's not a war.
- This is from last year, but basically Andrew Ridgeley is a lovely man, and what a loss to music George Michael was.
Things I have been writing
One of the biggest concerns I have about the current AI-mania is the lack of understanding of what a major change it is. Now that Microsoft has started to roll out Copilot for Microsoft 365 to all sizes of business, it's likely that more and more will turn it on (at $30 a licence) and think that's their "AI stuff" sorted. And then, of course, lay off 10% of the workforce because of what their spreadsheet reckons is the "efficiency gain".
That, of course, is bunk. Using AI in your business is about people, and how you train them, and it demands a change from a "one and done" training approach to a continuous structured learning system – something that's not easy.
I also wrote a related piece about using the ADKAR change management framework to roll out AI. The point that I wanted to get across was that you are committing to a major strategic change, and you need to do that formally – and manage it, rather than just imposing it on teams. ADKAR is great for that, and, as I note, if you do it well it's not actually going to be cheap, because you will potentially need new roles to implement ongoing optimisation of the way you work with AI. Interesting times: it reminds me of the early days of the web.
Things I have been reading
I finished off Neal Asher's Jenny Trapdoor, which was… OK. I spotted the end coming about 20 pages in, which in a novella is always possible but a bit disappointing anyway.
Then I started and rapidly finished Stephen Baxter's Creation Node, which was very Baxter, with all that entails. I felt in places like it was a cosmology lecture masquerading as a novel, there were aspects of it which made no sense at all from a plot perspective, and had I been editing it I would have wanted some of it to just get dropped. It felt, at the end, like Baxter had created an interesting backdrop for a story but not really put much of a story into it. Which, as I said, is very Baxter.
How to roll out AI in a creative business
I talked recently about how changing the culture of learning in your business will be important if you are to make the most of AI. But no matter what, you’re going to have to roll it out – and you need to do that in a structured way.
Remember, this isn’t just an ordinary technology roll out: it’s a change management process that will have a lot of impact on your business. One framework which can help, and which I have found incredibly powerful for managing change at scale, is the ADKAR model of change management.
This model consists of five stages: Awareness, Desire, Knowledge, Ability, and Reinforcement. Each stage focuses on a different aspect of the change process, from creating a clear vision and generating buy-in, to acquiring the necessary skills and (importantly) sustaining the change over time, something that’s often neglected.
So how might you use ADKAR when looking at an AI rollout?
Awareness
At this point, your focus is to communicate the need and benefits of AI for your business, such as improving efficiency, enhancing customer service, or gaining insights. Explain how AI aligns with your vision, strategy and values, and what challenges it can help you overcome. Use data and evidence to support your case and address any concerns or misconceptions.
Remember, too, that this stage is about the need for change, not that change is happening. The most important outcome for this stage is that everyone understands the “why”.
Key elements of building awareness
- Start with your senior leaders. In effect, you need to go through a managed change process with them first, to ensure they are all aware of the need for change, have a desire to implement it, and have the knowledge they need to do so. Your senior team has probably been through quite a few changes, but none of them will have gone through what you are going to experience with AI.
- Explain the business drivers making the use of AI essential. Don’t sugar coat this, but be mindful of not using “doom” scenarios. Your model should be Bill Gates’ “Internet Tidal Wave” rather than Stephen Elop’s “Burning Platform”.
- For every single communication, ask yourself whether it contributes to helping employees be able to think "I understand why this change is needed". If not, rethink that comms.
- Be clear and consistent in messaging – and have leaders deliver the message (but make sure they are clear about it themselves).
- Tailor your message. Customize communication for different groups within the organisation. Different stakeholders may have different concerns and questions, so addressing them specifically can be more effective.
Desire
Building desire is all about cultivating willingness to support and engage with the change, and for AI, it’s incredibly important. While AI is a technology, it requires cultural change to succeed – and changing a company culture is very hard. Without building desire, any change which threatens the existing culture will fail.
There are many factors which influence whether you can create a desire for change. Personal circumstances will matter, and the fear with AI is that employees will lose their jobs. That’s a big barrier to building desire.
And, in some cases, those fears will not be misplaced, so it’s critical to be clear about your plans if you are to win enough trust to create desire. Consider, for example, making a commitment to reskill employees whose roles are affected by AI, rather than giving bland statements about avoiding redundancies “where possible”.
This is especially critical if you have a poor track record of managing change – so it’s vital that you are in touch with how your change management record really looks to your teams.
At this point, you should also identify your champions. Who, in the business, has a lot of influence? Who are the people who are at the centre of many things, who act as communicators? Who do other employees go to for help and advice? Are there people who, when a new project starts, are the first names on the list? They are not always senior, so make sure you’re looking across the board for your champions.
Even if they are not the most senior people or the most engaged with AI at this point, if you win them over and make them part of the project, you will reap the benefits.
Remember, too, that desire is personal to everyone. While making the business more efficient and profitable tends to get your senior team grinning, not everyone in your business is motivated by that. Focus, too, on the benefits for people’s careers, work/life balance, and especially with AI, freeing up time to do more creative things and less routine work.
And don’t, whatever you do, talk about how “if we don’t become more efficient, people will lose their jobs”. I’ve seen this approach taken many times, and in creative businesses, it almost never works. Desire is about motivating people to change, and fear is a bad motivator.
Key elements of building desire for AI:
- Inspire and engage your team members to participate in the AI adoption process.
- Identify and involve key influencers and champions who can advocate for AI and influence others.
- Highlight the personal and professional advantages of AI, such as learning new skills, increasing productivity, or advancing career opportunities.
- Create a sense of urgency and excitement around AI and its potential.
Knowledge
If awareness is about the why, the knowledge stage is about the how: how are we going to use these tools? This is where you build knowledge of the tools and the processes by which you use them.
One mistake that I have seen made – OK, to be honest, I have made – is to focus too heavily on training people on how to use a tool, without also training on changes in the processes you’re expecting people to make. Every new tool, including AI, comes with process changes. And, in fact, the process changes that the tool enables are where you achieve the biggest benefits.
Training people in the context of the processes they follow (and any associated changes) relates the training to what people do – and that’s why I would recommend role-based training, which may cut across teams. If you have large teams, consider further segmenting this according to levels of experience. But I would recommend that you train everyone if possible: people who are left out may end up feeling either that AI isn’t relevant to them (and it will be) or that they have no future in your new, AI-enabled business.
Key elements of building knowledge of AI:
- Provide adequate and relevant training and resources for your team members to learn about AI and how to use it effectively. Make sure you document any process changes.
- Tailor the training to suit different learning styles, levels of expertise, and roles.
- Use a range of methods, such as workshops, webinars, online courses, or peer coaching.
- Encourage feedback and evaluation to measure progress and identify gaps.
Ability
So far, what we have done is all theory. This stage is where the rubber really hits the road because it’s where all that training starts to be implemented. And at this point, people will start to spot issues they didn't see before as they get the hang of new processes and get better at them. They will also find things you didn’t anticipate, and even better ways of using AI.
One aspect that’s critical at this stage is the generation of short-term wins. For a lot of your teams, AI is the proverbial big scary thing which is going to cost them their jobs – and even if you have had a successful “desire” phase, it can be easy for people to be knocked off course when that is at the back of their minds, or they are reading scare stories about how AI will mean the end of humanity.
Quick wins will help with this. They are positive, visible evidence about the success of people they know using AI, and in storytelling terms that is absolute gold dust. Remember, though, that the positives must be personal, and in a creative business they need to focus on improving the creative work. Shaving 10% of the time taken from a boring business process might be incredibly valuable to you, but it’s not all that compelling to a writer, editor, or video producer.
Key elements of building ability in AI:
- Support your team members to apply their AI knowledge and skills in their daily work.
- Create a safe and supportive environment where they can experiment, practice, and learn from mistakes.
- Provide guidance, feedback, and recognition to reinforce positive behaviours and outcomes.
- Make sure success stories are being shared, and that your teams are helping each other.
- Monitor and track performance and results to ensure quality and consistency.
Reinforcement
This stage focuses on activities that help make a change stick and prevent individuals from reverting to old habits or behaviours, and I think it’s both the most crucial stage of managing a change in technology or process – and the one that’s easily forgotten.
There are several reasons for this. First, commitment even among your senior team may be waning, leading to reduced encouragement from the top to continue along the path. The people who thought that your rollout of AI was likely to fail will probably be latching on to every bump in the road and turning them into roadblocks – ones that they “knew would happen”.
This is why it’s incredibly important to have all your senior team go through a parallel managed change process, to make sure they are all bought into what you want to achieve. AI is a strategic change with the same level of long-term impact as a complete restructure of your entire business, so there is no getting round managing that process for your senior team.
If you are starting to get resistance to AI deployment at this stage, check whether your senior team are still bought into it. In the worst case, some of them may be sending subconscious signals to their teams that they don’t have to keep going.
And now the bad news: in terms of budget, the reinforcement phase may cost as much as the training required in the knowledge phase, because you need people looking after the AI roll out who are constantly engaging with your teams, understanding issues, celebrating success, making sure communication about how AI is working reaches everyone, and – importantly – keeping everyone updated on new developments and changes.
For every new pitch, product or process, someone needs to be in the room asking how you can use AI to improve this, speed it up, or do interesting creative things. That is the only way that AI will become embedded in what you do, and not fade away – as so many corporate projects do.
Who is that person going to be? The likelihood is that in the “desire” phase, internal champions will emerge who can do that job. This offers the advantage of credibility, as it’s someone who is both personally familiar and professionally respected, but don’t make the mistake of assuming this role is something that you can tack on to a day job. Unless your business is very small, doing all this is a full-time role, for at least a year after you have “completed” the rollout of the technology.
Key elements of reinforcing AI use:
- Celebrate and reward your team members for their achievements and contributions to the AI adoption process.
- Focus on improvements in employees’ experience, not just business benefits.
- Solicit and act on feedback to improve and refine your AI practices and policies.
- Reinforce the benefits and value of AI for your business and your team.
- Keep your team informed and updated on the latest AI trends and developments and encourage continuous learning and improvement.
AI is about people, not just machines
It would be a little remiss of me if I didn’t mention the launch this week of Microsoft’s consumer and small business AI play. Microsoft Copilot Pro integrates with Word, Outlook, PowerPoint, and OneNote, and offers suggestions, corrections, and insights based on the context and purpose of the document. It’s available now for anyone with a Microsoft 365 Personal or Family plan, at – by complete coincidence, I’m sure – almost the same monthly price as ChatGPT Plus.
Microsoft's approach with Copilot Pro and other AI services is primarily aimed at enhancing practical business and personal productivity, rather than implementing radical changes in the ways people work.
For me, the real short-term win from Large Language Models lies in their ability to clear away yawn-inducing office tasks. Copilot, especially, is a superstar at this. It helps you tackle all the routine stuff, and leaves people to get on with the creative work.
The people factor
AI tools like Copilot are not magic bullets that can solve all our problems, and they don’t magically do things on their own. And that highlights something that I suspect is getting neglected: long-term training and support for users in businesses.
One thing I have noticed in the creative industries time and again: technology often gets side-lined in learning and development. Sure, publishing companies have massively improved when it comes to fostering skills in leadership, coaching and other business areas. Yet, when it comes to embracing and learning new tech, training tends to be pretty old-fashioned, rolled out in a “one and done” approach. Updates are relegated to the odd email (which no one reads).
In the old days, that worked because the pace of change of technology was comparatively slow. A new version of QuarkXpress (yes I am that old) would come out every couple of years, you would do an update session and that was it.
But for cloud technologies this is not enough, and when there is a complete paradigm shift in tech – as we’re experiencing with AI – it risks putting you well behind more agile businesses.
According to a report by Oliver Wyman Forum, there is a significant gap between the skills employees believe they need training in, such as AI and big data, creative thinking, and leadership, and the training that employers are currently offering. 57% of employees think the training they are getting isn’t sufficient. And I think they’re right.
Of course, you can implement short-term fixes. But this is also a good opportunity to set up the way you train and the way your people learn for the long term. The next three to five years are going to see the pace of change accelerate, and you need to adapt the systems which allow your people to learn.
Continuous structured learning
Integrating AI tools into your team's workflow isn't a one-time event, but rather a journey of continuous learning. Begin by setting up a framework for ongoing training and support. This could mean anything from regular training sessions to providing access to online courses, interactive tutorials, and detailed manuals. It's not just about the initial learning curve; it's about keeping the knowledge fresh and relevant.
To foster a culture of continuous learning, encourage your team to see AI as an evolving toolset, one that offers new opportunities for growth and innovation. Promote an environment where experimentation is the norm, and learning from mistakes is valued. This approach helps to maintain a level of curiosity and enthusiasm for what AI can bring to the table.
Remember, the key to continuous learning is collaboration and knowledge sharing. By encouraging your team members to share their experiences and insights gained from using AI tools, you create a knowledge-rich environment. Regular team discussions, workshops, or even informal chat sessions can be great platforms for this exchange of ideas.
Not everyone is going to want to get on board. To get tech-hesitant people excited about AI, relate it to their interests and show how it simplifies work or hobbies. Demystify AI with jargon-free explanations and introduce them to easy-to-use tools through hands-on sessions. Sharing success stories of others who've overcome similar fears can motivate them. Ensure support is available for any questions, making their AI journey smooth and approachable, while focusing on its practical, real-world applications.
To put this into action, consider scheduling a monthly 'AI day' where team members can share new findings, discuss challenges, and brainstorm on how to better integrate AI into your workflows. Think about establishing a mentorship program where more experienced team members can guide others through learning about AI. And finally, make sure you are making use of your best communicators, not just the people who are really enthusiastic about AI.