Christmasnote, 25th December 2023

Technically, I should have written this yesterday, but somehow Christmas Eve feels like an even less appropriate time to be writing than Christmas Day.

Christmas coincides with my birthday, something that’s been a bane and a boon over the years. On one hand, it opens the opportunity of a REALLY BIG PRESENT as a joint Christmas and birthday one, and one I can open a few days before everyone else gets theirs. Although as a child, there were always a few aunties and uncles who managed to sneak in the same present they would have bought me for Christmas anyway.

On the other hand, it’s always meant that my birthday was overshadowed by the big day itself. In my teens and early 20s, this didn’t matter: my birthday was just an excuse for friends to start the Christmas week’s drinking early.

But as friends and family have got older, my birthday has tended to get forgotten, or pushed to one side. Happily, that has coincided with getting old enough that each passing year is treated with as much suspicion as optimism. Your best days might be ahead of you, but there are a hell of a lot fewer of them than you would like.

Getting old also means that the old acquisitive urges lessen, which makes buying presents for me harder and less obvious. Kim chose wisely, of course, because she puts a great deal of thought into it.

Hence, spending my birthday evening somewhere near Stratford, seeing ABBA Voyage.

I’ve always loved Abba. Old enough to remember when their singles were new and much-anticipated, even as a child I loved the melancholy in their songs. Abba have two modes: full-on joy (Dancing Queen) or heartbreak (Knowing Me, Knowing You). No band in the world has ever managed to hit both modes so well.

I’ve argued in the past that Abba’s greatest hits are better than The Beatles’, and I still think that’s true. There’s something to be written about how Abba are queer in a way that The Beatles never could be. But that’s another essay, and one that I am probably not entirely qualified to write.

Of course, Voyage is holograms: but after about five minutes, you stop thinking about that and just enjoy it. And it’s wonderful: genuinely the best “live” show I’ve seen. It might have made hundreds of millions of pounds, but you can see where the money went in setting it up. Even the building was built specially for the show.

I cried. It’s a mark of the breadth of the music I like that the last band who made me cry live was Van Der Graaf Generator, when they reformed in 2005 and I went to their first show back.

But perhaps there’s a connection. I cried at VDGG because they were a band I had loved, who I never thought I would have the chance to see. Abba, despite the physical absence of the protagonists, felt in some way the same.

Anyway, if you get a chance, go and see it. It’s good.



Removing cloud storage providers from macOS

One of Apple’s most annoying habits is changing things to make it harder for users to “make mistakes”. Almost without exception, these changes actually degrade the user experience in one way or another.

A case in point is the changes that Apple made a while ago to how third-party cloud storage providers such as Dropbox, OneDrive, and Proton Drive create and store their files. In the past, these were held in common or garden directories in the user’s folder – so, for example, OneDrive used to store its files in /Users/ianbetteridge/OneDrive. You could even change this location if you wanted to put it somewhere else.

That, obviously, was too simple for Apple. Now, the “approved” way of being a cloud storage provider is to store files in ~/Library/CloudStorage. This uses the macOS File Provider extension rather than custom kernel extensions, and Apple has indicated it will get rid of custom kernel extensions at some point. It hasn’t happened yet, and some storage providers – notably Nextcloud – still use the old method.

The drawback of this is that the CloudStorage directory is, like everything in ~/Library, hidden. You can’t get to it without either holding down Option while opening the Go menu in the Finder or using the terminal. That means if you remove a cloud provider’s drive from the Locations section of the Finder sidebar, you have a fun time trying to find those files again.
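
If you just want to look inside it, a single terminal command will open the hidden folder in a Finder window (assuming your provider uses the default location):

    # Reveal the hidden CloudStorage folder in the Finder
    open ~/Library/CloudStorage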

More importantly, if you remove a provider, it’s hit or miss whether your files will also be removed. I’ve seen instances where deleting a cloud storage provider’s application also asks if you want to delete the locally stored files. But more often, those files will just stick around, in a directory you can no longer easily access, hogging disk space.

That’s fine as long as you have infinite disk space and don’t cache many files locally. But I like my files local where possible, which means if I delete, say, OneDrive (which has tens of gigabytes of files in it) I just lose that disk space.

However, I hear you say, can’t you just navigate to them and delete the directory? Well, no: Apple (helpfully, again) locks those directories against deletion in the Finder. You can delete the files in them, but the folder itself sticks around like a bad smell. And if you reinstall the cloud storage provider, things seem to break: when I tried it, the folder refused to sync and, of course, couldn’t just be moved; when files did appear to sync, they wouldn’t open because the Finder was confused about their sync status.

Thank you, Apple. You have replaced well-written kernel extensions with your own “universal” version, which actually works less reliably. Well done, people.

Thankfully, you can completely delete such a folder. You just have to go to the terminal and do a little bit of Unix.

  1. Open a terminal and go to the CloudStorage folder – for me, this means typing “cd /Users/ianbetteridge/Library/CloudStorage”, but you will obviously have to replace that with your username.
  2. Type “ls” – this gives you a list of the directories in there. For example, I have folders called OneDrive-IanBetteridge and Dropbox, for the two cloud storage providers I use.
  3. Now the dangerous bit. Type “sudo rm -r NameOfDirectoryToDelete” (replacing, of course, “NameOfDirectoryToDelete” with the name of the folder you want to delete). The Terminal can autocomplete names, so you can just type the first few letters of a unique name, hit tab, and it will fill in the rest.
  4. Hit return, and you will be asked for your password. Type it in, hit return again, and that directory and all its contents are gone from your Mac, for good, with no real possibility of getting them back. So really do make sure that you’re deleting the directory you mean to.
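
Put together, the whole session looks like this – a sketch using my own folder name from above, so substitute your own:

    cd ~/Library/CloudStorage            # the hidden cloud storage folder
    ls                                   # list the provider directories
    sudo rm -r OneDrive-IanBetteridge    # permanent deletion – check the name first!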

And that is what I mean about Apple’s ability to make things less usable. If I was able to just navigate to the folder in the Finder and delete it normally, it would be in the Bin and so, if I made a mistake, easier to recover. But because Apple’s first instinct every time is to “protect” users and “make it easier”, you need to go to the command line and use a Unix tool which is far more user-hostile and dangerous.

Thanks Apple.


Weeknote, Sunday 17th December 2023

I’ve been thinking a lot about how I use computers this week. As I have written before (at great length) I still use computers like I was a full-time technology journalist, which means the tech is the point. I’m playing around a lot, but not necessarily getting a lot done.

So this week, I put a bit of thinking time into both the why and the what. Why I do this is pretty simple: I like playing around with tech, and while I was a full-time journalist, it was literally my job to use it and get to know it better. But what I also had to do was write about it, communicating what I found to other people. I don’t have to do that any more, but I would like to do it again.

The what was thinking about what tools I need to do the productive and creative things that I enjoy doing (and that make me some money). And that list turned out to be simple: a browser, a word processor, a few other things. I’m going to write a longer post about this, but actually sitting down and writing some thoughts on paper about it made me feel a lot calmer.

Three things which you might want to read

  1. I am shocked, shocked I tell you, that corporations have been using the cover of inflation to pad their margins and add to their bottom line. And I was told it was all greedy workers wanting to, you know, be able to buy groceries.
  2. No, just because you have paid millions of pounds for a train doesn’t mean you own it. You’ve simply taken on the responsibility of paying inflated fees to the company that made it when it needs servicing. Didn’t you get the memo?
  3. I really enjoyed John Scalzi’s post about abandoning Twitter. There are some interesting points comparing engagement on Bluesky and Mastodon in there, too, if that’s your kind of thing.

What I’ve been reading

One of my bad habits is book bouncing. I’m like a magpie: I start on one thing and bounce to a new, shiny thing. And as I have a massive backlog of books to read, it’s easy to bounce from one to another.

All of which is a long-winded way of saying that this week I have been reading about five books, with no actual substantial reading done. I promise to be good next week.

What I have been writing

Other than this, absolutely nothing. It has not been a productive week.


What the Epic vs Google case means for content

A games company and a search giant tussling over Fortnite doesn't sound like it has much to do with publishing. But it begins a process which could see publishers get more revenue.

There has been a lot of controversy over Google and Facebook “stealing” content from publishers, who have been making that accusation for a long time. They claim that Google's search engine and news aggregator infringe their copyrights and deprive them of revenue, particularly in a world where more and more answers appear on Google’s results pages rather than traffic being pushed to publishers.

The argument is that Google is using publisher content without paying them or asking for their permission, and that this is unfair and illegal. They also complain that Google is dominating the online advertising market and squeezing out their competitors. Some publishers want Google to pay them for displaying snippets of their articles or linking to their websites, or to stop using their content altogether.

Google, of course, argues that it is providing a valuable service to both publishers and users. It says that it is helping publishers reach a wider audience and drive more traffic to their websites, while also giving users quick and easy access to relevant and high-quality information. Google also points out that it respects the choices of publishers and allows them to opt out of its platforms or customize how their content is displayed.

Gannett, the publisher of USA Today, is not convinced. It has sued Google for allegedly monopolizing the advertising technology market, arguing that Google's broad control of the ad tech market has hurt the news industry, as online readership has grown while the proportion of online ad spending taken by publishers has decreased.

Gannett isn’t the only one. The Daily Mail alleges that Google has too much control over the online advertising market, which has resulted in newspapers seeing little of the revenue that their content produces. The lawsuit claims that Google controls the tools used to sell ad inventory, the space on publishers' pages where ads can be placed, and the exchange that decides where ads will be placed. The Daily Mail argues that this lack of competition depresses prices and reduces the amount and quality of news available to readers. The lawsuit also alleges that Google “punished” publishers who "do not submit to its practices”.

A lot of this hinges on copyright, and what amounts to effectively extending its provisions to cover something that’s previously been considered fair use: showing snippets of publisher content to let users decide if they want to click through to the content.

Similarly to Cory Doctorow, I would argue that extending copyright like this is foolish, and ultimately won’t benefit publishers and the rest of the creative industries. What will benefit us is removing the choke points that big tech companies like Google and Facebook own, and that world took a significant step forward this week.

Yesterday, a federal court jury ruled in favour of Epic Games in its lawsuit against Google, marking a significant victory for the game developer. The jury found that Google's Android app store, Google Play, uses anticompetitive practices that harm consumers and software developers.

Epic Games had accused Google of abusing its dominant position as the developer of Android to strike deals with handset makers and collect excess fees from consumers. Google collects between 15% and 30% for all digital purchases made through its shopfront. Epic tried to bypass those fees by charging users directly for purchases in the popular game Fortnite; Google then removed the game from its store, which led to the lawsuit.

The jury agreed with Epic on every question it was asked to consider, including that Google has monopoly power in the Android app distribution and in-app billing services markets, that Google did anticompetitive things in those markets, and that Epic was injured by that behaviour. What the remedy will be is yet to be determined. But even if it is confined solely to Epic, it’s a step towards breaking the stranglehold that Google and Apple have on the app stores and making the 15–30% tithe they take a thing of the past.

Why does this matter to publishers? Because it opens the possibility of publishers getting more control over the way their content is distributed to smartphones, and ultimately taking more revenue by, in effect, running their own app stores just for content.

Being able to have an app store just for your publications would mean an immediate increase in revenue. Currently, between 15% and 30% of every in-app subscription or micropayment goes to the tech giants. If publishers could bypass this fee, they would retain a larger portion of their sales, potentially leading to significant financial gains.

This, as Doctorow highlighted, is where Google and Apple have been taking money from publishers – not by “stealing content”, but by stealing revenue. And it has the added bonus of allowing publishers to get a closer relationship with their customers, something that the app store intermediary model removes. Customers won’t be buying from Apple or Google, with the platform passing the money on to you: they will be buying directly from the people who make the content.

Of course, there are also challenges and risks associated with running an app store. These include the technical and financial resources required to develop and maintain it, and the need to ensure security and privacy for users. Publishers would also have to convince users to switch from established platforms like Google Play and the Apple App Store, which takes more investment in marketing.

Publishers will also need to spend more on discovery, because it’s unlikely that either Apple or Google would ever promote a competing app store. But that’s a small price to pay for restoring the direct relationship you have with customers.



Pluralistic “If buying isn’t owning, piracy isn’t stealing”

Pluralistic: “If buying isn’t owning, piracy isn’t stealing” (08 Dec 2023) – Pluralistic: Daily links from Cory Doctorow:

In Poland, a team of security researchers at the OhMyHack conference just presented their teardown of the anti-repair features in NEWAG Impuls locomotives. NEWAG boobytrapped their trains to try and detect if they’ve been independently serviced, and to respond to any unauthorized repairs by bricking themselves.

If you ever needed to see an example of quite how insane the “IP protection” laws are, this is probably it.


The MacBook Pro

The 16in MacBook Pro which was effectively replaced by my MacBook Air M2 has been sitting in a corner for a while. I had wiped it completely – something that’s a bit of a saga in its own right – and intended to sell it.

The buyer I had in mind wasn’t able to take it as, unfortunately, they had some financial mishaps, and I would rather not sell it via eBay or classified ads. So it has just sat there doing nothing.

I decided to set it up and use it for a while, just to remind myself of what it was like. It’s from the last generation of Intel machines, and I bought it not long before the announcement of the M-series chips. Although it was the lowest-end 16in MacBook Pro, it’s still a pretty good computer and it seemed a shame for it to be doing nothing.

I’m glad I did, because it’s reminded me how much I like a big-screen laptop. It’s nowhere near as good for either performance or battery life as the Air, but it’s still more than fast enough for everything I need to do. And as I am unlikely to stray far from the house with it, I don’t need to worry too much about the battery.

Now I just have to think about a name for it because the default “Ian’s MacBook Pro” seems a little bit soulless.

Installing Brew

Whenever I get a new Mac or resurrect an old one, I start from scratch rather than reinstall from a backup. This lets me work out which applications I actually need. Because I like to try out many applications, I end up with a lot of software on my machines which I don’t actually use much.

The days when this mattered from the perspective of system maintenance are long gone. Most applications are no longer spraying extensions, libraries, or even (lord help us) DLLs all over your system. Even Linux has self-contained application installs now, thanks to technologies like AppImage, Flatpak and Snap.

But it’s still a waste of disk space and feels inelegant, so I set everything up with a clean slate and only install what I know I’ll actually use.

One thing that always gets installed on any Mac is Brew, the package manager, which is the de facto standard for installing Unix apps on an Apple computer. macOS is, of course, based on Unix, but the default setup doesn’t include the kind of software which usually comes as standard on other Unix-like systems – utilities like ffmpeg, for example.

You can install them, though, and Brew makes it easy. It’s a command-line tool which works in the same way as a regular Linux package manager, like DNF on Fedora or APT on Debian derivatives. Once you have installed Brew itself using a single line of commands, you can type brew install and the name of the software you want, and it will do all the installation you need.
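
At the time of writing, that single line is the bootstrap script from brew.sh – check the site for the current version before pasting it in:

    # Install Homebrew itself (the one-liner from https://brew.sh)
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

    # After that, installing a package takes one line, for example:
    brew install ffmpeg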

Brew lets you fill the holes which Apple has left. For example, the first thing I install with it is wget, which isn’t part of standard macOS and which I find very useful. I also add yt-dlp so I can download video from YouTube and other services, as well as get_iplayer to tap into the BBC’s archives.
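
So my first few minutes on a fresh Mac usually look something like this – the URLs below are placeholders rather than real downloads:

    # The utilities I always want, in one go
    brew install wget yt-dlp get_iplayer

    wget https://example.com/some-file.pdf              # fetch a file from the web
    yt-dlp "https://www.youtube.com/watch?v=VIDEO_ID"   # save a video locally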

There’s a lot more you can do with Brew, and if you are used to the command line I recommend it.


Weeknote, Sunday 3rd December 2023

How did it get to be December already? And how did it get so cold?

The Byte archive

I was a fairly religious reader of Byte magazine from the early 1980s until it finally bit the dust as a print publication in 1998. I always loved that it wasn’t focused on a single platform, but on “small computers” as a whole.

It also had the kind of deep technical content which I loved. If you wanted to know about new processors, the transputer, or something even more esoteric, Byte was a great place to keep informed.

It also had Jerry Pournelle. Science fiction writer, conservative, and holder (in later life) of some dubious views, Jerry was nonetheless one of the most influential early computer journalists. I loved his columns, which stretched out to 5,000 words or so per issue. They were written from the perspective of an ordinary computer user — albeit one who had the kind of knowledge required to run a computer in the days of the S100 bus and CP/M.

Thankfully, the Internet Archive has every issue of Byte, scanned and neatly labelled. Annoyingly though, there isn’t a single collection which has every issue in it, which means it’s not easy to just download everything.

And having local copies is vital for me, as I use DEVONthink for research, and it wants a local set of PDFs. So I have started putting together the definitive collection of every single issue, and once I’ve done it I will put them somewhere online, so people can download the whole set. It’s big – my incomplete version is about 7GB, and I estimate the full set is about 10GB – but at least they will be there.

This took quite a while this week, but I'm pleased with the results.

Chanel

I went to see the Chanel show at the V&A – I don’t really like Chanel’s clothes that much, but her accessories are amazing and she had a really fantastic eye for patterns.

Seeing the collection emphasised that once she created the classic suit, so much of what she did was just more of the same. Milking a hit isn’t necessarily a bad thing: but the small card which notes she “attempted to extend the suit from day to evening wear” is a bit of a giveaway. It’s not that she just made more suits – but she was more than happy to keep churning out endless slight variants in a way which made her a lot of money.

It was a little disappointing that the exhibition basically skips over the nine years when she could not work in France because she was widely believed to have collaborated with the Nazis. She had an affair with a German officer and used his connections to protect a relative. Literally one sign, with no more than 30 words on it, and then skipping merrily on to her return in 1954.

In fact, there is more space devoted to the one document from 1943 which lists her as working with the resistance, although there is no documentation of exactly what she did with them (and no one remembers).

There is, of course, far more documentation listing her as a Nazi agent. She definitely benefitted from the Germans’ Aryanization laws, which let her get control of her perfume business from the Jewish Wertheimer brothers.

There’s no doubt that Chanel collaborated, and that her high-placed contacts (Churchill, Duff Cooper, and many others) protected her after the war. None of this is mentioned, perhaps because once you understand what she did and what she was, it’s much less likely that you will just want to admire the pretty clothes.

I don’t think it’s possible to understand Chanel-the-person without considering that period of her life. And the exhibition doesn’t have the excuse that it’s solely about her influence on fashion (there’s surprisingly little which contextualises her in that sense). It ends when she dies, so it's not about Chanel the brand or even really her legacy.

In that sense, it’s a massive contrast to Diva, which is also on at the V&A and which managed to reduce me to tears when I saw it. Diva is a brilliant bit of curation in ways that Chanel is not.

However, if you do go to the V&A, get yourself a piece of the pear and caramel cake. It’s really rather fine.

Three things you should read this week

  1. The End of Elon Musk. In any rational world, Musk’s performance at the Dealbook conference would be the end of his career. It probably won’t be, unfortunately. But, as Magary notes, Musk “appeared both high and made of plywood”. He does not seem like a well man, and I don’t say that either lightly or with any pleasure.
  2. Speaking of Musk, the Cybertruck is here1, and predictably it’s pricier and has less range than he claimed. Oh, and while the sides are bulletproof, as Musk said, the windows are not, which may prove an issue if someone is actually trying to kill you.
  3. And sticking with the theme of “people who really should grow up”, Basecamp lost a customer thanks to DHH’s nonsense. Why is it that people who crow loudest about “keeping politics out of work” so often bring their politics to work? Of course, what they actually mean is “keep your politics out of work”. It’s the same as Elon “Free Speech” Musk. Free for them, not for you.

This week I have been reading…

The news that greenhouse gas emissions have been soaring rather than reducing ended the week on a sour note for me, but it makes it more obvious than ever that capitalism isn’t going to deliver a future for humanity. So reading Tim Jackson’s Post Growth has been pretty timely. Highly recommended.


  1. Not actually here till 2024, or 2025 if you want the cheap model, and not here at all outside the US because it doesn’t meet any reasonable safety regulations.


Open extensions on Firefox for Android debut December 14 (but you can get a sneak peek today) | Mozilla Add-ons Community Blog

Open extensions on Firefox for Android debut December 14 (but you can get a sneak peek today):

Starting December 14, 2023, extensions marked as Android compatible on addons.mozilla.org (AMO) will be openly available to Firefox for Android users.

But not, of course, for iOS, because Apple doesn’t allow companies to use any rendering engine other than Safari’s WebKit. And Apple also hates the idea of extensions that aren’t themselves applications, so don’t expect them to make the lives of extension developers easy once the EU forces them to open things up a little.


How to get a glimpse of the post-Google future of search

What does the search engine of the future look like? Forget 10 blue links...

You can break down the creative process into three big chunks: research, creation and revision. What happens in each part depends largely on the kinds of content you're creating, the platforms you are making the content for, and many other factors.

Like every journalist, I spent a lot of time using search on the web to help with that research phase. I was quick off the mark with it, and I learned to adapt my queries to the kinds of phrases which delivered high-quality results in Google. My Google-fu was second to none.

But that was the biggest point: like all nascent technologies, I had to adapt to it rather than the other way around. Google was great compared to what came before it, but it was still a dumb computer which required human knowledge to make the most out of it.

And Google was dumb in another way too: apart from spelling mistakes, it didn't really help you refine what you were looking for based on the results you got. If you typed in “property law” you would get a mishmash of results for developers, office managers and homeowners. You would have to do another search, say for “property law homeowners”, to get an entirely different set of results that were tailored for you.

Google got better at using other information it knows about you (your IP address, your Google profile) to refine what it shows you. But it didn't help you form the right query. It wouldn't ask you “hey, what aspects of property law are you interested in?” and give you a list of more specific topics.

What's more, what it “knew” about you was pretty useless. You couldn't, at any point, tell it something which would really help it give you the kinds of results you wanted. You couldn't, for example, tell it "I'm a technology journalist with a lot of experience, and I favour sources which come from established sites which mostly cover tech. I also like to get results from people who work for the companies that the query is about, so make sure you show those to me too. Oh, and I'm in the UK, so take that into account."

Google still isn't like that now. Partly that's down to the web itself being a much worse source of information. But that feels like a huge cop-out from a company whose mission is to “organise the world’s information and make it universally accessible and useful”. It sounds like what it is: a shrug, a way of saying that the company's technology isn't good enough to find "the good stuff".

The search engine of the future should:

  • Be able to parse a natural language query and understand all its nuances. Remember how in the Knowledge Navigator video, our professor could ask just for “recent papers”?

  • Know not just the kind of information about you that's useful for the targeting of ads (yes Google, this is you) but also the nuances of who you are and be able to base its results on what you're likely to need.

  • Reply in natural language, including links to any sources it has used to give you answers.

  • If it's not sure about the kind of information you require, ask you for clarification: search should be a conversation.

For the past few weeks, I've been using Perplexity as my main search engine. And it comes about as close as is currently possible to that ideal search engine. If you create content of any kind, you should take a look at it.

Perplexity AI allows users to pose questions directly and receive concise, accurate answers backed up by a curated set of sources. It's an “answer engine” powered by large language models (including both OpenAI's GPT-4 and Anthropic's Claude 2). The technology behind Perplexity AI involves an internal web browser that performs the user's query in the background using Bing, then feeds the obtained information to the AI model to generate a response.

Basically, it uses an LLM-based model to create a prompt for a conventional search engine, does the search, finds answers and summarises what it's found in natural language, with links back to sources. But it also has a system it calls (confusingly) Copilot, which provides a more interactive and personalised search experience. It leverages OpenAI's GPT-4 model to guide users through their search process with interactive inputs, leading to more accurate and comprehensive responses.

Copilot is particularly useful for researching complex topics. It can go back and forth on the specific information users need before curating answers with links to websites and Wolfram Alpha data. It also has a strong summarisation ability and can sift through large texts to find the right answers to a user's question.

This kind of back-and-forth is obviously costly (especially as Copilot queries use GPT-4 rather than the cheaper GPT-3.5). To manage demand and the cost of accessing the advanced GPT-4 model, Perplexity AI limits users to five Copilot queries every four hours, or 600 a day if you are a paying “Pro” user.

If you're not using Perplexity for research, I would strongly recommend giving it a go. And if you work for Google, get on the phone to Larry and tell him your company might need to spend a lot of money to buy Perplexity.


Latenote, Monday 27th November 2023

Between getting ridiculously excited about the goings-on at OpenAI, I didn't get a lot of writing done this week. There are definitely times when too much is going on in the tech world, and my old habits die hard: I have to keep up with it all.

I wrote a post on Substack with my take on it, from the perspective of the longer-term impact on creative professionals. And, given how fast things were moving, I ended up rewriting it three times. That was a good reminder not to cover breaking news in that newsletter!

In case you're interested, the focus of that newsletter is the three-to-five year perspective on how technology will impact on what we occasionally call “the creative industries”. That includes magazine publishing, of course, but also writing and creativity more broadly. Hopefully, it should be interesting.

On Sunday, we went out with the wonderful and super-clever Deb Chachra, who has just published her book How infrastructure works (and there's a great review of it here if you are interested). We tempted Deb out of London on a trip to Dungeness, which has both Derek Jarman's cottage and Dungeness A and B nuclear reactors. What's not to like about art and infrastructure?

And more art on Sunday night, as we went down to Folkestone for a talk by the brilliant and wise Jeremy Deller. If you don't know Deller's work, honestly, where have you been for the last 20 years? This is the third time we have done something Deller-related this year, having seen him before in London and also seen Acid Brass. 2023: Year of the Deller.

The three things which most caught my attention

  1. Commiserations to my old comrades in SEO, who are dealing with some pretty turbulent times. I promise that I didn't sabotage Google.
  2. Bill Gates wrote a long post about the way AI is going to change the way you use computers. Gates is right – large language models are just the precursor to what might look from some angles like the end of application software altogether.
  3. Bloomberg looked at the way Elon Musk has been radicalised by social media, adopting a world-view that's completely in thrall to what we would have called the alt-right not that long ago.

Things I have been writing

There were three… no, actually four drafts of my post about what was going on at OpenAI and why you should care. I am never doing essays on breaking news again.

To give myself a break from all things Orford, I picked up a short story that I had left to one side, about a very strange doctor. Might finish that this week.


What the heck is going on at OpenAI (and why should I care?)

Confused? You should be. I'm deliberately not looking at Techmeme so I don't have to update this post for the fifth time.

Twenty-four hours ago, this was a thoroughly different post. Heck, twelve hours ago, it was a different post.

One of the things I told myself when making this Substack was that I wouldn’t focus on current events. My focus is on the longer term: the three-to-five-year time frame, for publishers, communications professionals and other assorted nerds.

But the shenanigans at OpenAI over the weekend suckered me in, and now I have had to rewrite this post three times (and whatever I write will probably be wrong tomorrow). Still, the drama goes on.

The drama that’s been happening at OpenAI does matter and might be a turning point in how AI and large language models develop over the coming years. It has some implications for Google – which means it is relevant for publisher traffic – and Microsoft – which means it is significant for the business processes which keep everything flowing.

What’s happened at OpenAI?

If you’ve not been following the story, here’s a timeline created by Perplexity (about which I will have more to say in the future). But the basics are that OpenAI’s board dismissed Sam Altman, its co-founder and CEO, alleging he had been less than truthful with them. Half the company then decided they wanted to leave. Microsoft’s Satya Nadella then claimed Altman would be joining his company, only to walk that back later in the day. Now Altman is going back to OpenAI as CEO, but not on the board, and there will be an “independent investigation” into what went on, something that might not totally exonerate him.

Confused? You should be. Everyone else is. Partly this drama comes down to the unusual structure of OpenAI, which at its heart is a non-profit company that doesn’t really give two hoots about growth or profits or any of the things most companies do. Partly it’s down to Altman basically pushing ahead as if this wasn’t true, then realising too late that it was.

What’s the long-term impact on future AI development?

OpenAI has been at the forefront of developing the kind of conversational large language models which everyone now thinks of as “AI”. It’s fair to say that before the June 2020 launch of GPT-3, LLMs were mostly of interest to academic researchers rather than publishers.

And a huge number of tools have been built on top of OpenAI’s technology. By 2021 there were over 300 tools using GPT, and that number has almost certainly gone up an order of magnitude since. And of course, Microsoft is building OpenAI tech into everything across its whole stack, from developer tools to business apps to data analysis.

If there’s one company that you don’t want to start acting like a rogue chatbot having a hallucination, it’s OpenAI.

And yet, because of Microsoft’s investment in the company and commitment to AI, it probably matters a lot less than it would have if this schism had happened three or four years ago. The $13bn Microsoft has put in since 2019, for an estimated 49% stake in the company, and the fact that it is integrating OpenAI tech into everything it does, mean it has a lot to lose (and Satya Nadella does not like losing).

Because of this, I think the past few days won’t have much impact on the longer-term future of AI. In fact, it could end up being a good thing, as it means Microsoft has shown it will step in should OpenAI start to slip.

The greatest challenge for Microsoft was that, although it had perpetual licenses to OpenAI’s models and code, it didn’t own the tech outright, and it didn’t have the people in house. And, when you’re betting your company’s future on a technology, you’re always in a better position if you own what you need (something that publishers should take note of).

Partners are great, but if you’re locked into a single partner, and they have what you require, you’re never going to be the driver of your fate. Now, though, if Altman and the gang join, Microsoft effectively owns all it needs to do whatever it wants. It has the team. It has the intellectual property. Everything runs, and will continue to run, on Azure, and it has the financial muscle to invest in the huge amount of hardware required to make it available to more businesses.

The big question for me is how all this impacts on Google over the next few years. If Altman and half of OpenAI end up joining Microsoft, I think it weakens Google substantially: at that point, Microsoft owns everything it needs to power ahead with AI in all its products, and the more Microsoft integrates AI, the stronger a competitor it will be.

If, on the other hand, Altman goes back to OpenAI with more of a free hand to push the technology further and harder, Microsoft still benefits through its partnership, but to a lesser degree.

If I was running Google, I would be calling Aravind Srinivas and asking how much it would take to buy Perplexity. But that’s another story, maybe for next week.


"Journalism is picking up the phone"

Remembering the craft and process of original reporting can help build a loyal audience.

So far this week, I have looked at a couple of strategies for creating stand-out content over the coming years: hands-on reviews and real-life stories. There is a third area, and in a sense it’s about going back to the future and focusing on something that never truly went out of fashion: original reporting.

Back in 2008, my reserve arch enemy Danny O’Brien and I were debating what the difference was between blogging and “proper” journalism, and Danny ended up liking one of the ways I put it: that “journalism is when you pick up the phone”. Even then, that didn’t mean a literal phone – email was the hot communications thing. But it meant, as Danny put it, “journalism requires some actual original research, rather than just randomly googling or getting emailed something and writing it up as news.”

That’s the core of original reporting, and as Danny also pointed out, a great deal of what passes as editorial doesn’t meet that standard (opinion columnists of the UK media, stop looking so shifty).  

Original reporting in any topic area is about uncovering truths, providing context, and delivering stories that matter to audiences. AI, while adept at aggregating and rephrasing existing information, lacks the ability to conduct investigative journalism, engage in ethical decision-making, and provide the human empathy that is often central to impactful storytelling. I would consider myself broadly an optimist about the developing capabilities of AI, and even I don’t think it’s likely to be able to do this in my lifetime.  

And “picking up the phone” is definitely having something of a renaissance. Take, for example, the series that The Verge is currently working on under the label of “we only get one planet”. Digging into how Apple and others add to the mountain of e-waste while claiming to be on top of their environmental efforts takes a lot of work, and importantly, original research and interviews. The Verge might not be physically picking up the phone, but they’re more than living up to the spirit.  

Obviously, investing in original reporting is expensive, and it can’t just be a moral imperative. It has to be a sound business strategy, too. First, audiences appreciate its value. According to a 2019 Pew Research survey, “about seven-in-ten U.S. adults (71%) say it is very important for journalists to do their own reporting, rather than relying on sources that do not do their own reporting, such as aggregators or social media. Another 22% say this is somewhat important, while just 6% say it is not too or not at all important.”

Original reporting can elevate a publisher's brand reputation and recognition, which can be a key to unlocking more direct traffic. In a saturated market, having a distinct journalistic voice and a reputation for in-depth reporting can be a significant differentiator.

Publications like The New York Times and The Guardian have successfully leveraged their reputations for quality journalism to build robust subscription or contribution-based revenue models, with The Guardian hitting record annual revenue this year. And, importantly for its long-term profitability, nearly half its traffic is direct (and its biggest search terms are branded ones).

One thing that’s worth noting: The Guardian’s strategy was a three-year plan. Do you have a three-year plan to diversify revenue, have a more direct relationship with your audience, and leave yourself less vulnerable to the whims of Google or Facebook?



Telling human stories: where AI ends and people begin

The second area where humans can do a better job than an LLM: real life storytelling

One of the best parts of my last year working at Bauer was getting to know the team which works on real life content. Real life, sometimes called true life or reader stories, focuses on stories derived from ordinary people caught up in extraordinary events – usually not the national news, but their own personal dramas.

There are many magazines whose focus is entirely real life, and you will have seen them on many supermarket shelves with multiple cover lines, often focused on shocking stories. But the key part about them, and the thing which differentiates them from tabloids, is that the stories are those told by the people involved in the drama. It's not third-person reporting: it is focused on first-person experience.

And now a confession: before I worked with that team, and I suspect like many journalists, my view of real life wasn't all that positive. I considered it to be cheap, and pretty low-end.

How wrong I was.

I worked with the team creating the content to implement a new planning system, which needed to capture every part of their story creation process. What I learned was how thorough that process is, and how much human care and attention they had to bring to telling what were sometimes traumatic stories, working directly with the subject.

I don't think I have ever worked with a team that had a more thorough legal and fact-checking process, and I came away a bit awed by them. I ended up thinking that if all journalists operated with their level of professionalism and standards, the industry would be in a much better place.

Bringing the human into the story

Where does AI come into this? I talked earlier this week about how injecting more of a human, emotional element into reviews was a way to make them stand out in a field that AI is going to disrupt. Real life is a perfect example of a topic where it's difficult to ever see a large language model (LLM) being able to create the story.

An LLM can't do an interview, and because of the incredible sensitivity of the stories being told, I wouldn't trust a machine to write even a first draft of it. But there are aspects of the way that real life content is created which, I believe, can give lessons to every kind of journalism.

First, whatever your topic area, telling the human story is always going to be something that humans do better than machines. Build emotion and empathy into telling a personal story, rather than relating just the facts. That doesn’t just mean technique: yes, use emotional arcs, and yes, show don’t tell, but technique alone won’t bring across the way that the subject felt when going through whatever event they are describing.

On a three-to-five-year timescale, I would be looking to shift human journalists into telling more of these kinds of stories, regardless of topic area. Remember that humans are empathic storytellers, and focus on the emotion of the story. So, think about how you can change your content strategy to be more focused on the human story.

The process is the practice

Don't, though, be tempted to work on these kinds of stories with an ad hoc process. Process is important in journalism – but it is crucial if you want to do real life stories well.

To do this well, make sure you codify and document the process to a high level. Documenting the process is something journalists often push back on because it's seen as stifling creativity, but that's not true at all. In fact, a documented process frees up time to focus on creative tasks, rather than reinventing the wheel with every story.

And that is where you can start to think about how to use LLMs to streamline your processes and make them move faster. But this is a business process problem, rather than a creative one.

For example, if your pitching process involves creating a summary of a story, an LLM can write the summary – there's no need to waste a human's time to do it. Can you write a specialist GPT to check if a story has been used before? Can you use an LLM to query your content management system for similar stories you may have run in the past?
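
As a flavour of how lightweight that first idea can be, here's a minimal sketch using OpenAI's chat completions API from the command line – the model, prompt and pitch text are all illustrative, and you'd need your own API key:

    # Hypothetical pitch-summary helper, calling OpenAI's API with curl
    curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-4",
        "messages": [
          {"role": "system", "content": "Summarise this story pitch in two sentences for the editorial meeting."},
          {"role": "user", "content": "Paste the full pitch text here."}
        ]
      }'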

If you are thinking about how to be a successful publisher in three to five years, you need to be looking at the process. If it's not documented – in detail – then make sure that's done. And it can't be a one-off, because a process is never a single entity fixed for all time. New technologies and new, more efficient practices will come along, and someone within your organisation needs to be responsible for keeping it up to date.

So, ask yourself some questions:

  • Who, in my company, is directly responsible for documenting and maintaining our editorial and audience development processes?

  • Where are they documented?

  • How often are they maintained?

  • Are they transparent? Does everyone know where they are?

Once you have a fully documented process, you can start to interrogate it for points where AI can be used to speed things up, where using natural language queries to a specialist model can improve the work. That way, you can leave humans to do the work they're best at: emotion, and storytelling.


What kinds of content can humans do better than AI?

Sometimes, you just need the human touch...

What kinds of content can humans do better than AI? The last few posts here have, I have to admit, been a bit of doom and gloom. I’ve looked at how conversational AI is going to squeeze search traffic to publisher sites, and at how adopting AI for content generation will remove the key competitive advantage of publishers. 

But there are areas of content creation where publishers can use their ability to do things at scale and the talent they have to make great work that audiences will love.

I’ve broken this post out into three parts, covering three different kinds of content. Today, I’m going to look at one which is close to my heart: reviews. Tomorrow and Thursday I’ll look at two other examples where humans can win.

Doing reviews right

One of the points that I made last week was that affiliate content, in particular, was susceptible to the shift to conversational ways of working with computers. However, that doesn’t mean that reviews are going to disappear. Certain types of article are likely to remain an area where humans will continue to produce better content for other humans for the foreseeable future.

For many sites, creating content for affiliate purposes has involved a lot of round-up articles, often created at least in part with what gets called “desk-based research”. You are not reviewing a product you have in your hand: you are researching everything about it that a consumer could possibly need to know, and summarising it helpfully.

I’ve sometimes argued this was OK in certain circumstances, as long as you flag it and the amount of work that goes into the article is high. Just casting around for whatever is top-rated on Amazon doesn’t cut it because a reader can do that quickly themselves. But if you’re saving someone hours of time in research, you’re still performing a valuable service for them.

That kind of content isn’t going to survive the increased use of conversational AI because one thing that LLMs will be excellent at is ingesting lots of data and combining it into a cogent recommendation. LLMs can read every piece of Amazon feedback, every spec sheet and every piece of manufacturer data faster and more accurately than any human can. If your content is just research, it’s not going to be viable in the world of AI.

What will work is direct first-person experience of the product, written to focus on the less tangible things about it. An LLM can read a car spec sheet and tell you about its torque, but it can’t tell you how it feels to accelerate around a corner. An LLM can look at a spec sheet for a laptop, but it can’t tell you how good the keyboard is to type on for extended periods.

If your editorial teams are focused on what I used to call “speeds, feeds and data”, then part of your approach should be to shake up the way they write and get them closer to a more personal perspective. One way to do this is to change style.

Back when we launched Alphr at Dennis, one of the first changes I made to editorial style was to stop using the traditional UK tech plural in reviews (“we tested this and found blah”) and shift to first person (“I tested this and found blah”). Shifting into first person forces the writer into a more subjectively human perspective on the product they’re looking at. It frees the writer from an overly objective point of view into a more personal experience, and that is something which will survive the world of LLMs. Don’t just say what the specs are: say what it feels like, as a human being, to use this product.

Tomorrow, I’m going to look at the second area I think is a clear “win” for human-generated content: the often maligned area of real life stories.


Weeknote, Sunday 12th November 2023

This felt like a busy week, perhaps because it actually was

On Monday I had a call with Peter Bittner, who publishes The Upgrade, a newsletter about generative AI for storytellers which I highly recommend. It was great to chew the fat a little about what I've been writing about on my newsletter, and also to think about a few things we might do together in the future.

Then on Thursday I caught up with Phil Clark, who has also recently left his corporate role and is working on a few interesting projects. Plus I spoke to Lucy Colback, who works for the FT, about a project she's working on.

On Friday we headed down to Brighton for the weekend. Kim was doing a workshop on drawing (of course) and I took the opportunity to catch up with a couple of old friends, including my old Derby pal Kevin who I've known for 40 years. Forty bloody years. How does that even happen?

The three things which most caught my attention

  1. Here's something positive: the story of Manchester Mill, a subscription-based local news email in Manchester that's doing more than breaking even, while remaining independent, creating quality news, and not taking advertising.
  2. Tilda Swinton is just one of my favourite people. That's all.
  3. Mozilla wants to create a decentralised social network, based on Mastodon, that's actually easy for people to use.

Things I have been writing

Last week's Substack post looked at Apple's old Knowledge Navigator video and how computing is heading towards a conversational interaction model. This has some big implications for publishers, particularly those who have focused on giving "answers" to queries from Google: when you can effectively send an intelligent agent out to find the things you want via a conversation, web pages as we know them are largely redundant.

I wrote a post about Steven Sinofsky's criticism of regulating AI. I think Sinofsky is coming at this from a pretty naive perspective, but not one which is atypical of the kind of thinking you'll find amongst American tech boosters. It was ever thus: I feel when writing articles like this that it's just revisiting arguments I was having with the Wired crowd in the late 1990s. The era when "the long boom" was an article of faith, the era when George Gilder was being listened to seriously.

And that's not surprising, really. The kind of people who are loudly shouting about the need for corporate freedom to trample over rights (Marc Andreessen, Peter Thiel) grew up in that era and swallowed the Californian ideology whole. So did a lot of radicals who should have known better.

Things I have been reading

Having seen Brian Eno perform last week I'm working my way through A Year with Swollen Appendices, which is a sneaky book: the diary part is only a little over half of it, so just when you think you're coming to the end you have a lot of reading left to do. It's a good book though. Picking that up means I have had to put down Hilary Mantel's A Memoir of my Former Self, but that will be next on the list.


John G on Monica Chin's review of the Surface Laptop Go 3

Daring Fireball: Monica Chin on the Microsoft Surface Laptop Go 3: ‘Why Does This Exist?':

A $999 laptop that maxes out at 256 GB of storage and has a 1536 × 1024 display — yeah, I’m wondering why this exists in 2023, too. And I’m no longer wondering why Panos Panay left Microsoft for Amazon.

The $999 MacBook Air has 256GB of storage, 8GB of RAM, and a three-year-old processor. I’m kind of wondering why that exists in 2023, too.

Not to say that the Surface Laptop Go 3 is any good – it isn’t – but Microsoft isn’t the only company that has some bizarre pricing at the “low” end of its laptop range.


What a 36-year-old video can tell us about the future of publishing

The future is arriving a little later than expected...

I have had the best life. Back in 1989, I left polytechnic with my first-class honours degree in humanities (philosophy and astronomy) and walked into the kind of job which graduates back in the 80s just didn't get: a year-long internship with Apple Computer UK, working in the Information Systems and Technology team – the mighty IS&T.

It paid a lot better than the jobs my friends had in record shops. And although it was only temporary – I was heading back into higher education to do a PhD in philosophy, working on AI – it suited me. Without it, I wouldn't have had my later career in technology journalism. The ability to take apart pretty much any Mac you cared to name became very useful later on.

Apple treated new interns the same as every other new employee, which meant that there was an off-site induction for a couple of days when we were told about the past, present, and future of Apple. The only part of the induction that I remember is the future because that was when I first saw the Knowledge Navigator video.

If you haven't seen Knowledge Navigator, you should watch it now.

Why is a 36-year-old concept video relevant now, and what does it have to do with publishing? The vision of how humans and computers interact which Knowledge Navigator puts forward is finally on the cusp of coming true. And that has profound implications for how we find information, which in turn affects publishers.

There are three elements of the way Knowledge Navigator works which, I think, are most interesting: conversational interaction; querying information, not directing to pages; and the AI as proactive assistant. I'm going to look at the first one: interaction as conversation, and how close we are to it.

Interaction as conversation

The interaction model in Knowledge Navigator is conversational. Our lecturer talks to the AI as if it were a real person, and the interaction between them is two-way.

Lecturer: “Let me see the lecture notes from last semester. Mhmm… no, that's not enough. I need to review the more recent literature. Pull up all the new articles I haven't read.”

Knowledge Navigator: "Journal articles only?”

Lecturer: "uhh… fine.”

Note one big difference from the current state of the art in large language models: Knowledge Navigator is proactive, while our current models are largely reactive. Bing Chat responds to questions, but it doesn't ask me to clarify my queries if it isn't certain about what I'm asking for… yet.
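
To make that difference concrete, here's a minimal sketch in Python of what a proactive clarification step might look like. Everything here is hypothetical – the `respond()` function and the lookup table are invented for illustration, not how any real assistant works:

```python
# Hypothetical sketch: an agent that asks a clarifying question when a
# query is ambiguous, instead of guessing and answering immediately.

CLARIFICATIONS = {
    # ambiguous term -> the question a proactive agent would ask first
    "articles": "Journal articles only?",
}

def respond(query: str) -> str:
    for term, question in CLARIFICATIONS.items():
        if term in query.lower():
            return question  # proactive: clarify before answering
    return f"Here is what I found for {query!r}."  # reactive fallback

print(respond("Pull up all the new articles I haven't read"))
# -> "Journal articles only?"
print(respond("Let me see the lecture notes from last semester"))
# -> "Here is what I found for ..."
```

A real system would need to detect ambiguity rather than look it up in a table, but the shape of the exchange – clarify first, answer second – is the thing Knowledge Navigator has and today's chatbots mostly don't.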

That aside, the way conversation happens between our lecturer and his intelligent agent is remarkably similar to what you can do with Bing Chat or Bard now. The “lecture notes from last semester” request is a query over local data, which both Microsoft and Google are focused on for their business software, Microsoft 365 and Google Workspace. The external search for journal articles is the equivalent of interrogating Bing or Bard about a topic.

In fact, Bing already does a pretty good job here. I framed a question similar to our lecturer's, about deforestation in the Amazon, to see how it would do:

Not bad, eh?

The publishing model of information – the one which makes publishers all their money – is largely not interactive. The interaction comes at Google's end, not the publisher's. Our current model looks like this:

  1. A person interacts with Google, making a query.

  2. They click through to a result, landing on a page which (hopefully) gives them an answer.

  3. If they want to refine their query, they go back to Google and repeat the process – potentially ending up on another page.

Interaction as conversation changes this dynamic completely, as an “intelligent” search engine gives the person the answer and then allows them to refine and converse about that query immediately – without going to another page.
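
To see the structural difference, here's a toy sketch in Python of the two loops. The `answer()` function is a stand-in for a search engine or chatbot – entirely hypothetical, it just echoes its inputs – but it shows where the state lives in each model:

```python
def answer(query: str, context: list[str]) -> str:
    """Stub engine: echoes the query plus any carried conversational context."""
    if context:
        return f"Answer to {query!r}, narrowed by earlier turns: {context}"
    return f"Answer to {query!r}"

# Search model: stateless. Every refinement is a brand-new query, and the
# user stitches the criteria together themselves, page by page.
print(answer("best 14in laptop", context=[]))
print(answer("best light 14in laptop with good battery life", context=[]))

# Conversational model: stateful. Each turn is appended to the context,
# so a refinement builds on everything already said, with no new page needed.
context: list[str] = []
for turn in ["best 14in laptop",
             "light, with good battery life",
             "who is offering a good deal?"]:
    print(answer(turn, context))
    context.append(turn)
```

In the second loop, the state lives with the conversation rather than in the person's head – which is exactly what removes the need for a page built around a single keyword.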

Have a look at this conversation with Bard, where I am asking for a recommendation for a 14in laptop:

OK, that sounds good. Now let's drill down a little more. I want one which is light and has a good battery life:

That ZenBook sounds good: so who is offering a good deal?

By contrast, a standard article of the kind which publishers have been pumping out to capitalise on affiliate revenue (keyword: “best 14in laptop”) is a much worse experience for users.

And at the end of that conversation with Bard, I'm going to go direct to one of those retailers, with no publisher involvement required.

If that isn't making you worry about your affiliate revenue, it should be.

The model of finding information which search uses, based on queries and a list of suggested results, is pretty well embedded in the way people use the internet. That's particularly true for those who grew up with the web, now aged between 30 and 60. It may take time for this group to move away from wanting pages to wanting AI-driven conversations which lead to answers. But sooner or later, they will move. And younger demographics will move faster.

That, of course, assumes that Google will leave the choice to users. Google may instead decide it wants to keep more of “its” users' time and put more AI-derived answers directly at the top of searches, in the same way that Microsoft has with Bing. Do a keyword search on Bing, and you already get a prompt to have a conversation with an AI at the top of your results:

Once again, the best option for publishers is to begin the switch away from a content strategy which relies on Google search – and on the kinds of answer-focused keywords most susceptible to replacement by AI – towards content strategies which build a direct audience and a long-term brand relationship.

Treat search traffic as a cash cow, to be milked for as long as possible before it eventually collapses. In the world of the Knowledge Navigator, there's not going to be much room for simple web pages built around a single answer.


On Steven Sinofsky's post on regulating AI

Regulating AI by Executive Order is the Real AI Risk:

The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture at a time when the world would be best served by optimism and innovation

Sinofsky’s response is fairly typical of the AI boosters, and as always, it fails to understand the point of regulation. In particular, it fails to understand why an executive order is entirely the correct approach at this point.

Regulation exists so that we gain the benefits of something while ameliorating the risks. To use an area that probably makes sense to Americans: we regulate guns so we get the benefits of them (personal protection, national security) while avoiding the dangers (everyone having a gun tends to lead to a lot of gun deaths).

AI is the same: we should regulate it to ameliorate its dangers. Now, those dangers aren’t Terminators stomping around the world with machine guns. They are, instead, things like racial discrimination caused by intrinsic algorithmic bias; the implications for privacy of generative AI being able to perfectly impersonate a person; and the legal questions of accountability – if an AI makes a major error which leads to a death, for example, who exactly is responsible?

But hey, I guess tech ethics is the enemy, right?

So why an EO? In part, I think the AI boosters only have themselves to blame. You can’t go around saying that AI is the most transformative technology since the invention of the PC, stoking the stock markets by claiming the impact will all come in the next couple of years, and then be surprised when a government uses the tools it has to act expeditiously. Silicon Valley types constantly laugh at the slowness of the Federal government; complaining when it does something quickly seems a bit rich. “Move fast and break stuff”, sure – but not when it’s their gigantic wealth that might be the thing that gets broken.

Sinofsky also highlights the nay-sayers of the past, including posting some pictures of books which drew attention to the dangers of computers. The problem is some of those books are turning out to be correct: David Burnham’s The Rise of the Computer State looks pretty prescient in a world of ubiquitous surveillance where governments are encouraging police forces to make more use of facial recognition software, even though it discriminates against minorities because it finds it hard to recognise black faces. Arthur R. Miller may have been on to something, too, when he titled his book The Assault on Privacy.

Sinofsky gets to the heart of what ails him in a single paragraph:

Section I of the EO says it all right up front. This is not a document about innovation. It is about stifling innovation. It is not about fostering competition or free markets but about controlling them a priori. It is not about regulating known problems but preventing problems that don’t yet exist from existing.

To which I would respond: “great! It’s about time!”

There is a myth in Silicon Valley that innovation is somehow an unalloyed good which must always be protected and never regulated, lest we stop some world-shaking discovery. It doesn’t take 20 seconds of thinking – or any understanding of history – to see that’s not true. Yes, experimentation is how we learn, how we discover new things which benefit us all. But there are no spheres of knowledge, outside possibly the humanities, where it is completely unregulated. If you want to do nuclear research, good luck with getting a permit to run your experimental reactor in the middle of a city. If you would like to do experimental chemistry, you’re going to be on the wrong side of the law if you do it in your garage.

All of those things “stifle innovation”. All of them are entirely justified. Given the world-changing hype – created by technology business people – around AI, they really should get used to a little stifling too.

As for the idea that this is “preventing problems that don’t yet exist from existing”… that is precisely what we pay our taxes for. We spend billions on defence to prevent the problem of someone dropping big bombs on our cities. We pay for education so we won’t have the problem of a stupid population which votes in a charlatan in the future (why do you think the far right hates education?).

Good business leaders talk all the time about how proactive action prevents costly issues in the future. They scan horizons, and act decisively and early to make sure their businesses survive. The idea that the government should only react, especially when that’s usually too late, is just bizarre.

At one point, Sinofsky sings the praises of science fiction:

The best, enduring, and most thoughtful writers who most eloquently expressed the fragility and risks of technology also saw technology as the answer to forward progress. They did not seek to pre-regulate the problems but to innovate our way out of problems. In all cases, we would not have gotten to the problems on display without the optimism of innovation. There would be no problem with an onboard computer if the ship had already not traveled the far reaches of the universe.

It’s a mark of the Silicon Valley mindset that he appears to forget that this was all made-up stuff. 2001 wasn’t real. Star Trek was not real.

Sinofsky then spends some time arguing that the government isn’t “compelled” to act, as AI is actually not moving that quickly:

No matter how fast you believe AI is advancing, it is not advancing at the exponential rates we saw in microprocessors as we all know today as Moore’s Law or the growth of data storage that made database technology possible, or the number of connected nodes on the internet starting in 1994 due to the WWW and browser.

All well and good, but not true: a Stanford study from 2019 found that AI computational power was advancing faster than processor development, and that was before the massive boost from the current AI frenzy. Intel has noted that the speed at which AI programs can “train” themselves doubles every four months, compared with the 24 months that Moore’s Law predicted for processor speed.
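
Those two doubling periods sound closer than they are, because the growth compounds. A quick back-of-the-envelope calculation, taking the four-month and 24-month figures above at face value:

```python
# Annual growth implied by each doubling period: doubling every m months
# means growing by a factor of 2**(12/m) per year.
ai_doubling_months = 4       # the Intel figure for AI training, cited above
moore_doubling_months = 24   # the classic Moore's Law cadence

ai_per_year = 2 ** (12 / ai_doubling_months)        # 8x per year
moore_per_year = 2 ** (12 / moore_doubling_months)  # ~1.41x per year

print(f"Per year: {ai_per_year:.0f}x vs {moore_per_year:.2f}x")
print(f"Over three years: {ai_per_year ** 3:,.0f}x vs {moore_per_year ** 3:.1f}x")
```

Doubling every four months is eight-fold growth per year; over three years that is a factor of 512, against roughly 2.8 for Moore’s Law.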

Towards the end, of course, Sinofsky lapses into Andreessen-style gibberish:

The Order is about restricting the “We” to the government and constraining the “We” that is the people. Let that sink in.

Making “the people” synonymous with “extremely rich billionaires and their companies” is, of course, one of the tricks that the rich play again and again and again. AI is being created to enrich the already rich. It requires huge resources in computing power, which means my only option for accessing it is to rent time on someone else’s computer. It reinforces technofeudalism. Of course Silicon Valley, which wants to make sure all of us pay a tithe to it, loves it.

It’s time we had some assertion of democratic control over the forces that shape our lives. The Silicon Valley fat cats don’t like it. That, on its own, tells me that regulating AI is probably a good thing.