Regulating AI by Executive Order is the Real AI Risk:
The President’s Executive Order on Artificial Intelligence is a premature and pessimistic political solution to unknown technical problems and a clear case of regulatory capture at a time when the world would be best served by optimism and innovation
Sinofsky’s response is fairly typical of the AI boosters, and as always, it fails to understand the point of regulation. And in particular it fails to understand why an executive order is entirely the correct approach at this point.
Regulation exists so that we gain the benefits of something while ameliorating the risks. To use an area that probably makes sense to Americans: we regulate guns so that we get the benefits of them (personal protection, national security) while avoiding the dangers (everyone having a gun tends to lead to a lot of gun deaths).
AI is the same: we should regulate AI to ameliorate its dangers. Now, those dangers aren’t Terminators stomping around the world with machine guns. They are, instead, things like racial discrimination caused by the intrinsic biases of algorithms. They are the privacy implications of generative AI being able to perfectly impersonate a person. They are the legal questions of accountability – if an AI makes a major error which leads to a death, for example, who exactly is responsible?
But hey, I guess tech ethics is the enemy, right?
So why an EO? In part, I think the AI boosters only have themselves to blame. You can’t go around saying that AI is the most transformative technology since the invention of the PC, stoking the stock markets by claiming the impact will all land in the next couple of years, and not expect a government to use the tools it has to act expeditiously. Silicon Valley types constantly laugh at the slowness of the Federal government; complaining when it does something quickly seems a bit rich. “Move fast and break things”, sure – but not when it’s their gigantic wealth that might be the thing that gets broken.
Sinofsky also highlights the nay-sayers of the past, including posting some pictures of books which drew attention to the dangers of computers. The problem is some of those books are turning out to be correct: David Burnham’s The Rise of the Computer State looks pretty prescient in a world of ubiquitous surveillance where governments are encouraging police forces to make more use of facial recognition software, even though it discriminates against minorities because it finds it hard to recognise black faces. Arthur R. Miller may have been on to something, too, when he titled his book The Assault on Privacy.
Sinofsky gets to the heart of what ails him in a single paragraph:
Section I of the EO says it all right up front. This is not a document about innovation. It is about stifling innovation. It is not about fostering competition or free markets but about controlling them a priori. It is not about regulating known problems but preventing problems that don’t yet exist from existing.
To which I would respond: “great! It’s about time!”
There is a myth in Silicon Valley that innovation is somehow an unalloyed good which must always be protected and never regulated, lest we stop some world-shaking discovery. It doesn’t take 20 seconds of thinking – or any understanding of history – to see that’s not true. Yes, experimentation is how we learn, how we discover new things which benefit us all. But there are no spheres of knowledge, outside possibly the humanities, where that experimentation is completely unregulated. If you want to do nuclear research, good luck getting a permit to run your experimental reactor in the middle of a city. If you would like to do experimental chemistry, you’re going to be on the wrong side of the law if you do it in your garage.
All of those things “stifle innovation”. All of them are entirely justified. Given the world-changing hype – created by technology business people – around AI, they really should get used to a little stifling too.
As for the idea that this is “preventing problems that don’t yet exist from existing”… that is precisely what we pay our taxes for. We spend billions on defence to prevent the problem of someone dropping big bombs on our cities. We pay for education so that we won’t have the problem of a stupid population which votes in a charlatan in the future (why do you think the far right hates education?).
Good business leaders talk all the time about how proactive action prevents costly issues in the future. They scan horizons and act decisively and early to make sure their businesses survive. The idea that the government should only react, especially when reacting is usually too late, is just bizarre.
At one point, Sinofsky sings the praises of science fiction:
The best, enduring, and most thoughtful writers who most eloquently expressed the fragility and risks of technology also saw technology as the answer to forward progress. They did not seek to pre-regulate the problems but to innovate our way out of problems. In all cases, we would not have gotten to the problems on display without the optimism of innovation. There would be no problem with an onboard computer if the ship had already not traveled the far reaches of the universe.
It’s a mark of the Silicon Valley mind-set that he appears to forget the rather obvious point that this was all made-up stuff. 2001 wasn’t real. Star Trek was not real.
Sinofsky then spends some time arguing that the government isn’t “compelled” to act, as AI is actually not moving that quickly:
No matter how fast you believe AI is advancing, it is not advancing at the exponential rates we saw in microprocessors as we all know today as Moore’s Law or the growth of data storage that made database technology possible, or the number of connected nodes on the internet starting in 1994 due to the WWW and browser.
All well and good, but not true: a Stanford study from 2019 found that AI computational power was advancing faster than processor development, and that was before the massive boost from the current AI frenzy. Intel has noted that the speed at which AI programs can “train” themselves doubles every four months, compared to the roughly 24-month doubling of transistor counts that Moore’s Law describes.
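To put those doubling periods in perspective, here’s a back-of-the-envelope sketch. It simply compounds the two figures above – the four-month doubling claimed for AI training and the roughly 24-month doubling associated with Moore’s Law – taken as assumptions for illustration, not as measurements:

```python
# Back-of-the-envelope comparison of the two doubling rates mentioned above.
# Assumes the figures from the text: AI training speed doubling every 4 months,
# versus the roughly 24-month doubling associated with Moore's Law.

def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """Multiplicative growth after `months_elapsed` given a doubling period."""
    return 2 ** (months_elapsed / doubling_period_months)

horizon = 24  # look two years ahead

ai_growth = growth_factor(horizon, 4)       # 2**6 = 64x
moore_growth = growth_factor(horizon, 24)   # 2**1 = 2x

print(f"After {horizon} months: AI ~{ai_growth:.0f}x, Moore's Law ~{moore_growth:.0f}x")
# After 24 months: AI ~64x, Moore's Law ~2x
```

In other words, over a single two-year Moore’s Law cycle, a four-month doubling compounds to roughly 64×, against 2× for the processors. The gap is not marginal.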
Towards the end, of course, Sinofsky lapses into Andreessen-style gibberish:
The Order is about restricting the “We” to the government and constraining the “We” that is the people. Let that sink in.
Making “the people” synonymous with “extremely rich billionaires and their companies” is, of course, one of the tricks that the rich play again and again and again. AI is being created to enrich the already rich. It requires enormous computing resources, which means my only option for accessing it is to rent time on someone else’s computer. It reinforces technofeudalism. Of course Silicon Valley, which wants to make sure all of us pay a tithe to them, loves it.
It’s time we asserted some democratic control over the forces that shape our lives. The Silicon Valley fat cats don’t like it. That, on its own, tells me that regulating AI is probably a good thing.