"If Crypto Is Libertarian, Then AI Is Communist"

tech choice blockchain automation AI

I was listening to an interview with Peter Thiel, and he said something that caught my attention: "If crypto is libertarian, then AI is communist." This little nugget of wisdom helped me piece together some stray thoughts I'd had but never connected. The relevant portion of the interview is short but interesting; it starts at around 1 hour, 53 minutes, and goes on for four or five minutes.

Not only is he correct in that statement, but he's also correct in a secondary observation: crypto being "libertarian" is obvious, while AI being "communist" is not so widely said. Even before bitcoin existed, people had been waiting for the promise of decentralizing money, "cutting out" the big banks, and ultimately giving more power to the little guy. The concepts of cryptocurrency, or e-cash, and later "smart contracts" were going to enable individuals to do more without involving the government, the established financial system, or the usual old gatekeepers. It has been talked about for a long time. However, the idea that AI is more authoritarian is not in the public mind. Part of the reason is that no one would really describe a new technology in quite those terms, but part of it is also that AI is different now from when it first started.

A Brief (Subjective) History of AI

When I first heard about Artificial Intelligence, I envisioned a computer in a box that was capable of human-level thinking. Optionally, this man-made brain would be contained in an android body as depicted in science fiction - like Lieutenant Commander Data from Star Trek, or the droids from Star Wars. But whether inside a robot, or simply at an interactive TV screen in my home, each AI as I imagined it was a self-contained, simulated entity that could pass the Turing test. Just like in science fiction, it would have a personality (often including a bad sense of humour) and self-awareness. Sure, it might have direct access to the internet and/or a super-strong robot body, but it would otherwise be an individual among a large population of individuals.

However, I learned that it doesn't quite work that way. To illustrate, take the analogous example of human flight. For a very long time, we thought we could fly by mimicking birds - by having flapping wings that propelled the body forward while simultaneously creating lift. Although we learned many lessons in our failed attempts to copy birds, and although it is (now) technically possible to create a plane or a drone with flapping wings, our successful flying machines do something completely different. We studied the physics of air and gravity and motion, and created something that is different from birds, yet works for us.

In a sense, the opposite thing happened with AI. Originally, we tried to create really smart programs. We studied the rules of logic and information processing and mathematics, and built them into the code of our AIs. A piece of software that could play a game, for example, would have all the rules written in by a programmer, along with some optimal strategy algorithms. For tic tac toe this is easy: the game is small enough that a perfect player can be written exhaustively in a few lines of code. For a game like chess, writing in all the rules was still easy, but making the program strong enough to beat a human was very difficult; developers had to get creative with heuristics and shortcuts. In the case of Go, it was harder still, to the point where writing a program that could beat a professional player was considered practically impossible.
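To make the contrast concrete, here is a minimal sketch of that rule-based style - my own illustration in Python, not anything from the interview. Every rule of tic tac toe is hand-coded, and an exhaustive minimax search enumerates every possible outcome; no data and no learning are involved.

```python
# A rule-based tic tac toe player: every rule is hand-coded, and minimax
# simply enumerates the whole game tree. No data, no learning.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for `player`; X maximizes, O minimizes."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best

score, move = minimax(list(' ' * 9), 'X')
print(score, move)  # score is 0: perfect play from an empty board is a draw
```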

That is, until we taught a computer not how to think, but how to learn. Rather than tell the computer the best strategies to win at Go, developers programmed in the rules and the conditions for winning, and then had it play thousands upon thousands of games, altering its own strategy each time, until it learned the best ways to play. AlphaGo became the first program to defeat a professional Go player about three years ago, famously beating top player Lee Sedol in 2016. (The original AlphaGo also studied millions of human moves before its self-play phase; its successor, AlphaGo Zero, learned from self-play alone.)
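For contrast with the rule-based sketch above, here is a toy illustration of the learning approach, using a far simpler game than Go (Nim: 21 stones, take 1 to 3 per turn, taking the last stone wins). Only the rules and the win condition are coded in; the strategy emerges from self-play. This is a Monte Carlo-style tabular learner with arbitrary parameter choices, nothing like AlphaGo's deep networks and tree search, but it shows the shift in principle.

```python
# Teach it to learn, not think: only the rules of a toy game (Nim) and the
# win condition are coded in. The strategy emerges from self-play.
import random
from collections import defaultdict

Q = defaultdict(float)      # (stones_remaining, move) -> learned value
ALPHA, EPSILON = 0.1, 0.1   # learning rate, exploration rate (arbitrary)

def choose(stones, explore=True):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(moves)           # occasionally experiment
    return max(moves, key=lambda m: Q[(stones, m)])

for _ in range(50_000):                       # thousands upon thousands of games
    stones, history = 21, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # Whoever moved last won; propagate +1/-1 back through alternating turns.
    reward = 1.0
    for stones, move in reversed(history):
        Q[(stones, move)] += ALPHA * (reward - Q[(stones, move)])
        reward = -reward

# Optimal play is to always leave a multiple of 4, so from 21 take 1.
print(choose(21, explore=False))  # should print 1 after training
```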

AI Leads to Centralization

So better AI is no longer a question of the cleverness of your algorithms, but of the size of your datasets. The more data you have, the more intelligent your AI will be, and the race for the best AI is now essentially a race in data gathering. In the case of Go, it's about running more and more games until the best strategies emerge. This is great, because the game can be run completely virtually, and this kind of machine learning is ideal for parallel computing. Making a robot learn something physical, like playing tennis, takes a little more time, but it can work just the same.
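As a quick sketch of why this parallelizes so well: each simulated game is independent of every other, so games can be farmed out to worker processes and the results pooled afterwards. This is an illustrative toy (random games of the Nim variant from above), not how a production training system is built; real systems distribute the same idea across many machines.

```python
# Each simulated game is independent, so self-play data gathering maps
# cleanly onto parallel workers.
import random
from multiprocessing import Pool

def play_random_game(seed):
    """Play one random game of 21-stone Nim; return which player won (0 or 1)."""
    rng = random.Random(seed)
    stones, player = 21, 0
    while stones > 0:
        stones -= rng.choice([m for m in (1, 2, 3) if m <= stones])
        player ^= 1                 # next player's turn
    return player ^ 1               # the player who took the last stone won

if __name__ == "__main__":
    with Pool() as pool:            # one worker per CPU core by default
        winners = pool.map(play_random_game, range(100_000))
    print(f"player 0 won {winners.count(0)} of {len(winners)} games")
```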

At this point, it may be increasingly obvious that AI leads inevitably to centralization. If a small computer with a small dataset can do a good job at thinking, then a big computer with a big dataset will do an even better job. It is therefore in everyone's interest to pool data together into one giant database, and have all customers go to the big computer for services, while also feeding it more data.

A great example of this is image recognition. The folks at Google did something ingenious when they created their reCAPTCHA system. A CAPTCHA system prevents automated spam in comment threads by presenting some visually garbled text and asking the user to correctly type the letters, essentially testing whether the user is a human or a bot. Producing and operating such a system costs money, since someone needs to generate the images of garbled text, securely match them with the associated characters, and so on. reCAPTCHA made it free by having the users do the work. Instead of generating images of bogus distorted text, it took existing images of hard-to-read text from book digitization projects. This provides value to those projects, and improves the computer's ability to read text printed on a page. A few measures are taken to make sure the user is being honest, such as displaying two words: one known and one unknown. If the user correctly identifies the known word, their answer for the unknown word is recorded; combined with other people's answers for the same word, confidence in its transcription goes up. Later, the system was upgraded to include images from Google Maps' Street View. The effort of hundreds of thousands of people helps improve Google's image recognition AI, and helps websites prevent spam.
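Here is a rough sketch of how I understand the known/unknown word scheme; this is my own reconstruction, not Google's implementation, and the vote thresholds are invented for illustration.

```python
# My reconstruction of the known/unknown word scheme (not Google's code;
# the vote thresholds below are made up).
from collections import Counter

votes = {}  # unknown-image id -> Counter of submitted transcriptions

def submit(known_answer, known_truth, unknown_id, unknown_answer):
    """Verify the user against the known word; if they pass, record their
    answer for the unknown word as one vote toward its transcription."""
    if known_answer.strip().lower() != known_truth.lower():
        return False                              # likely a bot (or a typo)
    votes.setdefault(unknown_id, Counter())[unknown_answer.strip().lower()] += 1
    return True                                   # let the human through

def transcription(unknown_id, min_votes=3, min_share=0.75):
    """Accept a transcription once enough independent users agree on it."""
    counts = votes.get(unknown_id)
    if not counts:
        return None
    answer, n = counts.most_common(1)[0]
    if n >= min_votes and n / sum(counts.values()) >= min_share:
        return answer
    return None
```

Each verified human contributes one vote, so the work of transcription is spread invisibly across the whole user base.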

The best example I can think of is voice recognition. I remember learning as a kid about voice recognition software - "Dragon..." something or other. It came on a CD and ran completely offline on your computer. Just as with the other examples, it was simply a clever algorithm: it knew the various phonemes of the English language, and used some smart methods to guess the mapping from sounds to words. Today, it doesn't work like that. The voice assistant features of smartphones transmit the sound clips back to Apple, or Google, or Microsoft, where the big computers with their giant databases compare the sounds with known clips and spit out the right words.
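For contrast with today's cloud approach, here is a toy in the spirit of that old offline style: a local pronunciation dictionary mapping phoneme sequences to words, with no network involved. The miniature lexicon and greedy matcher are hypothetical stand-ins; real engines used statistical acoustic and language models.

```python
# A toy of the old offline style: a local pronunciation dictionary, no
# network. The lexicon and matcher here are hypothetical.
PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def decode(phonemes):
    """Greedily match the longest known phoneme run at each position."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):      # try longest match first
            word = PRONUNCIATIONS.get(tuple(phonemes[i:j]))
            if word:
                words.append(word)
                i = j
                break
        else:
            i += 1                                 # skip an unrecognized sound
    return " ".join(words)

print(decode(["HH", "AH", "L", "OW", "W", "ER", "L", "D"]))  # hello world
```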

The cloud approach works better. But the inevitable consequence is that more and more data, and more and more control, come under a single roof.

A final quote from the interview: "I do think it's not a coincidence along these lines that the Chinese Communist Party hates crypto and loves AI. ... In Silicon Valley, AI often just means a super smart computer that will leave all the humans behind. In China, it means a really smart computer that helps a few humans control the rest."


Comments

I would tend to agree with your analysis, but it seems that things are shifting... In this answer I am focusing more on the individual and on privacy than on AI in general. For instance, there now exist data-gathering marketplaces, where companies offer data to anyone who can afford it. Weird if you think about it - it even sounds a little illegal - but it is an attempt to break the "communism" model. A pretty interesting model, at least in the short term. Let me explain.

As you mentioned, Google's DeepMind lab used reinforcement learning to tackle the game of Go, analysing about 30 million moves from human players. The systems now mastering poker, by contrast, use counterfactual regret minimization and learn from trial and error. So: no more big dataset, and no third-party marketplace. A pretty disruptive idea! But let's push it further... (https://www.wired.com/2015/04/jeff-dean/)

What does this mean? As legislation in Europe moves forward on individuals' rights and privacy, like the right to be forgotten, we can imagine sovereignty over individuals' data in the not-too-distant future (let's take a wild guess and say 15 years). Large companies would not be impacted as much as most people think. Indeed, as the tech progresses, it is not too much of a stretch to imagine that by then they would be able to mimic individual behaviours and create a sort of cyber clone from a very limited number of data points. There are already experiments on this (see links below). What, then, would privacy mean? (https://www.technologyreview.com/s/407722/your-virtual-clone/, https://www.vice.com/en_us/article/8qmkxa/create-a-digital-clone-of-yourself ...)

To finish, cryptography and crypto-economics are, I believe, very different subjects with very different personas behind them. It is interesting to note that, on the one hand, the Crypto Anarchist Manifesto, published in 1988, is very politically and economically engaged, written in bold language such as "you have nothing to lose but your barbed wire fences!" On the other hand, the cypherpunks' manifesto, published in 1993, focuses on privacy (the word "privacy" is mentioned 22 times) and is written in academic language: "The Cypherpunks are actively engaged in making the networks safer for privacy. Let us proceed together apace." I am assuming here that Eric is agglomerating both movements, which is a mistake in my opinion. But to give him credit, it is hard to tell the difference now, with all the emerging companies, ICOs, and blockchain trends. Kind of a mess. (https://nakamotoinstitute.org/crypto-anarchist-manifesto/#selection-43.10-43.63 - https://www.activism.net/cypherpunk/manifesto.html)

In conclusion, to paraphrase you, I feel we are now at a crossroads where, as a society, we need to choose between a centralized model in which more and more control comes under a single roof, and a model of self-sovereign control over your own data - with not many models in between. A good time to be alive and to try to make a dent in the universe!

Note: I know I am pushing it here, but I would like to point out that Nick Szabo first used the concept of the smart contract in Extropy #16, published in 1996. By the way, that could be a great subject for a second article: Extropy and the transhumanist movement. (http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart_contracts_2.html)
Written on Tue, 27 Nov 2018 13:12:19 by arthur
Great read, Brandon. I think it is super helpful to look for the political affordances of technologies, and you are doing a bang-up job. It's funny, though, because I see these backwards. In crypto, there is a fundamental resignation towards selfishness. We couldn't figure out how to govern, so we tapped into primordial greed to keep a Commons safe. Now, instead of people dynamically deciding what happens to money, the algo does. So while humans get to be libertarian in the sense that they have escaped the tyranny of men, they now bow to code. On the other hand, AI, precisely because its utility scales with sharing, affords a fundamentally selfless society. Here, decentralized data gathering points to a sharing economy, where we can imagine a large dataset that is agglomerated AND distributed, rather than centralized. In the end, you, me, and Peter Thiel are all capable of reading our own politics into the techs we favor and avoid. So long as we stay critical of them, even if we see things differently, we'll stay a little safer!
Written on Tue, 30 Oct 2018 10:21:53 by William Robinson
