In the last few years, both crypto and AI have made remarkable advances.

Crypto has celebrated success with DeFi and, more recently, DeSci.

AI has achieved many successes of its own, including AlphaFold2 and ChatGPT.

In 2018, Peter Thiel highlighted the tension between the decentralizing force of crypto and the centralizing force of AI, coining the phrase “Crypto is libertarian, AI is communist.” Here, I would like to argue that combining the two could be a good thing.

Why? Because the skills and approaches developed by the crypto and security communities can unlock beneficial AI applications and reduce AI risks.

Allison Duettmann is the president and CEO of Foresight Institute, where she leads the Intelligent Cooperation, Neurotech, Space, Biotech and Health Extension, Fellowships, Prizes and Tech Trees programs.

Eliezer Yudkowsky is a leading figure in AI safety. He recently appeared on Bankless, a Web3 podcast.

Two things made this a surprising experience:

First, Eliezer believes we are rapidly moving toward artificial general intelligence, or AGI: an AI that can perform essentially all the tasks humans can. He also believes that AGI is likely to kill us all.

Second, when asked whether there is anything one could do to increase our small chance of survival, he encouraged people with a strong security mindset, from the security and cryptography fields, to try their hand at AI alignment.

Let’s decode that. We’ll first discuss why AGI is a concern, then zoom in on the promise the crypto (here meaning cryptography) and security community holds for mitigating some of AGI’s dangers.

Anyone who has been following the news can attest that AI is advancing rapidly every week. Here are three important developments in case you missed them:

The first is that there’s been a push towards more centralization in AI. For example, Microsoft invested in OpenAI and Google invested in OpenAI competitor Anthropic. DeepMind and Google Brain merged into one organization.

Read more: Michael J. Casey – Why Web3 and AI-Internet Belong Together

Second, there is a push toward more generalized AI. The recent paper “Sparks of Artificial General Intelligence: Early Experiments with GPT-4” showed that GPT-4 already demonstrates first instances of theory of mind, a measure typically used to assess human intelligence.

Third, AI systems are being pushed to become more autonomous. AutoGPT, for example, accomplishes increasingly complex tasks by continually re-prompting itself.

Metaculus, a forecasting platform, had predicted that AGI would arrive around 2039. By May, that date had shifted to 2031, an eight-year drop in roughly five months.

Why is AGI safety considered so difficult?

We can divide the AGI safety problem into three subproblems: alignment, security and coordination.

Alignment

How do we align AIs with our values? It’s easy to forget that our values are not even agreed upon among ourselves. Philosophers and mere mortals alike have debated ethics since the dawn of civilization, without converging on a single answer. Our current civilization instead rests on value pluralism: the idea that humans can coexist peacefully alongside values they disagree with. That works well for accommodating the diversity of human values, but it is difficult to encode in a single artificially intelligent agent.

Imagine for a moment that we had a rough idea of which moral values an AGI should have. We would then need to communicate those values to an AGI that does not share our evolution, mind architecture or context. When coordinating with other humans, we can rely on vast amounts of implicit knowledge, because we share a biology, an evolutionary history and a cultural context. With an AI, we cannot rely on any such shared context.

Read more: Michael J. Casey’s Web2 Lesson on AI: Decentralize for Humanity

A second problem is that, for almost any goal, it is useful to acquire more resources and to stay operational. An AI set on achieving its goal may therefore resist being shut down and seek out more resources. Specifying goals safely is technically hard, given the many ways an AI could achieve them, including through human injury, negligence, deceit and more.

Security

We can’t expect an AGI to behave reliably unless we have assurance that its underlying hardware and software are reliable. Because an AGI would give its creators a significant advantage, it would be a prime target for malicious hackers seeking to sabotage or reprogram it.

An unintended bug could also interfere with the AGI’s ability to achieve its goals, or the AGI itself could exploit weaknesses in its own code, for example by reprogramming itself in dangerous ways.

Unfortunately, we have built today’s entire multi-trillion-dollar digital ecosystem on insecure foundations. Our physical infrastructure, including the electric grid and nuclear weapons systems, runs on hackable computers. Even today’s insecure self-driving cars and drones could be hacked into killer bots. Cyberattacks like Stuxnet and SolarWinds may seem severe, but they are likely mild compared with future AGI-enabled attacks. Our inability to respond meaningfully to these attacks suggests we are not yet up to the AGI security task, which may require rebuilding much of our insecure infrastructure.


We may be able to pursue a multipolar scenario of superintelligence by leveraging the technologies and skills of the security and cryptography community

Coordination

It could take a long time to make progress on AGI alignment and security, so it is important that the actors working on AGI coordinate to buy that time rather than race ahead. Unfortunately, encouraging major AI actors to cooperate is difficult: even if all other actors honor an agreement, it takes only one to defect and race ahead, and whoever holds the first-mover advantage when AGI is built gains a decisive edge. Given the power an AGI system could confer on its owner, and the ability to deploy it unilaterally, this is a hard temptation to resist.
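To see why defection is so tempting, consider a stylized one-shot game between two AI labs. The payoff numbers below are invented purely for illustration; the point is only that racing strictly dominates cooperating, even though mutual cooperation is jointly better.

```python
# Stylized two-lab "race vs. cooperate" game. Payoff numbers are invented
# for illustration; what matters is their ordering, not their values.
payoffs = {  # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,  # careful, shared progress
    ("cooperate", "race"): 0,       # the other lab reaches AGI first
    ("race", "cooperate"): 5,       # decisive first-mover advantage
    ("race", "race"): 1,            # reckless race, everyone worse off
}

def best_reply(their_move):
    """The payoff-maximizing move, given what the other lab does."""
    return max(("cooperate", "race"), key=lambda m: payoffs[(m, their_move)])

# Whatever the other lab does, racing pays more (5 > 3 and 1 > 0),
# even though mutual cooperation (3 each) beats mutual racing (1 each).
print({their: best_reply(their) for their in ("cooperate", "race")})
# -> {'cooperate': 'race', 'race': 'race'}
```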

By now you may agree that AGI safety is a hard problem. So what does crypto have to offer?

The traditional concern, given the pace at which AI is progressing and the difficulty of making it safe, is that we are racing toward an AGI singularity scenario, in which a single AGI replaces human civilization, dominates the entire world and possibly kills humanity in the process.

By leveraging the technologies and skills of the security and cryptography community, we may instead be able to pursue a multipolar scenario of superintelligence, in which networks of AIs and humans securely collaborate, combining their local knowledge into a collective superintelligence for civilization.

Let’s explore how the crypto and security community could help tame AI risk and unlock beneficial new AI applications.


How can security and cryptography tame AI risks?

Red-teaming

Paul Christiano, a well-known AI safety researcher, suggests that AI needs more red-teaming, a term usually used in computer security for simulated adversarial attacks. In the AI context, red teams could, for example, search for inputs that cause catastrophic behavior in machine-learning systems.

The crypto community is well acquainted with red-teaming. Bitcoin and Ethereum both develop in an environment of constant attack, where an insecure project can mean losses worth millions of dollars in cryptocurrency, and projects routinely offer “bug bounty” payments to those who disclose vulnerabilities responsibly.

Systems that are not bulletproof get weeded out, leaving only the hardened ones in the ecosystem. Crypto projects are thus subjected to a level of adversarial testing that can inspire the development of AI systems able to withstand attacks that would destroy conventional software.
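As a minimal sketch of what automated red-teaming of an ML system might look like, consider a brute-force search over perturbed inputs for ones that trigger forbidden behavior. The model, the perturbation function and the catastrophe predicate below are invented placeholders, not any particular lab’s tooling.

```python
import random

def red_team_search(model, seed_inputs, perturb, is_catastrophic, budget=10_000):
    """Randomized search for inputs that trigger unwanted model behavior."""
    failures = []
    for _ in range(budget):
        candidate = perturb(random.choice(seed_inputs))
        if is_catastrophic(model(candidate)):
            failures.append(candidate)  # report for patching or retraining
    return failures

# Toy stand-in "model" that misbehaves when a trigger token sneaks in.
trigger = "rm -rf"
model = lambda text: "EXECUTE" if trigger in text else "SAFE"
seeds = ["list files", "show disk usage"]
perturb = lambda s: s + random.choice(["", " please", " " + trigger])

found = red_team_search(model, seeds, perturb,
                        lambda out: out == "EXECUTE", budget=1000)
print(f"{len(found)} catastrophic inputs found")
```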

Anti-collusion

Second, multiple AIs could eventually collude against humanity. In “AI Safety via Debate,” a popular proposed alignment technique, two AIs debate a topic and a human judges who wins. The human judge cannot, however, rule out that the two AIs are colluding, with neither promoting the true result.

Crypto has experience with collusion problems, too, such as Sybil attacks, in which a single entity runs many fake identities to gain undue influence over a network without being detected. The crypto community has put a lot of work into designing collusion-resistant mechanisms, and some of that work may hold lessons for AI collusion.
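The core Sybil-resistance idea can be sketched in a few lines: tie influence to a scarce, costly resource (stake) rather than to identities, so splitting into many fakes buys no extra power. The identities and stake amounts below are invented for illustration.

```python
from collections import defaultdict

def tally_one_identity_one_vote(votes):
    """Naive scheme: every identity counts equally, so it is Sybil-vulnerable."""
    counts = defaultdict(int)
    for _identity, choice in votes:
        counts[choice] += 1
    return dict(counts)

def tally_stake_weighted(votes, stake):
    """Influence is proportional to a costly resource, so creating
    fake identities does not increase an attacker's total weight."""
    counts = defaultdict(float)
    for identity, choice in votes:
        counts[choice] += stake.get(identity, 0.0)
    return dict(counts)

# An attacker splits 10 coins across 100 fake identities; one honest
# participant holds 50 coins under a single identity.
votes = [(f"fake{i}", "attack") for i in range(100)] + [("honest", "defend")]
stake = {f"fake{i}": 0.1 for i in range(100)} | {"honest": 50.0}

print(tally_one_identity_one_vote(votes))  # attacker "wins" 100 to 1
print(tally_stake_weighted(votes, stake))  # honest wins 50.0 to 10.0
```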

Checks and Balances

Anthropic, a competitor of OpenAI, is currently exploring “constitutional AI,” in which one AI supervises another using rules and principles provided by humans. This design is inspired by the U.S. Constitution, which sets up conflicting interests and limited powers within a system of checks and balances.

Security and cryptography professionals are familiar with such checks-and-balances arrangements, too. For example, the Principle of Least Authority, or POLA, states that an entity should have access to only the minimum information and resources it needs to do its job. That is a useful principle to carry into the development of more advanced AI systems.
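Here is a minimal sketch of POLA applied to an AI agent’s tool access. The wrapper class and the commented-out agent loop are hypothetical illustrations, not a real framework: the agent is handed one narrow capability instead of ambient authority over the whole machine.

```python
class LeastAuthorityFileTool:
    """Capability wrapper: an agent given this object can read exactly the
    whitelisted paths, and nothing else -- no shell, no network, no open()."""

    def __init__(self, allowed_paths):
        self._allowed = frozenset(allowed_paths)

    def read(self, path):
        if path not in self._allowed:
            raise PermissionError(f"capability for {path!r} was never granted")
        with open(path, encoding="utf-8") as f:
            return f.read()

# Grant only what the task needs (hypothetical path and agent loop).
sandboxed_fs = LeastAuthorityFileTool(allowed_paths={"/tmp/task_spec.txt"})
# agent.run(tools=[sandboxed_fs])   # the agent never sees the raw filesystem

try:
    sandboxed_fs.read("/etc/shadow")  # outside the grant
except PermissionError as err:
    print("blocked:", err)
```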

These are only three of many examples, but they give a glimpse of the security mindset prevalent in the security and crypto communities, a mindset that could help tackle AI alignment challenges.

Having looked at AI safety problems you could work on, let’s now look at some cases where crypto security innovations make AI more beneficial by enabling new applications.

Privacy-preserving AI

Traditional AI cannot tackle some problems, especially ones that involve sensitive information such as health or financial data, which is subject to strict privacy restrictions.

As cryptography researcher Georgios Kaissis has pointed out, these are areas where cryptographic and auxiliary methods, such as federated learning, differential privacy and homomorphic encryption, excel. These newer approaches to computation can handle large, sensitive datasets while preserving privacy, giving them an advantage over centralized AI.
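To make one of these techniques concrete, here is a minimal sketch of differential privacy’s Laplace mechanism, releasing the mean of a sensitive dataset so that no single record moves the output much. The health readings and parameters are invented for the example.

```python
import math
import random

def laplace_noise(scale):
    """One sample from Laplace(0, scale), via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lower, upper, epsilon):
    """Epsilon-differentially-private mean. Clipping to [lower, upper]
    bounds any one record's influence to (upper - lower) / n, which is
    the query's sensitivity and sets the noise scale."""
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clipped)
    return sum(clipped) / len(clipped) + laplace_noise(sensitivity / epsilon)

# Invented sensitive readings; the released mean is useful in aggregate
# but reveals little about any individual.
readings = [72, 88, 95, 110, 67, 80]
print(dp_mean(readings, lower=40, upper=200, epsilon=0.5))
```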

Local knowledge is a powerful tool

Traditional AI also struggles to source the local knowledge needed to solve machine learning (ML) edge cases, the kind of cases big data alone can’t make sense of.

Crypto ecosystems could help by creating marketplaces in which developers use incentives to attract better local data for their algorithms. Coinbase co-founder Fred Ehrsam, for example, suggests that blockchain-based incentives could be used to attract better data to blockchain-based data and ML marketplaces. While it might not be possible or safe to open-source the actual training of machine-learning models, such marketplaces could pay data creators a fair share for their contributions.
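One simple way such a marketplace might price contributions is leave-one-out valuation: pay each contributor in proportion to how much model quality drops without their data. This is a hypothetical sketch (a stand-in for richer schemes such as Shapley-value pricing); the contributors and the scoring function are invented.

```python
def leave_one_out_payouts(contributions, train_and_score, reward_pool):
    """Split a reward pool among data contributors in proportion to each
    one's marginal improvement to model quality (leave-one-out valuation)."""
    everyone = [x for data in contributions.values() for x in data]
    full_score = train_and_score(everyone)
    marginal = {}
    for who in contributions:
        rest = [x for other, data in contributions.items()
                if other != who for x in data]
        marginal[who] = max(full_score - train_and_score(rest), 0.0)
    total = sum(marginal.values()) or 1.0  # avoid division by zero
    return {who: reward_pool * m / total for who, m in marginal.items()}

# Toy stand-in for "model quality": the number of distinct examples.
score = lambda dataset: float(len(set(dataset)))
payouts = leave_one_out_payouts(
    {"alice": ["x1", "x2"], "bob": ["x2"], "carol": ["x3"]},
    score, reward_pool=100.0)
print(payouts)  # bob's duplicate data earns nothing; alice and carol split the pool
```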

Encrypted AI

In the long term, it might even be possible to build AI systems that are both more powerful and more secure by leveraging cryptographic methods.

Cryptography researcher Andrew Trask, for example, suggests using homomorphic encryption to fully encrypt neural networks. This would protect the network’s intelligence against theft and allow actors to collaborate on specific problems without revealing their inputs.

A homomorphically encrypted AI would perceive the outside world as encrypted. Humans controlling the secret key could then unlock individual predictions the AI makes, rather than letting the AI itself loose in the wild.
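Here is a deliberately toy sketch of the underlying idea, using the Paillier cryptosystem, which is additively homomorphic (proposals like Trask’s rely on fully homomorphic schemes and production-grade parameters). A server evaluates a linear model on encrypted features; only the key holder can decrypt the score. The primes, weights and features are demo-sized inventions.

```python
import math
import random

# Toy Paillier keypair. These primes are far too small to be secure;
# real systems use ~2048-bit moduli and audited libraries.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

add = lambda c1, c2: (c1 * c2) % n2   # E(a) * E(b) = E(a + b)
scale = lambda c, k: pow(c, k, n2)    # E(a)^k    = E(k * a)

# The server computes w . x + b on encrypted features without seeing them.
weights, bias = [3, 1, 2], 5
enc_x = [encrypt(v) for v in [7, 0, 4]]  # client's private features
enc_score = encrypt(bias)
for w, c in zip(weights, enc_x):
    enc_score = add(enc_score, scale(c, w))
print(decrypt(enc_score))  # 3*7 + 1*0 + 2*4 + 5 = 34; only the key holder sees this
```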

These are only three of the many ways that crypto can be used to unlock new AI use cases.


AI systems are also capable of controlling AI systems, as the examples of memes controlling memes and institutions controlling institutions suggest

Centralized AI is prone to single points of failure. Not only would it compress the complex plurality of human values into a single objective function; it would also be vulnerable to error, internal corruption and external attack. The secure multipolar systems of the security and cryptography communities, by contrast, hold great promise: they support value pluralism, provide red-teaming and checks and balances, and tend to be antifragile.

Cryptographic systems have their disadvantages, too. For example, techniques such as decentralized storage, functional encryption and adversarial testing require further progress, and they remain prohibitively slow and expensive. Decentralized systems are also less stable than centralized ones, and more susceptible to rogue actors who have an interest in corrupting or destroying the system.

Still, it is not too soon to think about how you could contribute to AI safety and help realize some of the benefits discussed here.

Eric Drexler, an early technology pioneer, summarized the promise of secure multipolar AI back in 1986: “The examples of memes controlling memes and institutions controlling institutions also suggest that AI systems can be controlled by AI systems.”

Edited by Ben Schiller.