Researchers from OpenAI, Cambridge, Oxford, and several other institutions have concluded that the best way to combat the malicious use of artificial intelligence (AI) is to develop more powerful AI and place it under government control. Their findings, published in a paper titled "Computing Power and the Governance of Artificial Intelligence," explore the challenges of governing the use and development of AI.

The researchers argue that controlling access to the hardware needed to train and run AI systems is crucial in determining who can possess the most powerful AI systems in the future. This implies that governments should establish systems to monitor the development, sale, and operation of hardware essential for advanced AI. Some governments already practice a form of "compute governance" by restricting the sale of specific GPU models to certain countries. However, the researchers believe that truly preventing malicious use of AI would require hardware manufacturers to build in "kill switches" that allow remote enforcement, such as shutting down illegal AI training centers.

The researchers caution, however, that poorly implemented compute governance could raise privacy concerns, cause economic harm, and concentrate power. Moreover, advances in decentralized compute, which allow models to be trained, built, and run more efficiently across distributed hardware, could make it harder for governments to locate and shut down the hardware behind illicit training efforts.

As a result, the researchers suggest that society must use more powerful, governable compute to develop defenses against the emerging risks posed by ungovernable compute, potentially leading to an arms race against illicit AI use.
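To make the "kill switch" idea concrete, one enforcement mechanism floated in compute-governance research is periodic licensing: a chip keeps accepting large workloads only while it holds a fresh, cryptographically signed permission from a regulator, so simply withholding renewals disables the hardware remotely. The sketch below is a minimal, purely hypothetical illustration of that pattern; the function names, the HMAC-based scheme, and the validity window are assumptions for demonstration, not details from the paper.

```python
import hmac
import hashlib
import json
import time

# Hypothetical shared secret provisioned into the accelerator at manufacture.
# A real scheme would use asymmetric signatures so the device holds no signing
# capability; HMAC keeps this sketch dependency-free and runnable as-is.
DEVICE_KEY = b"example-device-key"

def issue_license(device_id: str, valid_seconds: int, now: float | None = None) -> dict:
    """Regulator side: sign a short-lived permission to keep computing."""
    now = time.time() if now is None else now
    payload = {"device_id": device_id, "expires_at": now + valid_seconds}
    message = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def license_is_valid(license_blob: dict, device_id: str, now: float | None = None) -> bool:
    """Device side: refuse large jobs without a fresh, authentic license."""
    now = time.time() if now is None else now
    payload = license_blob["payload"]
    message = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, license_blob["tag"]):
        return False  # forged or tampered license
    if payload["device_id"] != device_id:
        return False  # license was issued for a different chip
    return payload["expires_at"] > now  # a stale license idles the hardware

if __name__ == "__main__":
    lic = issue_license("gpu-0042", valid_seconds=7 * 24 * 3600)
    print(license_is_valid(lic, "gpu-0042"))  # True: fresh license
    # False once the regulator stops issuing renewals and the license expires:
    print(license_is_valid(lic, "gpu-0042", now=time.time() + 30 * 24 * 3600))
```

In any deployed version, the check would run in firmware against a hardware-protected key, and licenses would be signed asymmetrically so that compromising one device could not forge permissions for others; the sketch only shows why expiring, verifiable permissions give a regulator a remote off switch without needing a live network connection to every chip.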