OpenAI co-founder Sam Altman is looking to raise an astounding $7 trillion to address the global shortage of semiconductor chips driven by surging demand for generative artificial intelligence (GenAI). But Altman's project extends beyond chips: he believes the world needs far more AI infrastructure, including fab capacity, energy, and data centers, than is currently planned, and that building massive-scale AI infrastructure with a resilient supply chain is crucial for economic competitiveness.
While scaling on that order of magnitude suggests a focus on GenAI, the ultimate goal is artificial general intelligence (AGI): systems that match or surpass human intelligence across a wide range of tasks. Altman argues that instead of criticizing the endeavor, skeptics should contribute to securing our collective future.
Nevertheless, it is essential to weigh the risks and challenges of AI systems before scaling them. AI relies heavily on data, and that reliance introduces critical risks: data may be incomplete, erroneous, or misused, producing inaccurate results. With large language models (LLMs), the problem is amplified, because they can ingest poor or outdated information and present it as accurate and plausible.
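To make the data risk concrete, here is a minimal sketch of the kind of pre-ingestion quality checks a training pipeline might apply. The record fields ("text", "last_updated") and the freshness threshold are illustrative assumptions, not any production system's rules:

```python
# A crude illustration of data-quality gating before training or retrieval.
# Field names and the staleness cutoff are hypothetical.
from datetime import datetime, timezone

def is_usable(record: dict, max_age_days: int = 365) -> bool:
    """Reject records that are incomplete or stale."""
    text = record.get("text", "")
    if not text or len(text.split()) < 5:    # incomplete or near-empty
        return False
    updated = record.get("last_updated")
    if updated is None:                      # unknown provenance
        return False
    age = datetime.now(timezone.utc) - updated
    return age.days <= max_age_days          # reject outdated data

records = [
    {"text": "A complete, recently updated document about chip supply.",
     "last_updated": datetime.now(timezone.utc)},
    {"text": "", "last_updated": datetime.now(timezone.utc)},
]
print([is_usable(r) for r in records])  # [True, False]
```

Real pipelines go much further, with deduplication, toxicity filtering, and provenance tracking, but even crude gates like these catch the incomplete and stale records described above.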
Algorithmic bias is another significant problem with AI systems. It is well documented that biased models lead to discrimination, and although legislators have urged tech companies to address the issue, it remains unresolved. Other problems associated with GenAI, including hallucinations, misinformation, lack of explainability, scams, copyright infringement, user privacy, data security, and environmental impact, have likewise not been adequately mitigated.
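One way to surface such bias before deployment is a simple fairness audit. The sketch below computes a disparate impact ratio over hypothetical loan-approval outcomes; the numbers, group labels, and the 0.8 threshold of the common "four-fifths rule" are illustrative, and real audits use richer, domain- and law-specific metrics:

```python
# Minimal sketch: auditing model outcomes for disparate impact.
# The data below are hypothetical; real audits use domain-specific
# fairness metrics and legally relevant group definitions.

def disparate_impact(approved: dict, total: dict) -> float:
    """Ratio of approval rates between least- and most-favored groups."""
    rates = {g: approved[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval counts broken down by demographic group.
approved = {"group_a": 72, "group_b": 45}
total = {"group_a": 100, "group_b": 100}

ratio = disparate_impact(approved, total)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.62

# The "four-fifths rule" commonly flags ratios below 0.8.
if ratio < 0.8:
    print("Potential adverse impact: review features and training data.")
```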
The Biden administration and the European Union have both called for responsible AI. In October 2023, President Joe Biden signed an executive order laying out requirements for companies developing AI, spanning cybersecurity, privacy-preserving techniques, consumer protection, and worker safety. OpenAI committed to managing AI risks and adhering to responsible AI, but it has yet to demonstrate the actionable responsible AI it promised.
The European Union's AI Act likewise emphasizes transparency and auditability in AI development, but neither regime prescribes practical mechanisms for achieving them. Blockchain technology could help make responsible AI auditable, and OpenAI should consider implementing such solutions, and demonstrating appropriate auditability, before scaling its systems.
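As one illustration of what auditable AI could look like, the sketch below chains model decisions into a tamper-evident log using hashes, the core mechanism behind blockchain auditability. The record fields and model name are hypothetical, not any OpenAI or regulatory standard:

```python
# Minimal sketch of a tamper-evident audit trail for model decisions,
# built as a hash chain. All field names here are illustrative.
import hashlib
import json
import time

def append_record(log: list, event: dict) -> None:
    """Append an event, chaining it to the hash of the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for record in log:
        body = {k: record[k] for k in ("event", "timestamp", "prev_hash")}
        if record["prev_hash"] != prev_hash:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

log: list = []
append_record(log, {"model": "example-llm", "decision": "content_flagged"})
append_record(log, {"model": "example-llm", "decision": "content_cleared"})
print(verify(log))  # True; altering any record makes this False
```

Because each record commits to the hash of its predecessor, altering or deleting any past decision breaks verification, which is exactly the property an external auditor needs.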
Responsible innovation, including auditability of AI systems and mitigation of their energy implications, should be prioritized before massive scaling. Ensuring that AI systems are safe, secure, and trustworthy is what will actually secure our collective future. Altman may favor scaling first and fixing later, but it is essential to take the right path now.
Dr. Merav Ozair is an expert in emerging technologies who emphasizes responsible AI and the need to address its risks and challenges before scaling.
Please note that this article is for informational purposes only and should not be considered legal or investment advice. The views expressed here are solely those of the author and do not necessarily represent the views of Cointelegraph.