The AI industry is grappling with the scarcity and high cost of the GPUs required to train complex models. DEKUBE has introduced a network for distributed AI training that aims to democratize AI development by harnessing consumer-grade GPUs.
The demand for computational power in the AI industry, particularly for training large language models (LLMs), is growing exponentially. However, there is a shortage of high-end GPUs like Nvidia’s H100 and A100, which has created bottlenecks in the sector.
Larry Ellison, the founder and chairman of Oracle, shared an anecdote that highlights the critical importance and scarcity of high-end GPUs. He recounted a dinner with Jensen Huang, the CEO and founder of Nvidia, where Ellison and Elon Musk found themselves “begging” for access to Nvidia’s enterprise-grade technology.
The scarcity of high-end GPUs, coupled with technological and production constraints, has led to a near-monopoly by major corporations, stifling innovation and putting smaller entities and researchers at a disadvantage.
However, there is untapped computational power in consumer-grade GPUs that could democratize AI development. The challenge lies in these GPUs' limitations in memory, compute, and bandwidth, which have traditionally made them unsuitable for training large AI models.
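To make the memory constraint concrete, here is a back-of-the-envelope sketch. The 24 GB figure assumes a high-end consumer card such as an RTX 4090, and ~16 bytes per parameter is the common rule of thumb for mixed-precision Adam training (fp16 weights and gradients plus fp32 optimizer state, before activations); neither number comes from DEKUBE.

```python
# Back-of-the-envelope: training memory for a large model vs. the
# VRAM on a single consumer GPU.

PARAMS = 70e9            # a 70B-parameter model
BYTES_PER_PARAM = 16     # fp16 weights (2) + grads (2) + fp32 Adam state (12)
CONSUMER_VRAM_GB = 24    # e.g., an RTX 4090 (illustrative assumption)

need_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"~{need_gb:,.0f} GB needed for training; a 24 GB card holds "
      f"only {CONSUMER_VRAM_GB / need_gb:.1%} of that")
```

At roughly 1,120 GB for a 70B model, no single consumer card comes close, which is why distributing the workload across many such cards is the only viable path.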
DEKUBE aims to revolutionize AI infrastructure by building a global AI training network powered by consumer-grade GPUs. This use of consumer hardware unlocks distributed AI training at scale, making it both flexible and cost-efficient, and puts advanced AI development within reach of far more teams.
One of DEKUBE’s notable initiatives is its GPU mining event, in which users connect their GPUs to the network and earn DEKUBE points. These points can be converted into tokens once the mainnet launches, rewarding contributors and encouraging community participation.
DEKUBE’s technology tackles the efficiency bottlenecks of distributed AI training by optimizing several layers of the stack: network transmission, the models themselves, the data sets, and the training pipeline. These optimizations allow computational resources from consumer-grade GPUs to be used effectively, addressing both the shortage of computational power and the high cost of training large AI models.
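DEKUBE has not published the internals of these optimizations, but gradient compression is a standard way to reduce network transmission in distributed training over consumer internet links. The sketch below shows top-k gradient sparsification, one well-known technique of this kind, in NumPy; it is an illustration of the general approach, not DEKUBE’s actual protocol.

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient
    entries, returning (indices, values) instead of the dense tensor.
    Sending ~1% of entries cuts per-step traffic by well over an
    order of magnitude, which matters on consumer uplinks."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k magnitudes
    return idx, flat[idx]

def desparsify(idx: np.ndarray, vals: np.ndarray, shape) -> np.ndarray:
    """Rebuild a dense gradient with zeros everywhere else."""
    flat = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    flat[idx] = vals
    return flat.reshape(shape)

# Example: a worker compresses its gradient before upload.
grad = np.random.randn(1024, 1024).astype(np.float32)
idx, vals = topk_sparsify(grad, ratio=0.01)
restored = desparsify(idx, vals, grad.shape)
print(f"sent {idx.size:,} of {grad.size:,} entries "
      f"({idx.size / grad.size:.1%} of the dense gradient)")
```

In practice, schemes like this are paired with error feedback (accumulating the dropped entries locally) so that sparsification does not degrade convergence.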
DEKUBE has successfully deployed Llama 2 70B on its distributed compute network, demonstrating the feasibility of using widely available hardware to train complex AI models. The planned deployment of Grok-1, a 314-billion-parameter model and one of the largest open-source LLMs, aims to further test the limits of distributed computing in AI and to enhance the scalability and accessibility of high-caliber AI technologies.
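For a sense of scale, the following sketch estimates how many 24 GB consumer GPUs are needed merely to hold each model’s fp16 weights. The VRAM figure is an illustrative assumption, and DEKUBE’s actual partitioning scheme is not described here.

```python
import math

CONSUMER_VRAM_GB = 24  # illustrative consumer-card VRAM
MODELS = {"Llama 2 70B": 70e9, "Grok-1 (314B)": 314e9}

for name, params in MODELS.items():
    weights_gb = params * 2 / 1e9  # 2 bytes per fp16 parameter
    gpus = math.ceil(weights_gb / CONSUMER_VRAM_GB)
    print(f"{name}: ~{weights_gb:,.0f} GB of weights -> "
          f"at least {gpus} x 24 GB GPUs just for storage")
```

Training adds gradients, optimizer state, and activations on top of the weights, so the real GPU count is considerably higher; the point is that the totals are within reach of a network of consumer cards.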
The platform has drawn industry-wide attention and support, with technical experts dedicating more than three years to its development. Notable AI entrepreneurs and leading miners contributed to the latest financing round, raising funds to build the most extensive distributed AI training infrastructure, with over 20,000 GPUs online by the second quarter of 2024.
DEKUBE aims to cut the cost and lead time of procuring computational power for AI model developers, accelerating technological progress in the industry. The platform will continue to support open-source LLM developers and teams, invest in and incubate outstanding AI projects, and, in the future, serve as a bridge between them and Web3 to promote innovation and development.
By easing access to the computational power that high-end GPUs normally provide, DEKUBE paves the way for innovation, greater accessibility, and broader participation in advancing AI. In harnessing consumer-grade GPUs to overcome the scarcity and high cost of compute, DEKUBE positions itself at the forefront of a transformative movement in the AI sector.