Amazon Web Services (AWS) and Nvidia on Tuesday announced new initiatives in their strategic collaboration that will focus on adding supercomputing capabilities to the companies’ artificial intelligence (AI) infrastructure.
The announcements were made at the AWS re:Invent conference and span several significant projects. A standout initiative is Project Ceiba, an advanced AI supercomputer that will be integrated with a range of AWS services. Through the collaboration, Nvidia will gain access to an extensive set of AWS capabilities, including secure Amazon Virtual Private Cloud (VPC) networking and high-performance block storage.
Project Ceiba is earmarked for research and development aimed at advancing AI, particularly large language models (LLMs). It will also support graphics applications, including the generation of images, videos, and 3D content, as well as simulation, digital biology, robotics, self-driving cars, Earth-2 climate prediction, and other domains.
AWS and Nvidia will also partner to power Nvidia DGX Cloud, an AI supercomputing service that gives enterprises access to multi-node supercomputing for training complex LLMs and generative AI models. DGX Cloud will be integrated with Nvidia AI Enterprise software and will give customers direct access to Nvidia's AI experts.
In a pioneering move, Amazon is set to become the first cloud provider to offer Nvidia's GH200 Grace Hopper Superchips with multi-node NVLink technology on its Amazon Elastic Compute Cloud (EC2) platform. The integration is poised to let Amazon EC2 offer up to 20 terabytes of memory, enabling terabyte-scale workloads on the platform.
