
NVIDIA COMPUTEX 2024 NEWS: Data Centre and Enterprise Announcements

At his COMPUTEX keynote address, NVIDIA founder and CEO Jensen Huang made the following data centre and enterprise announcements:


Computer Industry Joins NVIDIA to Build AI Factories and Data Centres for the Next Industrial Revolution
NVIDIA and the world’s top computer manufacturers today unveiled an array of NVIDIA Blackwell architecture-powered systems featuring Grace CPUs, NVIDIA networking and infrastructure for enterprises to build AI factories and data centres to drive the next wave of generative AI breakthroughs. ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron, and Wiwynn will deliver cloud, on-premises, embedded and edge AI systems using NVIDIA GPUs and networking.

Read full press release: https://nvidianews.nvidia.com/news/computer-industry-ai-factories-data-centers

NVIDIA Supercharges Ethernet Networking for Generative AI
NVIDIA also announced widespread adoption of the NVIDIA Spectrum-X Ethernet networking platform as well as an accelerated product release schedule. CoreWeave, GMO Internet Group, Lambda, Scaleway, STPX Global, and Yotta are among the first AI cloud service providers embracing NVIDIA Spectrum-X to bring extreme networking performance to their AI infrastructures. Several NVIDIA partners have announced Spectrum-X-based products, including ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Wistron, and Wiwynn, joining Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro in incorporating the platform into their offerings.

Read full press release: https://nvidianews.nvidia.com/news/nvidia-supercharges-ethernet-networking-for-generative-ai

NVIDIA NIM Revolutionises Model Deployment, Now Available to Transform World’s Millions of Developers Into Generative AI Developers


The world’s 28 million developers can now download NVIDIA NIM — inference microservices that provide models as optimised containers — to deploy on clouds, data centres or workstations, giving them the ability to easily build generative AI applications for copilots, chatbots and more, in minutes rather than weeks. These new generative AI applications are becoming increasingly complex and often utilise multiple models with different capabilities for generating text, images, video, speech, and more.

Read full press release: https://nvidianews.nvidia.com/news/nvidia-nim-model-deployment-generative-ai-developers
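
NIM containers are typically accessed through an industry-standard, OpenAI-compatible REST API once the container is running. The sketch below is a minimal illustration only, assuming a NIM container already deployed locally on port 8000; the endpoint URL and the model identifier (meta/llama3-8b-instruct) are assumptions for the example and will vary with the microservice you actually pull.

```python
# Minimal sketch: querying a locally running NIM container through its
# OpenAI-compatible chat completions endpoint. The URL, port, and model
# identifier below are assumptions for illustration; consult the
# documentation of the container you pull for the actual values.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

payload = {
    "model": "meta/llama3-8b-instruct",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Draft a short summary of NVIDIA's COMPUTEX 2024 data centre news."}
    ],
    "max_tokens": 256,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the interface follows the OpenAI API convention, existing client code can generally be pointed at a NIM endpoint by changing only the base URL and model name.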