Nvidia reveals the H200, its latest premium chip designed for AI model training

Nvidia made waves on Monday with the unveiling of its latest powerhouse in AI processing, the H200 graphics processing unit. Positioned for training and deploying cutting-edge artificial intelligence models, the H200 builds on the success of its predecessor, the H100, famously employed by OpenAI for training GPT-4.



Demand for these chips, priced between $25,000 and $40,000 per unit, is surging across major corporations, startups, and government agencies. Excitement over the H100, the chip used to train the largest AI models, has helped drive Nvidia's stock up more than 230% in 2023, and the company expects around $16 billion in revenue for its fiscal third quarter.


The H200's headline upgrade is 141GB of next-generation 'HBM3e' memory, which improves its performance at inference, the work of generating output from an already-trained model. Nvidia says the H200 delivers nearly twice the output speed of the H100, citing tests with Meta's Llama 2 LLM.

Expected to ship in the second quarter of 2024, the H200 will compete with AMD's MI300X GPU; both chips add extra memory over their predecessors so that larger models can fit on the hardware during inference.
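
To see why the extra memory matters, here is a rough back-of-envelope sketch. The helper function and the FP16 assumption below are illustrative, not from Nvidia's announcement: at inference time, a model's weights alone occupy roughly its parameter count times the bytes per parameter.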
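```python
# Back-of-envelope sketch: approximate GPU memory needed just to hold a
# model's weights at inference time. Assumptions: FP16 weights (2 bytes
# per parameter) and no allowance for the KV cache, activations, or
# framework overhead, all of which add to the real footprint.

def weights_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Memory in GB for model weights (FP16 = 2 bytes per parameter)."""
    return num_params * bytes_per_param / 1e9

# Llama 2's largest variant has roughly 70 billion parameters.
need = weights_gb(70e9)  # ~140 GB in FP16
print(f"Llama 2 70B weights (FP16): ~{need:.0f} GB")
print(f"Fits in the H200's 141 GB?  {need <= 141}")  # True
print(f"Fits in the H100's 80 GB?   {need <= 80}")   # False
```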
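By that arithmetic, a model of Llama 2 70B's size squeezes into a single H200's memory at FP16 but not into the H100's 80GB, which is exactly the kind of gap the added capacity is meant to close.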


Crucially, Nvidia says the H200 is compatible with the H100, so AI companies already training on the previous chip won't need to change their server systems or software to adopt the new one. It will be available in four-GPU or eight-GPU configurations within Nvidia's HGX complete systems, as well as in the GH200, which pairs the H200 GPU with an Arm-based processor.


However, the H200's reign may be short-lived: Nvidia is shifting to a one-year release cadence, with the B100, based on the upcoming Blackwell architecture, already slated for 2024. As AI chip technology evolves at this pace, expect more announcements from Nvidia to follow quickly.