Nvidia may be considering sockets for its next mega AI GPUs, but it won't happen with GeForce graphics cards

The CPU in desktop PCs, workstations and servers is almost always mounted in a socket. In large AI servers, particularly those built around Nvidia chips, the processors are soldered directly to the board instead, making upgrades and repairs more difficult. According to a report, Nvidia could be about to change its mind.

The report comes from TrendForce, via Chiphell, and claims that Nvidia's next series of Blackwell AI processors, the B300 line, will switch to socketed designs. Soldering the chips directly to the board is the more efficient arrangement, but it can be a pain to maintain and service.

TrendForce also notes that the change will benefit manufacturers who build Nvidia AI hardware, as it will reduce the amount of surface-mounting equipment required, or at least reduce the time spent using existing equipment to assemble Nvidia systems.

AMD already uses a connector for its Instinct MI300A monster chip, specifically an SH5 socket that looks suspiciously similar to the SP5 socket used by AMD's EPYC server CPUs. Intel, by contrast, follows Nvidia's approach with its Gaudi 3 AI accelerators, but since relatively few companies use that processor, there's little pressure on Intel to change.

All of this means little to the average consumer, and you can be sure you won't see a socketed consumer GPU any time soon. That's because AMD's and Nvidia's mega AI accelerators come with their RAM in the same package as the processing chiplets, so there's no separate memory for the user to deal with when swapping out the accelerator.

Discrete graphics cards, by contrast, have their RAM soldered to the circuit board alongside the GPU. There have been consumer GPUs with VRAM on the package, the Radeon VII being a good example, but the cost of that approach compared with high-speed GDDR6 makes it uneconomical for large-scale products.

You may then wonder why the GPU and VRAM aren't both socketed, like the CPU and system RAM in your desktop PC. Doing so would not only reduce the performance of the graphics card's memory system but also increase the cost of manufacturing the card.

Given how expensive graphics cards already are, I don't think anyone would want to pay extra just for the ability to upgrade the RAM and GPU on the same circuit board. Memory slots on motherboards also follow a standard, and CPUs are designed to fit that standard.

No such standard exists for GPUs, and I don't think AMD, Intel, or Nvidia will ever agree on a common VRAM socket design. It would also make GPUs unnecessarily complex.

Look at AMD's AM4 Ryzen processors. They all have a dual-channel memory controller that is 128 bits wide, whereas Nvidia's current RTX 40-series GPUs range between 96 and 384 bits. Catering for that range of bus widths is too expensive and complex to fit into a single socket.
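To put that gap in perspective, here's a rough back-of-the-envelope bandwidth comparison. The specific memory speeds are illustrative assumptions of my own (dual-channel DDR5-6000 for a desktop CPU, a 384-bit GDDR6X interface at 21 Gbps for a top-end RTX 40-series card), not figures from the report:

```python
# Rough peak-bandwidth comparison: socketed system RAM vs soldered GDDR6X.
# The memory speeds below are illustrative assumptions, not figures from
# the TrendForce report.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gt_s: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bytes) x (transfers per second)."""
    return (bus_width_bits / 8) * data_rate_gt_s

# Dual-channel DDR5-6000 on a desktop CPU: 128-bit bus at 6 GT/s.
cpu_ram = peak_bandwidth_gb_s(128, 6.0)      # ~96 GB/s

# 384-bit GDDR6X at 21 Gbps per pin, as on a top-end RTX 40-series card.
gpu_vram = peak_bandwidth_gb_s(384, 21.0)    # ~1008 GB/s

print(f"Dual-channel DDR5-6000:   {cpu_ram:.0f} GB/s")
print(f"384-bit GDDR6X @ 21 Gbps: {gpu_vram:.0f} GB/s")
```

That order-of-magnitude difference relies on the wide, short, carefully tuned traces between the GPU and its soldered VRAM, which is exactly the sort of thing a socketed, user-replaceable memory interface would compromise.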

Discrete GPUs may end up in sockets in the distant future, but for the time being, CPUs, and perhaps Nvidia's cash-cow AI chips, are the only processors getting that treatment.
