Large-scale training needs thousands of GPUs; the GPUs alone would cost hundreds of millions of yuan.
For small-scale models and datasets, a single high-performance GPU or a few GPUs might suffice. Larger, more complex models and datasets, however, require many GPUs, potentially hundreds or thousands in a distributed system.
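A rough back-of-envelope check of the "hundreds of millions" figure above. The unit price is an assumption for illustration (an H100-class card taken at roughly ¥250,000), not a real quote, and the estimate ignores networking, hosts, and power:

```python
# Back-of-envelope: GPU hardware cost of a large training cluster.
# UNIT_PRICE_CNY is an assumed figure, not a real quote.
UNIT_PRICE_CNY = 250_000  # assumed price of one H100-class GPU, in yuan

def cluster_gpu_cost(num_gpus: int, unit_price: float = UNIT_PRICE_CNY) -> float:
    """Total GPU spend for a cluster, ignoring networking, power, and host servers."""
    return num_gpus * unit_price

# A few thousand GPUs already lands in the hundreds of millions of yuan:
for n in (2_000, 10_000):
    print(f"{n} GPUs -> ~{cluster_gpu_cost(n) / 1e8:.1f} hundred-million yuan")
    # 2,000 GPUs  -> ~5.0 hundred-million yuan (5亿)
    # 10,000 GPUs -> ~25.0 hundred-million yuan (25亿)
```

So even at a few thousand cards, GPU spend alone is on the order of 亿-level (hundreds of millions of yuan), consistent with the claim above.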
【 Quoting liangf's post: 】
: I thought you were going to do large-scale training
--
FROM 116.128.189.*