InfraHub Compute uses GPUs – chips designed to process large amounts of data in parallel. The NVIDIA GPUs the company uses are the gold standard for accelerating AI, as well as rendering, analytics, simulation, and scientific computing.
GPUs have far more cores than CPUs, enabling them to handle many tasks simultaneously, which is crucial for the matrix multiplications and vector operations typical of AI workloads. AI inference, for example, is 237 times faster on the NVIDIA A100 GPU than on traditional CPUs1. Back in 2020, data centre budgets were still heavily skewed towards CPUs, with these chips comprising 83% of total spend on processors; however, as Moore’s Law comes to an end for CPUs, GPUs are expected to become the dominant processor2.
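A minimal sketch (not the company's stack, and plain Python rather than GPU code) of why many cores help: a matrix multiplication decomposes into many independent dot products, so a GPU can assign each output cell to its own thread, while a CPU works through far fewer at a time.

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    n, k, m = len(a), len(b), len(b[0])
    # Each (i, j) output cell is an independent dot product --
    # on a GPU, every one of them can be computed simultaneously.
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # → [[19, 22], [43, 50]]
```

For a 1,000×1,000 matrix there are a million such independent cells, which is exactly the shape of work a GPU's thousands of cores are built for.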
Moore’s Law states that every two years the processing power of computer chips doubles while the cost halves. The law is now breaking down because we are reaching the physical limits of silicon-based CPUs: the spaces between components on a chip can no longer be shrunk. This, in turn, makes GPUs the only realistic option.
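To illustrate how that doubling-and-halving compounds, here is a hypothetical projection (the figures are arithmetic from the stated law, not vendor data):

```python
def project(years, period=2):
    """Return (performance multiple, relative cost) after `years`,
    assuming performance doubles and cost halves every `period` years."""
    doublings = years // period
    return 2 ** doublings, 1 / 2 ** doublings

perf, cost = project(10)
print(perf, cost)  # a decade of Moore's Law: 32x the performance at 1/32 the cost
```

It is the end of this exponential curve for CPUs, rather than any single year's figures, that shifts the advantage to GPUs.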
Huang’s Law, named after NVIDIA CEO Jensen Huang, holds that these performance gains owe as much to AI software as to improvements in chip hardware, and GPUs are now outpacing CPUs in terms of advancement, with capabilities roughly tripling every two years.
As adoption of artificial intelligence and accelerated computing increases, so too does the customer base.