There isn’t much in common with a GPU compute datacenter and a software or server compute datacenter besides the power and cooling.
I’m…not sure you have a good handle on how AWS, GCP, or Azure hyperscaler data centers (which are the type the article is referring to) are set up if this is your take. These providers don’t sell AI GPUs as a standalone service. It’s the whole tip-to-tail of IT servers and services, with AI simply being one service offering. GPUs don’t exist as a standalone product; they’re installed in compute instances. Even the AI managed service offerings of these hyperscalers are simply compute instances chock-full of GPUs with a layer of software abstraction over top of them. You don’t need to take my word for it. It’s right on the sales pages for the hyperscalers:
Here’s GCP:
Compute Engine is GCP’s Virtual Machine product. GKE is GCP’s Kubernetes (compute container) product.
Here’s AWS:
EC2 is AWS’s Virtual Machine product. You can see they have many different instance types (VM types) that have GPUs available to them.
GPUs are not sold as a standalone product by these companies. They are sold in concert with the compute products and services.
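To make that concrete, here is roughly what provisioning GPU capacity on these clouds looks like: you create a VM and attach GPUs to it. The instance name, zone, machine type, accelerator type, and AMI ID below are illustrative placeholders, so treat this as a sketch rather than copy-paste-ready commands.

```shell
# GCP: a GPU is an attribute of a Compute Engine VM, not a product of its own.
# (instance name, zone, machine type, and accelerator type are illustrative)
gcloud compute instances create my-gpu-vm \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --maintenance-policy=TERMINATE

# AWS: same idea — you launch an EC2 instance of a GPU-bearing type
# (e.g. g4dn.xlarge). The AMI ID here is a placeholder.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type g4dn.xlarge \
  --count 1
```

In both cases the GPU only exists as part of a VM you rent; there is no "buy a bare GPU" API.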
The possible exception might be Oracle Cloud Infrastructure’s (OCI) Supercluster, which uses RoCE to stitch together thousands of GPUs in an AI switched-fabric model (source). But that is the exception, not the rule. 99% of the AI offerings from these hyperscalers are GPUs stuffed into servers that can also be used for regular old non-AI compute needs.
These are useless for, let’s say, streaming video or other content hosting.
GPUs were used for general-purpose computing (GPGPU) long before AI took the stage. Nvidia released CUDA, its GPGPU platform, in 2007, 18 years ago.