There isn’t much in common between a GPU compute datacenter and a software or server compute datacenter besides the power and cooling. These are useless for, let’s say, streaming video or other content hosting.
Great for transcoding video (something Google does a shitload of), 3D material and fluid simulations (every engineering firm), large scientific projects (BOINC gets most of its compute from donated GPU time), and ray-tracing for movies (animated movies use a shitload of GPU compute).
They are almost certainly going to be sold at very, very low rates after the bubble pops. Lots of supply!
Think of it like economic dumping. Going to make it hard for others to compete and anybody who has demand for some of that supply is going to be in great shape!
There isn’t much in common between a GPU compute datacenter and a software or server compute datacenter besides the power and cooling.
I’m…not sure you have a good handle on how AWS, GCP, or Azure hyperscaler data centers (which are the type the article is referring to) are set up if this is your take. These providers don’t sell AI GPUs or compute as standalone services. It’s the whole tip-to-tail of IT servers and services, with AI simply being one service offering. GPUs don’t exist as a standalone product; they’re installed in compute instances. Even the managed AI offerings of these hyperscalers are simply compute instances chock-full of GPUs with a layer of software abstraction over top of them. You don’t need to take my word for it. It’s right on the sales pages of the hyperscalers:
Here’s GCP:
Compute Engine is GCP’s Virtual Machine product. GKE is GCP’s Kubernetes (container orchestration) product.
Here’s AWS:
EC2 is AWS’s Virtual Machine product. You can see they have many different instance types (VM types) that have GPUs available to them.
GPUs are not sold as a standalone product by these companies. They are sold in concert with the compute products and services.
The possible exception might be Oracle Cloud Infrastructure’s (OCI) Supercluster, which uses RoCE to stitch together thousands of GPUs in an AI switched-fabric model (source). But that is the exception, not the rule. 99% of the AI offerings from these hyperscalers are GPUs stuffed into servers that can be used for regular old non-AI compute needs.
These are useless for, let’s say, streaming video or other content hosting.
GPUs were used for General Purpose (GPGPU) computing long before AI took the stage. Nvidia released CUDA, its GPGPU programming platform, 18 years ago in 2007.
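To make the GPGPU point concrete, here’s a minimal sketch of the classic CUDA “vector add” kernel, the kind of bulk-arithmetic workload GPUs have run since CUDA’s early days; nothing about it is AI-specific. This is illustrative only and assumes an Nvidia GPU plus the CUDA toolkit (nvcc) are available.

```
// Minimal GPGPU sketch: element-wise vector addition on the GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    // One thread per element of the arrays.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // ~1M elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory keeps the sketch short; production code often
    // uses explicit cudaMalloc/cudaMemcpy instead.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);         // should be 3.0 if the kernel ran

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Swap the addition for a pixel transform or a physics step and you have the shape of the non-AI GPU workloads mentioned above: transcoding, simulation, and rendering.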
There isn’t much in common between a GPU compute datacenter and a software or server compute datacenter besides the power and cooling. These are useless for, let’s say, streaming video or other content hosting.
Great for transcoding video (something Google does a shitload of), 3D material and fluid simulations (every engineering firm), large scientific projects (BOINC gets most of its compute from donated GPU time), and ray-tracing for movies (animated movies use a shitload of GPU compute).
We don’t need huge portions of our electrical grid dedicated to any of that.
They are almost certainly going to be sold at very, very low rates after the bubble pops. Lots of supply!
Think of it like economic dumping. Going to make it hard for others to compete and anybody who has demand for some of that supply is going to be in great shape!
None of that has anything to do with the associated electrical usage.