Google is taking on traditional chipmakers such as Intel and Nvidia by deploying its own chips, designed to boost the performance of the company's artificial intelligence software. Over the past year, Google has deployed
“thousands” of these specialised artificial intelligence chips,
called Tensor Processing Units (TPUs), in servers within its datacentres,
said Urs Holzle, the company’s senior VP of infrastructure.
“If you use cloud voice recognition, then it goes to TPU. If you use
Android voice recognition then it goes to TPUs,” Holzle said. “It’s
been in pretty widespread use for about a year.”
Google’s chip connects to servers via the PCI Express (PCIe) interface, meaning it can
be slotted into the company’s existing computers, augmenting them with faster artificial
intelligence capabilities. It represents Google’s first attempt at designing
specialised hardware for its AI workloads.