Google Shares Performance Characteristics for its Machine Learning Chip (Apr 5, 2017)
It’s time to roll out that old Alan Kay maxim again: “people who are really serious about software should make their own hardware”. Google began work on its own machine learning chip, which it calls a Tensor Processing Unit or TPU, a few years back, and has now shared performance figures suggesting it is faster and more power-efficient than the CPUs and GPUs on the market today for machine learning tasks. While Nvidia and others have done very well out of selling GPU lines originally designed for computer graphics to companies doing machine learning work, Google is doing impressive work here too, having also open sourced TensorFlow, the software framework it uses for machine learning. As I’ve said before, it’s extremely hard to answer definitively the question of who’s ahead in AI and machine learning, but Google consistently churns out evidence that it’s moving fast and doing very interesting things in the space.
via Google Cloud Platform Blog