This has been a big week for specialized chips designed to accelerate machine learning tasks. Google announced the open beta of its second-generation Tensor Processing Unit, Amazon is reportedly working on a dedicated AI chip for its Echo smart speaker, and ARM announced its own AI hardware.

It’s easy to see why that’s happening: the math needed to run machine learning algorithms is computationally intensive, and chips optimized for the task perform it faster and more efficiently than general-purpose processors. What’s more, data scientists keep pushing the envelope of accuracy by creating ever more complex models, which in turn demand more computing power. Specialized silicon boosts efficiency, making it easier to run complex models on beefy servers as well as low-power devices, though the particular hardware can differ between applications.
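To make that concrete, here is a minimal sketch in plain Python with NumPy showing how quickly the arithmetic adds up. The layer sizes are my own illustrative numbers, not tied to any of the chips or models above: a single fully connected layer is just one large matrix multiplication, and its cost grows with the product of its dimensions, which is exactly the kind of workload these accelerators target.

```python
import numpy as np

# One fully connected layer: outputs = inputs @ weights
# Illustrative sizes, not drawn from any real model.
batch, n_in, n_out = 64, 4096, 4096

inputs = np.random.rand(batch, n_in).astype(np.float32)
weights = np.random.rand(n_in, n_out).astype(np.float32)

# The core operation AI chips are built to accelerate.
outputs = inputs @ weights

# Each output element needs n_in multiply-accumulates, so the layer
# costs roughly 2 * batch * n_in * n_out floating-point operations.
flops = 2 * batch * n_in * n_out
print(f"~{flops / 1e9:.1f} GFLOPs for one layer, one forward pass")
```

That single layer already works out to about two billion operations per forward pass, and modern networks stack dozens or hundreds of such layers, which is why running them on general-purpose processors gets expensive fast.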

AI-optimized silicon is popping up everywhere. It’s already in your phone, or will be within a few years. Meanwhile, the three major cloud players all have their own versions of dedicated AI hardware, and chipmakers are building their own capabilities as well. Then there’s a conga line of startups, each with its own take on how to tackle the same problem.

I expect that hardware-based AI accelerators will become as common as, if not more common than, the dedicated signal processors for video decoding, networking hardware, and other purpose-built silicon that already makes its way into our computers, smartphones, tablets, and other electronics.

But as all that comes to pass, chipmakers and consumers will have to consider the lifecycle of…
