Intel Details Lake Crest Deep Neural Network AI Accelerators
Samuel Wan / 1 year ago
One of the big movements in the tech sector is the rise of AI. From self-driving cars to smart assistants, AI is nearing everyday reality. To power more complicated AIs, both AMD and Nvidia have made custom deep learning accelerators. Not one to be left out of the loop, Intel is getting into the fray, supplementing its current Xeon Phi compute accelerators with the new Lake Crest Deep Neural Network accelerators.
Unlike Nvidia's and AMD's solutions, which are modified GPU designs, or Xeon Phi, which is modified x86, Lake Crest is a whole new architecture tailored for deep learning. This Flexpoint architecture is combined with HBM2 in an MCM using an interposer, not unlike AMD's Fiji. Each Lake Crest chip features 32GB of HBM2 for a total of 1 TB/s of bandwidth, with each 8GB stack getting its own memory controller. Within each Lake Crest chip there are 12 compute clusters, each with several cores. The clusters are connected by a new interconnect with a total of 12 links, each 20x faster than PCIe.
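The basic idea behind Flexpoint-style numerics is that a whole tensor shares a single exponent while each element stores only an integer mantissa, trading a little precision for much cheaper fixed-point math. The sketch below is a simplified illustration of that shared-exponent scheme, not Intel's actual implementation; the function names and the 16-bit mantissa width are assumptions for the example.

```python
import math

def flexpoint_encode(values, mantissa_bits=16):
    # Pick one shared exponent so the largest magnitude in the tensor
    # just fits in the signed integer mantissa range.
    max_mag = max(abs(v) for v in values)
    max_int = 2 ** (mantissa_bits - 1) - 1  # e.g. 32767 for 16 bits
    exp = math.ceil(math.log2(max_mag / max_int)) if max_mag > 0 else 0
    scale = 2.0 ** exp
    # Every element becomes an integer mantissa at the shared scale.
    mantissas = [round(v / scale) for v in values]
    return mantissas, exp

def flexpoint_decode(mantissas, exp):
    # Reconstruct approximate floats from mantissas and shared exponent.
    scale = 2.0 ** exp
    return [m * scale for m in mantissas]
```

Because all elements share one exponent, the hardware can perform multiply-accumulates as plain integer operations, which is part of why such formats suit deep learning accelerators.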
In the future, Intel also plans to combine a Xeon or Xeon Phi with a Lake Crest chip to form Knights Crest, an all-in-one solution. For now, Lake Crest will be paired with Skylake Xeons and the Knights Mill Xeon Phi. The end goal is a 100x speed improvement for machine learning by 2020. With so many different options available, it will be interesting to see whether one platform becomes dominant or they coexist side by side going forward.