One of the big movements in the tech sector is the rise of AI. From self-driving cars to smart assistants, AI is quickly nearing everyday reality. To power more complicated AI workloads, both AMD and Nvidia have built custom deep learning accelerators. Not one to be left out, Intel is entering the fray, supplementing its current Xeon Phi compute accelerators with the new Lake Crest deep neural network accelerator.
Unlike Nvidia's and AMD's solutions, which are modified GPU designs, or Xeon Phi, which is modified x86, Lake Crest is a wholly new architecture tailored for deep learning. The design, built around Intel's Flexpoint numerical format, is combined with HBM2 on an interposer-based MCM, not unlike AMD's Fiji. Each Lake Crest chip features 32 GB of HBM2 for a total of 1 TB/s of bandwidth, with each 8 GB stack getting its own controller. Within each chip there are 12 compute clusters, each containing several cores. The clusters are connected by a new interconnect whose links are claimed to be 20x faster than PCIe, with 12 links in total.
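Intel has not published Lake Crest's full internals, but Flexpoint has been described as a block floating-point style format: a whole tensor of integer mantissas shares a single exponent, giving near-fixed-point arithmetic cost with a floating-point-like dynamic range. As a rough illustration only (the exact bit widths and exponent handling here are assumptions, not Intel's specification), a shared-exponent encoding can be sketched like this:

```python
import numpy as np

def to_shared_exponent(values, mantissa_bits=16):
    """Encode an array as integer mantissas that all share one exponent
    (a block floating-point scheme in the spirit of Flexpoint;
    the real format's details differ and are not fully public)."""
    max_abs = float(np.max(np.abs(values)))
    if max_abs == 0.0:
        return np.zeros_like(values, dtype=np.int32), 0
    # Pick the exponent so the largest magnitude fits in a signed
    # mantissa of the given width.
    exponent = int(np.floor(np.log2(max_abs))) + 1 - (mantissa_bits - 1)
    scale = 2.0 ** exponent
    mantissas = np.round(values / scale).astype(np.int32)
    return mantissas, exponent

def from_shared_exponent(mantissas, exponent):
    """Decode integer mantissas plus a shared exponent back to floats."""
    return mantissas.astype(np.float64) * (2.0 ** exponent)

x = np.array([0.5, -1.25, 3.0, 0.0078125])
m, e = to_shared_exponent(x)
x_hat = from_shared_exponent(m, e)
```

The trade-off this sketch shows is the interesting part: multiplies and adds on the mantissas are plain integer operations, which is cheaper in silicon than full floating point, while the shared exponent lets the hardware track the dynamic range of an entire tensor at once.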
In the future, Intel also plans to combine a Xeon or Xeon Phi with a Lake Crest chip to form Knights Crest, an all-in-one solution. For now, Lake Crest will be paired with Skylake Xeons and Knights Mill Xeon Phis. The end goal is a 100x speed improvement in machine learning by 2020. With so many options available, it will be interesting to see whether one platform becomes dominant or they coexist side by side going forward.