The Consumer Electronics Show (CES) may seem like an odd place to announce server processors, but Intel knows full well the eyes of the tech world are on the show. And what better place to corral a bunch of journalists?
First up was shipment of the new Xeon Scalable CPU, code-named Cascade Lake, which features improved artificial intelligence (AI) and memory capabilities. Cascade Lake is the first Xeon to support the company’s Optane DC persistent memory and a new instruction set, called DL Boost, that accelerates AI-based deep learning (DL) inference.
Optane memory plugs into standard memory slots and offers the persistence of flash with much better performance. Think of it as a cache sitting between the SSD and main memory. Cascade Lake paired with Optane will also support multiple terabytes of memory per socket.
Intel’s 10nm Ice Lake architecture
The bigger news was that Intel showed off the 10nm Ice Lake processors, built on a whole new microarchitecture and finally fabricated at 10nm, a process node the company has been stymied at reaching for years. Intel expects to ship desktop and notebook Ice Lake processors at the end of this year, with server processors coming in 2020.
Ice Lake is built on a new core microarchitecture called Sunny Cove, which promises a significant performance improvement over the current Skylake generation through a set of changes too complex to detail here. Suffice it to say, the changes let the processor execute code with much more depth and breadth than Skylake. Sunny Cove also has hardware fixes for the Spectre v2 exploit; Spectre v1 has already been fixed in shipping products.
New AI chip
Intel also announced a brand-new class of AI processor called the Nervana NNP-1 (short for neural network processor), which is being positioned as an alternative to GPU-based AI. GPUs, mostly from Nvidia, are very popular for AI processing due to their massively parallel design, but the drawback is tremendous power draw, often 300 watts or more per chip.
The NNP-1 draws just 100 watts, far less than a GPU. The open question is how it compares in performance to Nvidia’s Tesla V100 GPUs, which are monsters. Also, the Nervana chip is focused on the inference side of AI.
Machine learning with neural networks is a two-stage process. First comes training, where the network learns whatever it needs to learn from a data set; then comes inference, where that trained model is put to work on new information. The more resources you put into training, the more accurate the model can be.
Training is the computationally heavy part; inference takes far less processing because the system has already done its homework. So, it could be that neural networks will be trained on Teslas and then run on NNP-1s for inference, since a Tesla delivers far more power than inference needs, making inference an obvious place to save energy. More will come out when the NNP-1 ships later this year.
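The asymmetry between the two stages is easy to see in miniature. Here is an illustrative sketch (not Intel or Nervana code, just a toy logistic-regression "network" learning the AND function in plain Python): training requires thousands of forward-and-backward passes, while inference is a single cheap forward pass — exactly why a low-power chip can handle inference even if training stays on big GPUs.

```python
import math
import random

def forward(w, b, x):
    """Inference: one cheap forward pass through the model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def train(data, epochs=5000, lr=0.5):
    """Training: thousands of forward AND backward passes."""
    random.seed(0)
    w, b = [random.random(), random.random()], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = forward(w, b, x)   # forward pass
            g = p - y              # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g            # backward pass (weight update)
    return w, b

# Truth table for AND: the "data set" the model trains on.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w, b = train(data)              # heavy: 5000 epochs x 4 examples
print(forward(w, b, [1, 1]))    # light: one forward pass, close to 1.0
```

The numbers are tiny here, but the shape of the workload is the same at data-center scale: training amortizes enormous compute into the weights, and inference just reuses them.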
Other Intel initiatives
Intel also unveiled Project Athena, a new chip technology specifically for advanced laptops to give them both 5G and AI capabilities. As it did with Centrino and ultrabooks, Intel is again trying to drive laptop innovation. Right now the project is somewhat amorphous, and Intel didn’t give a lot of details. The company plans to discuss what OEMs, partners, and software developers need in order to enable a positive user experience for 5G and AI.
Also on the 5G agenda is a 10nm network system on chip (SoC), code-named Snow Ridge, developed specifically for 5G wireless access base stations and edge computing networks, with the goal of bringing 5G to those new markets.
Finally, there was Lakefield, a new client platform featuring a hybrid chip that puts an x86 processor and an Atom processor on the same die. That means a smaller motherboard footprint and thus the ability to use Lakefield in thin-and-light form factors. Lakefield is expected to be in production this year.
For a company that’s been without a CEO for six months, Intel isn’t doing too badly.