Intel has announced two new processor designs for large computing organizations, which will be the Blue team’s first chips built specifically for artificial intelligence (AI) workloads.
The development follows Intel’s earlier AI-oriented accelerators, such as the Myriad X Vision Processing Unit, which features a Neural Compute Engine for running deep neural network inference.
The chipmaker is far from the only company building machine learning processors to handle AI algorithms. Google’s Tensor Processing Unit (TPU), Amazon’s AWS Inferentia, and NVIDIA’s NVDLA are some of the other popular solutions, as the need for complex computation continues to increase and demand for more capable, more efficient machines remains high.
The two new chips are the company’s first offerings from its Nervana Neural Network Processor (NNP) line: one will be used to train AI models on large datasets, while the other will handle inference and output.
The Nervana NNP-T, code-named Spring Crest, will be used for training and comes with 24 Tensor processing clusters (units for multi-dimensional vector and matrix processing) developed specifically to sustain neural network workloads. Intel’s new system on a chip (SoC) lets users perform all the computation needed to train an AI system without relying on dedicated GPUs or CPUs.
The Nervana NNP-I, code-named Spring Hill, is the company’s inference SoC that uses its 10-nanometer process technology along with Ice Lake cores to help users implement trained AI systems.
Intel’s new AI-based chips are designed to handle AI workloads in data center environments, especially in supply chain management, so that users no longer have to spend their Xeon chips’ computing capacity on AI and machine learning tasks. Xeon chips are capable of handling such workloads, though they are not nearly as effective or efficient.
Naveen Rao, Vice President and General Manager of Intel’s Artificial Intelligence Products Group, explained how the company’s new processors will help facilitate an AI-driven future: “To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources. Data centers and the cloud need to have access to performant and scalable general-purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed, from hardware to software to applications.”
What are Artificial Intelligence and Machine Learning?
Artificial intelligence (AI) is the simulation of human intelligence processes by computers and machines. These processes include learning (acquiring information and the rules for applying it to new cases), reasoning (using those rules to reach approximate or definite conclusions), and self-correction (for example, evaluating predictions with a confusion matrix).
Convolutional Neural Networks (CNNs) are an extension of neural networks; neural networks fall into the category of machine learning, which is in turn a sub-area of the greater field of artificial intelligence.
Machine learning works in two phases: first, the model learns patterns from an available dataset called the training set; then it applies those learned patterns to an unseen dataset called the test set by predicting results. The accuracy, the percentage of correct predictions on the test set, is taken as a measure of how well the model has been trained.
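The train-then-test workflow above can be sketched in a few lines of Python. This is a minimal illustration, not Intel's software stack: it uses a toy 1-nearest-neighbour classifier and made-up example data, chosen only to show a training set, a test set, and an accuracy score.

```python
# Toy illustration of the two-phase machine learning workflow:
# learn from a training set, then measure accuracy on an unseen test set.
# (Hypothetical data and model, for illustration only.)

def predict(train, x):
    """1-nearest-neighbour: return the label of the closest training point."""
    nearest = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Training set: (feature, label) pairs the model "learns" from.
training_set = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

# Test set: unseen points used to evaluate the trained model.
test_set = [(1.5, "low"), (8.5, "high"), (3.0, "low")]

correct = sum(1 for x, label in test_set if predict(training_set, x) == label)
accuracy = correct / len(test_set)
print(f"accuracy = {accuracy:.2f}")  # fraction of correct predictions
```

Real systems replace the toy classifier with a neural network and far larger datasets, but the evaluation loop, predicting on held-out data and counting correct answers, is the same idea.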