Intel Announces the Nervana Neural Network Processor, Shipping Soon
In a recent newsletter, Brian Krzanich, chief executive officer of Intel Corporation, spoke about cognitive and artificial intelligence technology. Intel is already working toward leadership in areas such as research, investment in hardware, data algorithms and analytics, acquisitions, and technology advancement. Intel will ship the Nervana Neural Network Processor, the industry's first silicon built for neural network processing, before the end of this year.
According to Intel, the Nervana Neural Network Processor promises to revolutionize AI computing across various industries, maximizing the amount of data processed and enabling customers to find greater insights.
Here are some examples of how it could be used:
1. Health care: it will allow for earlier diagnoses with greater accuracy.
2. Social media: providers will be able to deliver a more personalized experience to their customers.
3. Automotive: accelerated learning will help put autonomous vehicles on the road.
4. Weather: it will process immense amounts of data, helping to improve predictions of climate shifts.
Intel also reaffirmed the goal it set last year of achieving a 100-fold increase in AI performance by 2020.
Intel describes this as a new class of hardware that is AI by design. First, it provides blazingly fast data access: Intel Nervana hardware uses new high-capacity, high-speed, high-bandwidth memory to provide the maximum level of on-chip storage and extremely fast memory access.
Second, its throughput is near the theoretical limit: Intel Nervana hardware has separate pipelines for computation and data management, so new data is always available for computation. This pipeline isolation, combined with local memory, lets the chip run at close to its theoretical maximum throughput most of the time.
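The idea of keeping compute fed by staging data in a separate pipeline can be illustrated with a small double-buffering sketch. This is a hypothetical software analogy, not Intel's actual hardware design: a loader thread (the "data management pipeline") stages the next batch while the main loop (the "compute pipeline") works on the current one.

```python
import threading
import queue

# Illustrative sketch only: a loader thread stages the next batch
# while the compute loop works on the current one, so compute
# rarely stalls waiting for data (analogous to pipeline isolation).

def load_batches(batches, staged, done):
    for batch in batches:
        staged.put(batch)   # simulate the data-management pipeline
    staged.put(done)        # sentinel: no more data

def run_pipeline(batches):
    staged = queue.Queue(maxsize=1)  # one batch staged ahead of compute
    done = object()
    loader = threading.Thread(target=load_batches, args=(batches, staged, done))
    loader.start()
    results = []
    while (batch := staged.get()) is not done:
        results.append(sum(batch))   # stand-in for the compute pipeline
    loader.join()
    return results

print(run_pipeline([[1, 2], [3, 4], [5, 6]]))  # → [3, 7, 11]
```

The bounded queue is what makes this a pipeline rather than a plain prefetch: the loader can run at most one batch ahead, so data transfer and computation overlap in time instead of alternating.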
Third, it has built-in networking for unprecedented speed and scalability: Intel Nervana hardware uses bidirectional high-bandwidth links to move data seamlessly between chips, which allows near-linear speedup on current models by assigning more compute power to the task without slowing it down.