If the claims hold up, it could mean that deep learning, the subset of artificial intelligence (AI) that mimics how the human brain works and the technique IBM is using, is set to take off.
IBM wanted to reduce the time it takes for deep learning systems to digest data from days to hours.
The improvements could help radiologists get faster, more accurate reads of anomalies and masses on medical images.
Hillery Hunter, an IBM Fellow and director of systems acceleration and memory at IBM Research, said that, until now, deep learning has largely run on a single server because of the complexity of moving huge amounts of data between different computers.
The problem lies in keeping data synchronized between many different servers and processors.
In its announcement, IBM says it has come up with software that can divide those tasks among 64 servers running up to 256 processors in total, and still reap huge benefits in speed. The company is making that technology available to customers using IBM Power System servers and to other techies who want to test it.
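To see what dividing the work among that many processors involves, consider splitting a training set into per-worker shards. This is a minimal, hypothetical sketch; the helper name and round-robin scheme are illustrative, not IBM's actual software, though the worker count (64 servers times 4 GPUs) matches the hardware described in the article.

```python
# Hypothetical sketch: splitting a training set across a cluster of workers.
# Round-robin dealing gives every worker a near-equal slice of the data.

def shard_dataset(samples, num_workers):
    """Deal samples round-robin so each worker gets a near-equal shard."""
    shards = [[] for _ in range(num_workers)]
    for i, sample in enumerate(samples):
        shards[i % num_workers].append(sample)
    return shards

samples = list(range(1000))           # stand-in for 1,000 training examples
shards = shard_dataset(samples, 256)  # 64 servers x 4 GPUs per server
print(len(shards))                    # 256 shards
print(len(shards[0]))                 # 4 samples (1000 / 256, rounded up)
```

Each worker then trains on its own shard, which is why the synchronization problem described above becomes the central engineering challenge.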
IBM used 64 of its own Power 8 servers—each of which links IBM Power microprocessors with Nvidia graphics processors via a fast NVLink interconnect to facilitate rapid data flow between the two types of chips.
It developed clustering technology that manages the multiple processors within a given server as well as the processors in the other 63 servers. If that traffic management is done incorrectly, some processors sit idle, waiting for something to do.
Each processor has its own data set, but it also needs data from the other processors to get the bigger picture. If the processors get out of sync, they can't learn anything, Hunter explained.
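The synchronization Hunter describes is, in data-parallel training generally, handled by averaging each worker's gradients after every step so all copies of the model receive the same update. The following is a toy single-process sketch of that idea, assuming simple Python lists in place of real GPU tensors; real systems move these gradients over NVLink and the cluster network rather than within one process.

```python
# Toy sketch of synchronous data-parallel learning: each "worker" computes
# a gradient from its own data shard, then all gradients are averaged
# (an allreduce-style step) so every copy of the model stays in sync.

def local_gradient(weights, shard):
    """Toy gradient for fitting y = w*x by least squares on one shard."""
    w = weights[0]
    # d/dw of 0.5*(w*x - y)^2, averaged over the shard
    return [sum((w * x - y) * x for x, y in shard) / len(shard)]

def allreduce_mean(grads):
    """Average the workers' gradients elementwise (the sync step)."""
    n = len(grads)
    return [sum(g[i] for g in grads) / n for i in range(len(grads[0]))]

# Four workers, each seeing a different slice of data drawn from y = 3x.
shards = [[(x, 3 * x)] for x in (1.0, 2.0, 3.0, 4.0)]
weights = [0.0]
for _ in range(100):
    grads = [local_gradient(weights, s) for s in shards]
    avg = allreduce_mean(grads)  # every worker applies the same update
    weights = [w - 0.1 * g for w, g in zip(weights, avg)]
print(round(weights[0], 2))  # converges to 3.0
```

If the averaging step is skipped, each worker drifts toward a model fitted only to its own shard, which is the out-of-sync failure mode Hunter warns about.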