The neural network would perform its tasks normally while covertly carrying malware, a delivery method the researchers warn could become widespread in the future.
Boffins from the University of the Chinese Academy of Sciences took real malware samples and used them to replace half of the neurons in the AlexNet model -- a benchmark-setting classic in the AI field. Even with the malware embedded, the model's accuracy rate stayed above 93.1 percent. The authors concluded that a 178MB AlexNet model can have up to 36.9MB of malware embedded in its structure without being detected, using a technique called steganography. When some of the malware-laden models were tested against 58 common antivirus systems, the malware went undetected.
The advantage over other methods of hacking into businesses or organisations, such as attaching malware to documents or files, is that those approaches often cannot deliver malicious software en masse without being detected.
The AI method would work if an organisation brought in an off-the-shelf machine learning model for any given task (say, a chatbot, or image detection) that had been loaded with malware but still performed its task well enough not to arouse suspicion.
According to the study, this is because AlexNet (like many machine learning models) is made up of millions of parameters and many complex layers of neurons, including what are known as fully-connected "hidden" layers. By keeping the huge hidden layers in AlexNet completely intact, the researchers found that changing some other neurons had little effect on performance.
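To make the idea concrete, here is a minimal Python sketch of how bytes might be hidden inside a layer's parameters. This is an illustration of the general steganographic approach, not the paper's exact procedure: payload bytes overwrite the three low-order bytes of each 32-bit float weight, leaving the sign and most of the exponent intact so each value shifts only slightly.

```python
import struct

def embed_payload(weights, payload):
    """Hypothetical sketch: hide payload bytes in the three
    low-order bytes of each 32-bit float weight. The sign and
    most of the exponent (the high byte) are preserved, so the
    stored values change only modestly."""
    out = list(weights)
    # Split the payload into 3-byte chunks, padding the last one.
    chunks = [payload[i:i + 3].ljust(3, b"\x00")
              for i in range(0, len(payload), 3)]
    if len(chunks) > len(out):
        raise ValueError("payload too large for this layer")
    for i, chunk in enumerate(chunks):
        b = bytearray(struct.pack("<f", out[i]))
        b[0:3] = chunk  # overwrite the three least-significant bytes
        out[i] = struct.unpack("<f", bytes(b))[0]
    return out
```

Because only the low-order mantissa bytes are touched, a large fully-connected layer can absorb a sizeable payload while the model's outputs stay close to the originals.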
According to the paper, in this approach the malware is "disassembled" as it is embedded into the network's neurons, and later reassembled into functioning malware by a malicious receiver program, which can also be used to download the poisoned model via an update.
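The receiver side can be sketched in a few lines, assuming for illustration that three payload bytes were hidden in the low-order bytes of each 32-bit float weight (the function name and layout here are assumptions, not the paper's implementation):

```python
import struct

def extract_payload(weights, payload_len):
    """Hypothetical receiver-side sketch: walk the weights and
    reassemble the hidden bytes from the three low-order bytes
    of each 32-bit float, stopping once payload_len bytes are
    recovered."""
    data = bytearray()
    for w in weights:
        data += struct.pack("<f", w)[0:3]
        if len(data) >= payload_len:
            break
    return bytes(data[:payload_len])
```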
The malware can still be stopped if the target device verifies the model before launching it, according to the paper. It can also be detected using "traditional methods" like static and dynamic analysis.
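The verification step the paper recommends can be as simple as comparing a model file's cryptographic hash against a known-good value before loading it. A minimal sketch, assuming the defender already holds a trusted SHA-256 digest for the model:

```python
import hashlib

def verify_model(path, expected_sha256):
    """Minimal sketch of the mitigation: hash the model file and
    compare against a known-good SHA-256 digest before loading,
    so a tampered model is rejected."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in blocks so large model files don't sit in memory.
        for block in iter(lambda: f.read(8192), b""):
            h.update(block)
    return h.hexdigest() == expected_sha256
```

Any byte-level tampering with the weights, steganographic or otherwise, changes the digest and fails the check.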