
How to Steal an AI Model Without Hacking Anything

Researchers have developed a technique that captures a model's electromagnetic signature and compares it with signatures from other models run on the same type of processor.

Artificial intelligence models have an unexpected vulnerability: they can be cloned by anyone who can detect their electromagnetic emissions. Researchers at North Carolina State University described the technique in a recent study, though they stress they do not endorse attacks on AI systems. To conduct the research, they needed an electromagnetic probe, several pre-trained AI models, open-source software, and a Google Edge Tensor Processing Unit (TPU). The method involves monitoring the chip's electromagnetic emissions while the TPU is running inference.
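
Conceptually, the capture loop amounts to recording from an electromagnetic probe while the accelerator is busy running a model. Here is a minimal sketch of that loop; capture_em_trace and run_inference_on_tpu are hypothetical placeholders, not the researchers' actual tooling.

```python
import threading

import numpy as np


def capture_em_trace(duration_s: float, sample_rate_hz: float) -> np.ndarray:
    """Placeholder for an electromagnetic probe attached to a scope or SDR.

    A real implementation would arm the instrument and return the recorded
    samples; here we return noise so the sketch runs end to end.
    """
    n_samples = int(duration_s * sample_rate_hz)
    return np.random.default_rng().normal(size=n_samples)


def run_inference_on_tpu(model_path: str) -> None:
    """Placeholder for invoking a pre-trained model on the accelerator."""
    pass  # e.g. load the compiled model and call its inference routine


def record_signature(model_path: str, n_runs: int = 100,
                     duration_s: float = 0.01,
                     sample_rate_hz: float = 125e6) -> np.ndarray:
    """Average many EM traces captured while the same model runs repeatedly.

    Averaging over runs suppresses random noise so the model-specific
    emission pattern (the 'signature') stands out.
    """
    traces = []
    for _ in range(n_runs):
        # Run inference in a background thread so the accelerator is busy
        # while the probe is capturing.
        worker = threading.Thread(target=run_inference_on_tpu, args=(model_path,))
        worker.start()
        traces.append(capture_em_trace(duration_s, sample_rate_hz))
        worker.join()
    return np.mean(np.stack(traces), axis=0)
```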

In an interview with Gizmodo, Ashley Kurian, the study's lead author and a Ph.D. student at NC State, stated, "Designing and training a neural network is expensive. It's an intellectual property that companies own, and developing it requires significant time and resources. ChatGPT, for example, has millions of parameters. If someone steals it, it becomes their property for free. They don't have to pay for it, and they could potentially resell it."

Theft is a significant concern in AI, though the direction is usually reversed: it is typically AI developers who train their models on copyrighted content without the creators' permission, a practice that has prompted lawsuits as well as tools that help artists protect their work by disrupting AI generators.

In the study, the electromagnetic data provided a "signature" of the AI model's processing behavior, according to Kurian. To recover the model's hyperparameters, the researchers compared that signature to electromagnetic data captured while other AI models ran on the same type of chip. From this comparison they could determine the architecture and the key details needed to produce a copy of the AI model with 99.91% accuracy.
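
The comparison step can be pictured as template matching: the signature extracted from the target device is scored against reference signatures recorded from known models on the same kind of chip, and the closest match points to the architecture. The sketch below is a simplified illustration of that idea, not the algorithm from the paper.

```python
import numpy as np


def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine-style similarity between two equal-length, mean-centred traces."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0


def identify_model(unknown_sig: np.ndarray,
                   reference_sigs: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the reference model whose EM signature best matches the unknown one."""
    scores = {name: normalized_correlation(unknown_sig, sig)
              for name, sig in reference_sigs.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]


# Toy usage: pretend we recorded signatures for two known architectures on the
# same chip, then captured a noisy trace from the target device.
rng = np.random.default_rng(0)
refs = {
    "mobilenet_v2": rng.normal(size=4096),
    "efficientnet_lite": rng.normal(size=4096),
}
target = refs["mobilenet_v2"] + 0.1 * rng.normal(size=4096)
print(identify_model(target, refs))  # -> ('mobilenet_v2', ~0.99)
```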

To accomplish this, the researchers needed physical access to the chip, both to probe it and to run other models on it. They also worked directly with Google to help the company evaluate how vulnerable its chips are to such attacks.

Kurian suggested that models running on smartphones could also be extracted this way, but their compact design would make monitoring the electromagnetic signals more difficult.

Mehmet Sencan, a security researcher at AI standards nonprofit Atlas Computing, stated, "Side channel attacks on edge devices are common, but this technique for extracting entire model architecture hyperparameters is significant." Since AI hardware processes data in plaintext, Sencan explained, "Anyone deploying their models on edge or in unguarded servers would have to assume that their architectures can be extracted through extensive probing."

Cloning AI models through their electromagnetic emissions highlights a looming risk for the tech sector, since it threatens the intellectual property of companies that build and deploy artificial intelligence. The North Carolina State University findings underscore the need for stronger security measures around AI hardware before such vulnerabilities can be exploited.
