As more private data is stored and shared digitally, researchers are exploring new ways to protect data against attacks from bad actors. Current silicon technology exploits microscopic differences between computer components to create secure keys, but artificial intelligence (AI) techniques can be used to predict those keys and access data. Now, Penn State researchers have devised a way to make encrypted keys harder to decipher.
Led by Saptarshi Das, assistant professor of engineering science and mechanics, the researchers used graphene – a layer of carbon one atom thick – to develop a new low-power, scalable, reconfigurable hardware security device with significant resilience to AI attacks. They published their results today (May 10) in Nature Electronics.
“There have been more and more private data breaches recently,” Das said. “We have developed a new hardware security framework that could potentially be implemented to protect this data across industries and sectors.”
The device, called a physically unclonable function (PUF), is the first demonstration of a graphene-based PUF, the researchers say. The physical and electrical properties of graphene, along with the fabrication process, make the new PUF more energy efficient, scalable, and secure against the AI attacks that pose a threat to silicon PUFs.
The team first fabricated nearly 2,000 identical graphene transistors, which switch current on and off in a circuit. Despite being structurally identical, the transistors varied in electrical conductivity because of the randomness inherent in the fabrication process. While such variation is generally a drawback for electronic devices, it is a desirable quality for a PUF, and one not shared by silicon-based devices.
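The idea behind turning process variation into a key can be sketched in a few lines. This is a toy illustration, not the authors' actual circuit: device conductivities are drawn at random around a nominal value, and thresholding each device against the array median yields a device-unique bit string.

```python
import random

# Toy PUF sketch: each "transistor" ends up with a conductivity that
# varies randomly around a nominal value due to process variation.
def fabricate_array(n_devices, nominal=1.0, spread=0.05, seed=None):
    rng = random.Random(seed)
    return [rng.gauss(nominal, spread) for _ in range(n_devices)]

def derive_key(conductivities):
    # Compare each device against the array median: above -> 1, below -> 0.
    ordered = sorted(conductivities)
    median = ordered[len(ordered) // 2]
    return "".join("1" if g > median else "0" for g in conductivities)

# Two "chips" built by the same process still produce different keys,
# because the randomness is unique to each fabrication run.
chip_a = fabricate_array(64, seed=1)
chip_b = fabricate_array(64, seed=2)
key_a = derive_key(chip_a)
key_b = derive_key(chip_b)
```

Because the variation is physical rather than stored, the key never needs to be written to memory, which is part of what makes PUFs attractive for hardware security.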
After incorporating the graphene transistors into PUFs, the researchers modeled their characteristics to create a simulation of 64 million graphene-based PUFs. To test the PUFs' security, Das and his team used machine learning, a method that allows an AI to study a system and find new patterns. The researchers trained the AI on the graphene PUF simulation data, testing whether the AI could use this training to make predictions about the encrypted data and reveal insecurities in the system.
“Neural networks are very good at developing a model from a huge amount of data, even when humans cannot,” Das said. “We found that the AI could not develop a model, and it was not possible for it to learn the encryption process.”
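The attack test can be sketched as follows. This is a toy stand-in for the paper's neural-network analysis, with illustrative names and parameters: a simple model is trained on observed challenge-response pairs, then checked on unseen challenges. When responses come from independent device randomness, held-out accuracy should hover near chance (50%), meaning nothing was learned.

```python
import random

rng = random.Random(42)
N_BITS = 16

# Each challenge maps to an independent random response bit, fixed per
# device - a stand-in for unpredictable physical variation.
secret_table = {}
def puf_response(challenge):
    return secret_table.setdefault(challenge, rng.randrange(2))

challenges = [tuple(rng.randrange(2) for _ in range(N_BITS)) for _ in range(1000)]
responses = [puf_response(c) for c in challenges]

# Train a perceptron on the first 800 challenge-response pairs.
weights = [0.0] * N_BITS
bias = 0.0
for _ in range(20):  # epochs
    for c, r in zip(challenges[:800], responses[:800]):
        pred = 1 if sum(w * x for w, x in zip(weights, c)) + bias > 0 else 0
        err = r - pred
        weights = [w + 0.1 * err * x for w, x in zip(weights, c)]
        bias += 0.1 * err

# Evaluate on 200 held-out pairs: accuracy near 0.5 means the model
# failed to predict responses it had not seen.
correct = sum(
    (1 if sum(w * x for w, x in zip(weights, c)) + bias > 0 else 0) == r
    for c, r in zip(challenges[800:], responses[800:])
)
accuracy = correct / 200
```

A real attack would use a deep neural network and millions of pairs, as in the paper's simulation, but the pass/fail criterion is the same: held-out prediction accuracy no better than guessing.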
This resistance to machine learning attacks makes the PUF more secure because potential hackers could not use the breached data to reverse engineer a device for future exploitation, Das said. Even if the key could be predicted, the graphene PUF could generate a new key through a reconfiguration process requiring no additional hardware or component replacement.
“Normally, once the security of a system has been compromised, it is permanently compromised,” said Akhil Dodda, an engineering science and mechanics graduate student conducting research under Das' mentorship. “We developed a scheme in which such a compromised system could be reconfigured and used again, adding tamper resistance as another security feature.”
With these characteristics, along with the ability to operate over a wide temperature range, the graphene-based PUF could be used in a variety of applications. Further research may open avenues for its use in flexible and printable electronics, home appliances and more.
Co-authors of the paper include Dodda, Shiva Subbulakshmi Radhakrishnan, Thomas Schranghamer, and Drew Buzzell of Penn State; and Parijat Sengupta of Purdue University. Das is also affiliated with the Penn State Department of Materials Science and Engineering and the Materials Research Institute.
Story source:
Materials provided by Penn State. Original written by Gabrielle Stewart. Note: Content may be edited for style and length.