
Oscar Williams

News editor

The line between AI-powered and human cyber attacks is blurring, researchers warn

It is becoming increasingly difficult for security professionals to distinguish between AI-powered and human cyber attacks, according to a new report.

Researchers at the security vendor Darktrace have warned that AI-driven malware is starting to mimic the behaviour of human attackers in an effort to evade detection.

Meanwhile, human attack groups are beginning to leverage AI-driven implants more regularly, as they seek to scale up their campaigns, the researchers found.

“Ultimately, it is of little significance for the defender whether a majority of the hack was carried out by a human, AI, or both,” said Darktrace’s director of threat hunting Max Heinemeyer. “For blue teams, it will be increasingly difficult to differentiate between the two during investigations, as they will start to blend into each other.”

Darktrace’s report seeks to highlight how three real-world attacks identified by its researchers might pan out in the future, once attackers adopt AI. In one case, a victim’s computer was infected by “opportunistic, information-stealing malware”, Darktrace found. Such viruses, the researchers predict, will soon learn to adapt to their environment, observing normal operations before striking.

“Imagine a worm-style attack, like WannaCry, which, instead of relying on one form of lateral movement (e.g., the EternalBlue exploit), could understand the target environment and choose lateral movement techniques accordingly,” the researchers wrote. “If EternalBlue were patched, it could switch to brute-forcing SMB credentials, loading Mimikatz or perhaps install a key-logger to capture credentials.”

In another instance, Darktrace detected a piece of malware that used a number of autonomous techniques to remain hidden. Soon, such malware will use AI to instantly “learn what constitutes normal”, the researchers predict.

The final scenario involved malware stealing data from a medical technology company so slowly and in such small packets that the exfiltration was never detected. AI will only make this process more covert as the malware adapts to the environment, the researchers said.

“As soon as the malware no longer uses a hard-coded data volume threshold but is able to change it dynamically, based on the total bandwidth used by the infected machine, it will become much more efficient,” the report warns.
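Darktrace’s report does not include code, but the defensive idea it gestures at, learning each machine’s normal behaviour rather than relying on a fixed threshold, can be illustrated with a minimal sketch. The snippet below is purely hypothetical: the class and method names (OutboundBaseline, record, is_anomalous), the window size and the three-sigma rule are assumptions for illustration, not the vendor’s actual approach.

```python
from collections import deque
from statistics import mean, stdev


class OutboundBaseline:
    """Hypothetical per-host baseline of outbound data volume (bytes per interval).

    Illustrates 'learning what constitutes normal' instead of using a
    hard-coded exfiltration threshold.
    """

    def __init__(self, window: int = 288):
        # Rolling window of recent samples, e.g. 24 hours of 5-minute intervals.
        self.samples = deque(maxlen=window)

    def record(self, bytes_out: float) -> None:
        """Add one interval's observed outbound volume to the rolling window."""
        self.samples.append(bytes_out)

    def is_anomalous(self, bytes_out: float, sigma: float = 3.0) -> bool:
        """Flag an interval whose volume deviates strongly from the learnt baseline."""
        if len(self.samples) < 30:  # not enough history to judge yet
            return False
        mu, sd = mean(self.samples), stdev(self.samples)
        return bytes_out > mu + sigma * max(sd, 1.0)


# Usage sketch: feed per-interval byte counts from flow logs, alert on deviations.
baseline = OutboundBaseline()
for volume in [12_000, 15_500, 11_800] * 12:  # simulated normal traffic
    baseline.record(volume)
print(baseline.is_anomalous(14_000))   # False: within the learnt range
print(baseline.is_anomalous(250_000))  # True: sudden burst stands out
```

A simple statistical baseline like this is, of course, exactly what adaptive malware would try to blend into by keeping its transfers just below the learnt norm, which is the report’s argument for pitting defensive AI against it rather than static rules.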

It concludes: “Companies are already failing to combat advanced threats such as new strains of worming ransomware with legacy tools. Defensive cyber AI is the only chance to prepare for the next paradigm shift in the threat landscape when AI-driven malware becomes a reality.”