North Korean cybercriminals allegedly exploited the functionality of ChatGPT to craft counterfeit identity documents.

North Korean hackers allegedly employed ChatGPT to produce a deepfake military ID for a cyber assault on a South Korean target, cybersecurity experts claim.

North Korean Hackers Employed ChatGPT in the Production of Deepfake Identity Documents
In a startling revelation, a suspected North Korean hacking group, known as Kimsuky or APT43, has been accused of using OpenAI's ChatGPT to create a deepfake of a South Korean military ID document as part of a cyberattack targeting the country.

The attack, which was first reported by South Korean cybersecurity firm Genians in July, marks a significant leap in the use of AI by state-sponsored hacking groups in intelligence-gathering efforts.

Kimsuky has previously been linked to spying efforts against South Korean targets. The group is alleged to be part of a long-running effort by North Korea to gather information for the government in Pyongyang, reportedly using cyberattacks, cryptocurrency theft, and IT contractors.

Phishing targets in this spree included South Korean journalists, researchers, and human rights activists focused on North Korea. The US Department of Homeland Security has stated that Kimsuky is likely tasked by the North Korean regime with a global intelligence-gathering mission.

The latest cybercrime spree saw attackers leveraging AI throughout the hacking process, including malware development and impersonating job recruiters. In one instance, the email address used in the phishing attempt ended in .mli.kr, mimicking South Korea's legitimate .mil.kr military domain.

During their investigation, Genians researchers experimented with ChatGPT and found that its restriction against creating government IDs could be bypassed by altering the prompt. North Korean operatives are also reported to have used Anthropic's Claude Code tool to get hired and work remotely for US Fortune 500 tech companies.

This is not the first time North Korea has been accused of using AI in its cyber operations. In February, OpenAI banned suspected North Korean accounts from using its service to create fraudulent resumes, cover letters, and social media posts.

According to the US government, the funds generated from these cyberattacks help the regime evade international sanctions and finance its nuclear weapons programs. The number of victims breached in this spree wasn't immediately clear.

The use of AI in state-sponsored cyberattacks underscores the need for continued vigilance and the development of advanced cybersecurity measures to protect against such threats.
