Microsoft is warning that operatives linked to North Korea are using artificial intelligence tools to impersonate Western job applicants and secure remote technology positions, a scheme the company says can generate revenue for Pyongyang while creating potential security risks for employers.

According to Microsoft, the operation involves individuals posing as legitimate candidates for software development and information technology roles at Western companies. The applicants typically rely on fake or stolen identities when applying for remote positions, allowing them to pass through standard hiring processes while concealing their true location.

The company said the operatives increasingly rely on artificial intelligence to improve their chances of being hired. They use AI-generated headshots, fabricated résumés, and region-appropriate names and email addresses to make candidates appear legitimate to hiring managers. Microsoft also said some individuals use voice-altering software during interviews to disguise accents or speech patterns that could reveal their true background.

In some cases, face-swapping technology and other digital manipulation tools are reportedly used to further disguise the applicant’s identity during video calls.

Microsoft said the use of AI tools continues even after individuals are hired. According to the company, operatives may rely on AI systems to draft emails, translate documents, or generate software code while maintaining the appearance of an ordinary remote employee.

The company tracks one cluster of activity under the name “Jasper Sleet,” which it said uses artificial intelligence throughout the operation. Microsoft said the group “leverages AI across the attack lifecycle to get hired, stay hired, and misuse access at scale.”

Federal authorities have previously warned that North Korea’s remote IT worker operations are part of a broader effort to generate foreign currency for the regime while also potentially gaining access to sensitive corporate systems.

According to the U.S. Department of Justice, such schemes have reached more than 100 U.S. companies. Investigators say the operations often involve stolen identities, facilitators located in the United States, and “laptop farms” that allow overseas workers to appear as if they are operating from within the country.

The Federal Bureau of Investigation has also warned that some workers involved in these schemes have attempted to steal sensitive data or use access to corporate systems as leverage for extortion.

Microsoft said it disrupted roughly 3,000 Outlook and Hotmail accounts last year that were linked to suspected North Korean IT worker operations.

The company urged employers to strengthen identity verification for remote positions, including conducting video or in-person interviews and watching for signs of manipulated imagery. Microsoft advised hiring teams to look for visual distortions around facial features, such as the eyes, ears, or glasses, that could indicate AI-generated images or face-swapping technology.

Federal investigators have issued similar recommendations, encouraging businesses to conduct stronger identity verification, confirm employment history directly, and more closely review remote hiring arrangements.

Upwork, one of the online job platforms where technology roles are frequently posted, said it actively monitors its marketplace and takes aggressive steps to remove fraudulent accounts and other bad actors.