Warning About Shadow AI Microsoft Warns Against Autonomous Software Assistants

Source: dpa 2 min Reading Time

Related Vendors

In the new Cyber Pulse Report, Microsoft issues an urgent warning against the use of autonomous software assistants. The uncontrolled use of AI already poses a security risk for companies today.

According to the Microsoft study, 29 percent of employees use AI agents without the approval of their IT department or superiors.(Picture: © immimagery - stock.adobe.com)
According to the Microsoft study, 29 percent of employees use AI agents without the approval of their IT department or superiors.
(Picture: © immimagery - stock.adobe.com)

Microsoft warns against the uncontrolled use of autonomous software helpers with artificial intelligence. In its latest Cyber Pulse Report, which was published in the run-up to the Munich Security Conference, researchers from the software company found that AI helpers are already being used for programming in over 80 percent of the largest companies (Fortune 500). However, very few companies have clear rules for the use of AI. The rapid spread poses incalculable risks. A lack of overview by those responsible and "shadow AI" open the door to new methods of attack.

Managers Often Ignorant

"Shadow AI" refers to the use of artificial intelligence applications by employees without the company's IT or security department being aware of it or having officially approved it. Employees use AI tools or agents from the internet on their own authority, i.e. autonomous computer programs, to complete their tasks more quickly without anyone in the company hierarchy being informed.

The Microsoft report warns of a growing discrepancy between innovation and security. While AI use is growing explosively, not even half of companies (47 percent) have specific security controls for generative AI. And 29 percent of employees are already using unauthorized AI agents for their work. This creates blind spots in corporate security.

Hasty Introduction Increases Risks

According to the Microsoft experts, the risk increases if companies do not take enough time to introduce AI applications. "Rapid deployment of AI agents can undermine security and compliance controls and increase the risk of shadow AI," the report states. Malicious actors could exploit the permissions of agents and turn them into unintended double agents.

The authors of the study emphasize that these are not theoretical risks. Recently, Microsoft's Defender team discovered a fraudulent campaign in which several actors used an AI attack technique known as "memory poisoning" to permanently manipulate the memory of AI assistants—and thus the results.

Limit Access to Data

The report recommends several countermeasures to minimize the risk when using AI applications. The software assistants with AI should only have access to the data that they absolutely need to solve their task. In addition, companies should set up a central register to see which AI agents exist in the company, who they belong to and what data they access. Unauthorized agents should be identified and isolated.

Subscribe to the newsletter now

Don't Miss out on Our Best Content

By clicking on „Subscribe to Newsletter“ I agree to the processing and use of my data according to the consent form (please expand for details) and accept the Terms of Use. For more information, please see our Privacy Policy. The consent declaration relates, among other things, to the sending of editorial newsletters by email and to data matching for marketing purposes with selected advertising partners (e.g., LinkedIn, Google, Meta)

Unfold for details of your consent