Login

Security expert says that creating your own Microsoft Copilot Chatbot is easy, but making it secure and safe is almost impossible.

We all could use our own custom-built chatbot. Microsoft's Copilot studio is a great tool that allows those who are less technically minded (or don't know Fortran) to build their own chatbot. The idea is that most businesses and organizations can easily create a chatbot using their internal documents and data.

Imagine a game developer using a chatbot for gamers to ask questions on everything from how to finish a game, to applying the best settings, and fixing technical problems. But there is always a catch.

Zenity, a specialist in AI security, says that Copilot Studio, and the chatbots created by it, are a security nightmare. (via The Register). Zenity CTO Michael Bargury presented a session at the Black Hat Security Conference, examining the horrors of allowing Copilot to access data to create a bot.

It seems that Copilot Studio's default settings are inadequate. The danger is that, by using the super-easy Copilot Studio to create an extremely useful tool that customers or staff can use to query in natural language, you may find that it opens a huge door to exploits.

Bargury demonstrated how a bad actor can place malicious code in a harmless-looking email, instruct the Copilot bot to "inspect" it, and--presto--malicious code injection achieved.

Another example was Copilot displaying a fake Microsoft login screen where the victim's credentials were harvested. This was displayed within the Copilot bot itself (via TechTarget).

Zenity also claims that the average large company in the US has 3,000 bots already up and running. It is frightening to think that 63% of these bots are easily discoverable online. If this is true, it means that the average Fortune 500 company has around 2,000 bots willing to give out confidential corporate information.

Bargury stated that "we scanned the Internet and found tens thousands of these bots." He claims that Copilot Studio's default settings automatically published the bots on the web, without requiring authentication. Microsoft has since fixed the issue after Zenity reported the problem, but it does not help bots built before the update.

Bargury: "There's an issue fundamental here." "When you allow AI to access data, this data becomes an attack surface that can be used for prompt injection." Bargury says that chatbots that are publicly accessible are inherently unsecure.

There are two main problems. On the other hand, the bots require a certain degree of autonomy and flexibility in order to be useful. This is a difficult problem to fix. Microsoft also seems to have made some obvious mistakes.

This last issue is perhaps not surprising, given the mess that was made by the Windows Copilot Recall tool. It involved taking screenshots of the user's activity and storing them without any protection.

Microsoft's response to all of this was a little salty, according to the Register.

Michael Bargury's work in identifying these techniques and responsibly reporting them through a coordinated release is greatly appreciated. We are investigating the reports and improving our systems in order to identify and mitigate such threats.

"Similarly to other post compromise techniques, these methods require a prior compromise of a computer system or social engineering. Microsoft Security offers a robust set of protections that customers can use to mitigate these risks. We're also committed to improving our safety mechanisms in line with the rapid evolution of technology.

As with so many other aspects of AI, security will likely be a minefield full of unintended effects and collateral damage. It feels like we are a long way away from a reliable, safe AI that does only what we ask.

Interesting news

Comments

Выбрано: []
No comments have been posted yet