
Microsoft has introduced a new feature called “computer use” in Copilot Studio, allowing AI agents to interact with websites and desktop applications just like a human user.
Copilot Studio agents can now perform actions such as clicking buttons, selecting menus, and typing into fields on the screen, according to Microsoft. Through computer use, they can interact with any system that has a graphical user interface (GUI).
“This allows agents to handle tasks even when there is no API available to connect to the system directly. If a person can use the app, the agent can too,” Charles Lamanna, corporate vice president at Business & Industry Copilot, wrote in a blog post.
According to Microsoft, the system is resilient to interface changes and can adapt on the fly, reducing the need for constant reprogramming.
“Computer use adapts to changes in apps and websites automatically. It adjusts in real time using built-in reasoning to fix issues on its own, so work continues without interruption,” Lamanna explained.
The feature mimics real user behavior, making it possible to automate workflows in apps and websites that were traditionally off-limits to bots. Earlier this month, Microsoft introduced a similar capability called Actions to its consumer Copilot.
Similar features have been launched by Anthropic for its Claude models – also under the name “computer use” – and by OpenAI, which introduced Operator in January.
While AI agents that can use a computer on their own may boost productivity, cybersecurity experts warn that they can also be abused to carry out cyberattacks. A threat-hunting team at Symantec recently demonstrated how Operator could be used to craft a convincing phishing attack.