In 2024, Anthropic tested out a feature called “computer use,” a tool that could “try to manipulate a computer desktop environment,” clicking and scrolling on a user’s behalf. At the time, LLM use was mostly limited to what you could accomplish in direct conversation with chatbots, which had little access to outside tools (Claude didn’t even have access to the web). Giving Claude the ability to move a cursor, input text, and interact with an operating system and apps offered a much broader vision of AI capability than the closed-circuit chatbots of the day, such as ChatGPT: the possibility of automating work routines and engaging with the digital world by emulating a person. A self-driving computer, in other words.
As Anthropic acknowledged at the time, the feature wasn’t really ready for productive use: It was genuinely crazy to watch work but also slow, error-prone, and quick to lose track of what it was doing. It was intended instead as a compelling demo and a way to get “feedback from developers” about the sorts of things they wanted their AI to do. Not quite two very eventful years later, the company is again focusing on computer use, this time with more credibility, more capability, and quite a bit more attention:
A lot has changed since Anthropic first debuted this feature. Models have become far more capable, and AI scaling has followed a surprising path that wasn’t yet widely understood in 2024. As a result, “agentic” AI — the industry term for tools that can execute tasks on users’ behalf — has become more viable, but not quite in the way anticipated back then, or even by more recent software like OpenAI’s self-clicking Atlas web browser. Instead, it has taken the form of power-user tools like Claude Code, which has broadly reset the terms of the AI race and is in the process of upending software development as we know it. It works not by taking control of users’ screens but by asking for more direct access to their codebases, developer tools, data, and the web, no big use-my-Mac loophole necessary.
Next, AI firms are targeting the far bigger share of the economy that involves people sitting in front of keyboards. They’re also trying to capture some of the recent energy around software like OpenClaw, which lets people build their own agents on personal computers, controlling them by chat and linking them to email, social media, and e-commerce accounts, among many other things. Anthropic’s Cowork, a sort of Claude Code for people who work in spreadsheets and make a lot of slide decks, is a step in that direction, providing a way to connect its AI with the sorts of files, data, and workflows common in computer-bound desk jobs.
It’s also where Anthropic is testing its new computer-use feature, alongside a tool that lets you take charge of your agent remotely through the Claude app. Much of what you’ll see this time around is the same but better: Here, again, you can watch Claude use your desktop, now more fluidly and competently than it could in 2024, and with the option of remote control from your phone and the ability to schedule tasks (“Send me a morning digest of Slack and email messages”; “Suggest some times in my daily calendar where I might be able to go for a 30-minute run,” etc.). It’s enough to test out what current AI can do with your work materials and to sketch a preliminary plan for how improving agents might help with or automate tasks you’re familiar with.

But commanding a chatbot to take over your screen and use a human software interface still feels like a weird and inefficient way to use a computer. It also requires granting wild levels of access to your chosen AI provider, giving these companies as much visibility into your life as, well, a person sitting at your actual computer or using your phone. Anthropic says the tool attempts to screen for “potentially destructive actions” — it refused, for example, to remotely empty my Mac’s trash. But these safety features are primitive, as is the Cowork integration itself, which, in my brief testing, had trouble navigating my files, connecting its own browser extension, and following through on commands. Where Claude Code is shockingly fluent and able to help troubleshoot its own problems on the way to making functional code, Cowork often feels a bit lost on your machine, thwarted by permissions, apps, and tasks that it isn’t quite trained for.
Yet. This time around, AI computer use doesn’t just feel like a transitional technology — it’s hard to imagine that the future of AI is just models getting better at pretending to be humans sitting at the same computers we use today — but has explicitly become one. (In the meantime, though, it’s a funny consumer-tech detour: Users moved from desktops to laptops to mobile devices and back to … desktops controlled by mobile devices?) As Anthropic puts it, you’ll only find yourself asking Claude to take over your screen when there’s “no connector for the tool you need.” That is, when the tools you want Claude to use — a productivity app, a piece of SaaS software you have to use for work — haven’t made themselves directly available to agentic AI already (or when you need remote access to software that you can’t use on your phone or as a website). When Cowork can plug into your actual workflows more directly, it works much better, giving office workers a taste of the delirious enthusiasm surrounding coding tools in the last few months. That’s where these tools seem to be going. Apps that take over your screen are a way to get there.
In 2026, a growing number of firms have already gotten on board. Claude can connect to platforms like Slack, Atlassian, Notion, and Asana; it has plugins for productivity tools like Figma and Canva; it can hook into a range of cloud services and data platforms; it can access, in limited ways, payment tools like PayPal and Stripe. These are companies that have elected, to different degrees, to make it easier for Claude (and some other models) to access their tools and data, investing in, or at least allowing for, a future where they increasingly deal with bots rather than people and where, in the extreme, some customers no longer see their interfaces at all. They’re the ones preparing for an updated, post–Claude-Code theory of an agentic web: not just that AI will be able to perform a wide range of labor with the help of outside tools, but that it behooves the rest of the digital economy to pitch in rather than resist, either because they sense opportunity or because they’re fearful of getting left behind.
It’s a way, in other words, to see what users want to do with more capable AI agents that they can’t do yet, at least not conveniently. It’s market research for agentic automation, identifying the gaps in what Claude can do on its own and putting pressure on outside firms to help fill them, in turn making Claude more powerful and helping to solidify its reputation as the AI firm for power users and work. It’s a way for Anthropic, on its way to automating as much of the white-collar economy as it can — while warning, guiltily, that it is attempting to automate as much of the white-collar economy as it can — to enlist its most enthusiastic users to identify its next targets, and for those targets to then decide whether they want to play along. As a literal vision of the future, computer use remains as bluntly effective as ever. (It can type and click? That’s what I do!) It’s also something that all but the most enthusiastic, early-adopter users are unlikely to use or see outside of a demo: a stopgap between agentic AI that has to pretend to be a person to use software and agentic AI that works more like, well, software.

