
OpenAI shipped workspace agents yesterday. If you’re on ChatGPT Business, Enterprise, or Edu, you can already try them. And unlike a lot of “AI agent” announcements that amount to a chatbot with a new label, this one actually does something different: these agents run in the cloud, persist between sessions, and plug into tools your team already uses.

I’ve been poking at them since the announcement dropped, and here’s my honest take — what works, what’s half-baked, and whether this changes anything if you’re already using Zapier or Make.

What Workspace Agents Are (and Aren’t)

Think of workspace agents as the next step past custom GPTs. Custom GPTs let you configure a chatbot with specific instructions and knowledge. Workspace agents do that, but they can also act — execute code, connect to external services, run on a schedule, and keep working after you close the tab.

The “powered by Codex” part matters. These agents can write and run code in a sandboxed cloud environment, which means they can do things like pull data from an API, transform it, and push the results somewhere else. That’s a fundamentally different capability than “answer questions based on these uploaded PDFs.”

Here’s what they’re not: a replacement for your entire automation stack. They can’t handle the kind of complex branching workflows that Make excels at, and they don’t have anywhere near Zapier’s 7,000+ app integrations. They’re more like a capable assistant that sits inside ChatGPT and can reach out to a handful of connected tools.

The Feature Set That Matters

Cloud Execution

This is the big one. Workspace agents run asynchronously in the cloud. You can kick off a task, close your laptop, and come back to find the results waiting for you. Previous ChatGPT features required you to keep the conversation open. That constraint is gone.

The cloud execution is Codex-powered, which means the agent can write Python, bash scripts, or whatever it needs to process your request. Ask it to analyze a CSV, scrape pricing data, or generate a formatted report — it spins up an environment, runs the code, and delivers the output.
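To make that concrete, here's a sketch of the kind of throwaway script such an agent might generate for a "summarize this CSV" request. The column names and sample data are hypothetical — the point is that the agent writes and runs ordinary Python, not that it uses this exact code:

```python
# Hypothetical example of code an agent might generate for
# "summarize revenue by region from this CSV" (columns assumed).
import csv
import io
from collections import defaultdict

SAMPLE = """region,revenue
EMEA,1200
EMEA,800
AMER,2500
APAC,600
"""

def revenue_by_region(csv_text: str) -> dict[str, float]:
    """Total the revenue column, grouped by region."""
    totals: defaultdict[str, float] = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["region"]] += float(row["revenue"])
    return dict(totals)

print(revenue_by_region(SAMPLE))
# {'EMEA': 2000.0, 'AMER': 2500.0, 'APAC': 600.0}
```

Nothing exotic — which is the point. The value is that nobody on your team had to write it.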

Slack Integration

You can deploy workspace agents directly into Slack channels. Someone drops a question in #sales-ops, the agent picks it up, does its thing, and responds in-thread. No switching to ChatGPT, no copy-pasting context back and forth.

There are some caveats here. If your Slack workspace isn’t on Business+ or Enterprise+, search defaults to keyword matching rather than semantic search. And rate limits can bite you — heavy usage with agent mode can exhaust your per-user Slack API quota before hitting workspace-wide limits. So if you’re planning to deploy this for a team of 50, do some napkin math on API calls first.
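That napkin math can be literal. Every number below is an assumption you'd replace with your own — the shape of the estimate is what matters:

```python
# Back-of-envelope estimate of Slack API usage for an agent deployment.
# All inputs are assumptions — plug in your own numbers.
team_size = 50            # users who can invoke the agent
queries_per_user_day = 5  # assumed average invocations per user
calls_per_query = 4       # assumed: read thread, search, post, update

daily_calls = team_size * queries_per_user_day * calls_per_query
print(f"~{daily_calls} Slack API calls/day")
print(f"~{daily_calls // 60} calls/minute if compressed into one busy hour")
```

If the peak-hour number looks anywhere close to your per-user quota, stagger the rollout.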

The European availability gap is also worth flagging: Plus and Pro users in the EEA, UK, and Switzerland can’t use the ChatGPT Slack app yet. Business and Enterprise plans aren’t affected, but if you have team members across regions, verify access before building workflows around it.

Scheduled Triggers

Agents can run on a schedule. Set one up to generate a weekly metrics summary every Friday morning, compile a competitor news digest every Monday, or check your support queue daily and flag anything that’s been open longer than 48 hours.

This is where the “workspace” part of the name earns its keep. You’re not building these for yourself — you’re building them for your team. One person configures the agent, sets the schedule, and everyone in the workspace benefits.

App Connectors

Workspace agents can connect to external tools — CRM systems, IT ticketing platforms, communication tools. The connector ecosystem is still young, but OpenAI obviously wants these agents to be the glue between your existing SaaS stack.

Right now, the list of native connectors is slim compared to what Zapier or Make offer. But the ability to write and execute code means the agent can hit any REST API directly. It’s more work than dragging a connector in Zapier, but it’s also more flexible.
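The "no connector, so call the API yourself" pattern looks roughly like this. The endpoint, token, and response shape here are hypothetical — substitute whatever your actual tool exposes:

```python
# Sketch of calling a REST API directly when no native connector exists.
# The ticketing endpoint and JSON shape are hypothetical.
import json
import urllib.request

def fetch_open_tickets(base_url: str, token: str) -> list[dict]:
    """GET open tickets from a (hypothetical) ticketing API."""
    req = urllib.request.Request(
        f"{base_url}/api/tickets?status=open",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["tickets"]

def summarize(tickets: list[dict]) -> str:
    """Collapse a ticket list into a one-line, Slack-friendly summary."""
    urgent = [t for t in tickets if t.get("priority") == "urgent"]
    return f"{len(tickets)} open tickets, {len(urgent)} urgent"

print(summarize([{"priority": "urgent"}, {"priority": "low"}]))
# 2 open tickets, 1 urgent
```

In practice you describe this in natural language and the agent writes something equivalent — but knowing what it's generating helps when you need to debug it.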

Building Your First Workspace Agent: A Walkthrough

Setting one up is straightforward if you’re already on a supported plan.

Step 1: Access the Agent Builder

Head to chatgpt.com/codex (yes, it’s under the Codex umbrella now). You’ll see the option to create a new workspace agent from the dashboard. If you don’t see it, check that your admin has enabled the research preview in workspace settings.

Step 2: Define the Agent’s Role

Give it a name, description, and instructions. This is similar to configuring a custom GPT, but you’ll also specify:

  • What tools it can access (code execution, web browsing, connected apps)
  • Who in the workspace can use it
  • Whether it should be available in Slack

Be specific in your instructions. “Help with sales” is useless. “When asked about a prospect, look up their company in our CRM, pull their last 3 interactions, and summarize the engagement status in 2-3 sentences” gives the agent something to work with.

Step 3: Connect Your Tools

If you want the agent to interact with external services, you’ll need to set up connections. For Slack, there’s a native integration. For other tools, you’ll either use available connectors or write code that calls their APIs.

If you’re connecting a GitHub repository (useful for agents that interact with code), open environment settings and follow the repo connection flow. You can guide the agent’s behavior with an AGENTS.md file in your repository — similar to how you’d configure a .cursorrules or CLAUDE.md file for other AI coding tools.
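There's no fixed schema for AGENTS.md — it's free-form guidance the agent reads. A minimal example might look like this (contents illustrative, including the command names):

```markdown
# AGENTS.md — guidance for agents working in this repo (illustrative)

## Conventions
- Python 3.11, formatted with black; run `make lint` before proposing changes.
- Never commit directly to `main`; use a branch named `agent/<task>`.

## Boundaries
- Treat `infra/` as read-only — flag any changes there for human review.
- Tests live in `tests/`; run them and include the results in your summary.
```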

Step 4: Set Up Triggers (Optional)

Configure when the agent should run:

  • On-demand: Someone messages it in ChatGPT or Slack
  • Scheduled: Runs at specified intervals (daily, weekly, custom cron)
  • Event-driven: Responds to messages in specific Slack channels
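For the scheduled option, assuming the builder accepts conventional five-field cron syntax (minute, hour, day-of-month, month, day-of-week), the examples above translate to:

```
0 9 * * 5      # every Friday at 09:00 — weekly metrics summary
0 8 * * 1      # every Monday at 08:00 — competitor news digest
0 7 * * 1-5    # weekdays at 07:00 — support-queue age check
```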

Step 5: Test Before Sharing

Run through your key use cases manually before sharing with the team. Agents can behave differently with different inputs, and you don't want the team's first impression to be a hallucinated sales report.

Use Cases That Actually Work

After spending time with workspace agents, here are the scenarios where they pull their weight versus where they fall short.

What Works Well

Meeting prep automation. Set up an agent that, given a meeting invite or attendee list, pulls relevant context from your CRM, recent email threads, and Slack conversations. It compiles a one-page brief with talking points. I’ve been doing this manually for years. Automating it saves 15-20 minutes per meeting, which compounds fast.

Recurring reports. Weekly status reports, metrics digests, competitive monitoring — anything where the format is consistent but the data changes. Schedule the agent to generate and deliver these automatically. The code execution capability means it can pull from APIs, calculate deltas, and format everything nicely.
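The delta math behind a metrics digest is simple enough to sketch. Metric names and values here are made up for illustration:

```python
# Sketch of week-over-week delta math for a recurring metrics report.
# Metric names and values are illustrative.
def week_over_week(current: dict[str, float],
                   previous: dict[str, float]) -> dict[str, str]:
    """Return percent change per metric, formatted for a report."""
    out = {}
    for name, now in current.items():
        before = previous.get(name)
        if not before:
            out[name] = "n/a (no prior data)"
        else:
            out[name] = f"{(now - before) / before:+.1%}"
    return out

this_week = {"signups": 130, "churned": 9}
last_week = {"signups": 100, "churned": 10}
print(week_over_week(this_week, last_week))
# {'signups': '+30.0%', 'churned': '-10.0%'}
```

Wire that output into a scheduled Slack post and you've replaced a Monday-morning chore.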

Triage and routing. Deploy an agent in your support Slack channel that reads incoming requests, categorizes them, and routes to the right team. Not as a replacement for a proper ticketing system, but as a first-pass filter that reduces the noise.

Onboarding assistance. Create an agent loaded with your company docs, processes, and FAQs. New hires ask it questions instead of pinging random colleagues. This is basically what custom GPTs were good at, but with the added ability to take action — like creating a Jira ticket for IT access requests.

Where It Falls Short

Complex multi-step workflows. If your automation has branching logic, error handling, retries, and conditional paths, you still want Make or n8n. Workspace agents are smart, but they’re not workflow engines.

High-volume data processing. The agent runs in a sandboxed environment with resource limits. Crunching through 100,000 rows of data isn’t its sweet spot. Use a dedicated data pipeline for that.

Anything requiring guaranteed reliability. These agents are in “research preview.” That means things will break. Don’t build mission-critical processes on them yet.

Pricing: Free Now, Pay Later

Workspace agents are free to use through May 6, 2026. After that, OpenAI moves to credit-based pricing. The specific rate card hasn’t been published yet, which is… a pattern with OpenAI. They launch features, let people get hooked, and then announce pricing once they have usage data.

Here’s what we know about the plan requirements:

  • ChatGPT Business ($25/user/month): Includes workspace agents access
  • ChatGPT Enterprise (custom pricing): Full access with admin controls
  • ChatGPT Edu and Teachers: Also included

The credit-based model for agent execution will likely be separate from your subscription cost. So budget for both: the per-seat subscription plus whatever the compute costs end up being. If your agents are running code, calling APIs, and executing on schedules, those credits will add up.

My advice: use the free preview period to figure out which use cases actually stick. Don’t build ten agents — build two or three, use them seriously for two weeks, and then decide whether the post-May 6 costs are justified.

Workspace Agents vs. Zapier AI vs. Make

This is the question everyone’s asking, and the honest answer is they solve different problems.

Zapier AI

Zapier dominates in integration breadth — 7,000+ app connections, most of which work out of the box. Its trigger-action model is dead simple. If your automation is “when X happens in App A, do Y in App B,” Zapier is still the fastest path. Pricing starts at $19.99/month for 750 tasks.

Where Zapier struggles: complex logic. Its linear automation model gets clunky when you need branching, loops, or conditional handling. And Zapier’s AI capabilities, while improving, are bolted onto a workflow engine rather than native to the experience.

Make

Make is the power user’s choice. Its visual scenario builder handles branching logic, parallel paths, routers, and iterators natively. At $9/month for 10,000 operations, it’s significantly cheaper per action than Zapier. If you need sophisticated automation with data transformation, Make is hard to beat.
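Using the list prices quoted above, the per-action gap is easy to quantify — with the caveat that a Zapier "task" and a Make "operation" aren't strictly equivalent units, so treat this as order-of-magnitude only:

```python
# Napkin math on per-action cost at the entry tiers quoted above.
# Zapier tasks and Make operations aren't identical units of work.
zapier_cost_per_task = 19.99 / 750   # $/task at Zapier's starter tier
make_cost_per_op = 9 / 10_000        # $/operation at Make's tier
print(f"Zapier: ${zapier_cost_per_task:.4f}/task")  # ~$0.0267
print(f"Make:   ${make_cost_per_op:.4f}/op")        # ~$0.0009
print(f"Ratio:  ~{zapier_cost_per_task / make_cost_per_op:.0f}x")  # ~30x
```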

The downside: steeper learning curve, and the AI features (still in beta) aren’t as mature as what OpenAI offers natively.

Where Workspace Agents Win

Workspace agents beat both Zapier and Make in one specific area: natural language interaction. You don’t define automations through a visual builder or trigger-action pairs. You describe what you want, and the agent figures out how to do it — including writing code on the fly.

This matters for teams that don't have a dedicated automation person. A marketing manager probably isn't going to build a Make scenario, but they can tell a workspace agent "every Monday morning, compile last week's blog performance from Google Analytics and post a summary in #content-team."

The Decision Framework

  • Use Zapier if you need simple, reliable connections between specific apps and don’t want to think about it.
  • Use Make if you need complex workflows with branching logic and high data volumes at a reasonable cost.
  • Use workspace agents if your team lives in ChatGPT/Slack and you want AI-native automation where natural language is the interface.
  • Use all three if you’re like most companies. They’re not mutually exclusive. Zapier and Make even have MCP hooks that can integrate with OpenAI’s agent ecosystem.

What’s Missing (and What to Watch For)

A few things that would make workspace agents significantly more useful:

Transparency on pricing. The May 6 deadline is two weeks away, and we still don’t have a rate card. Hard to plan adoption without knowing costs. OpenAI, if you’re reading this: publish the pricing before the free period ends, not during the last week.

More native connectors. The “write code to call the API” approach works for developers, but most business users setting up agents won’t be comfortable writing API calls. The connector library needs to grow fast.

Better debugging. When an agent fails or produces unexpected results, the feedback loop is opaque. There’s no step-by-step execution log like you’d get in Make. You just get the output, and if it’s wrong, you’re guessing at why.

Audit and compliance tooling. For Enterprise customers, knowing exactly what data an agent accessed, what code it ran, and what external calls it made is table stakes. The current observability is thin.

Should You Try Them?

Yes, but with the right expectations. Workspace agents aren’t going to replace your automation stack overnight. They’re a new tool in the kit — one that’s particularly good at bridging the gap between “I wish someone would automate this” and “nobody on the team knows how to set up automation.”

If your team is already on ChatGPT Business or Enterprise, the free preview period is a no-brainer. Pick one recurring task that eats time every week — a status report, a data pull, a triage workflow — and build an agent for it. Two weeks is enough to know whether the productivity gain is real or just novelty.

The real test comes after May 6, when the credit meter starts running. That’s when we’ll find out which teams keep their agents and which ones quietly go back to doing things manually.