yeonghyeon · April 29, 2026

Building a ChatGPT App with the Apps SDK: MCP, OAuth, and Notes from Submission


Hi, I'm yeonghyeon, leading 3Min API development.

Over the past few weeks I worked through the ChatGPT Apps registration for 3Min API. We hit one rejection, fixed the issue, and got approved — and along the way submitted the same codebase to Claude Connectors. This post is a write-up for fellow developers heading down the same path. I've put together what you need to implement, where rejections tend to happen, and what actually goes on during review in one place.

I've intentionally left out anything tied to a specific cloud or deployment stack. ChatGPT Apps registration follows the same flow no matter what's underneath.

Overview — ChatGPT Apps and the MCP behind it

ChatGPT Apps is OpenAI's extension model that lets you call external services from inside ChatGPT. Users add your app on the ChatGPT screen, and from inside the conversation flow they can directly invoke the tools your backend exposes.

The foundation is Model Context Protocol (MCP). MCP is an open spec for connecting LLMs to external tools — it defines how tools are listed and how they get called safely. ChatGPT Apps is one client running on top of MCP, and Anthropic's Claude Connectors sits on the same spec. In other words, get one MCP server right and you've covered both ecosystems. We submitted the same codebase to both directories, and the tool definitions and auth flow are shared as-is.

Why we became a tool instead of building our own AI chatbot

We considered building our own chatbot at first. Stepping back, though, our users had already poured their business context into ChatGPT or Claude. If we built yet another chatbot, the same person would have to explain the same backstory all over again.

So we changed direction. We add 3Min API as a tool to the AI you're already using. The same ChatGPT you've been talking to about your business can now say "I'll set that up in 3Min API for you" and actually get it done. We're still considering our own chatbot, but tooling comes first.

Implementation

SDK choice — Standard MCP SDK only

OpenAI does provide a dedicated ChatGPT Apps SDK, but we didn't go that route. Our server is built on a single dependency: the standard MCP SDK (@modelcontextprotocol/sdk). The reasoning is simple — the same server has to be recognized by both ChatGPT Apps and Claude Connectors. Sticking to the standard SDK keeps the code from forking when one ecosystem updates, and lets us submit the same build to both directories.

OpenAI's specs are documented at Build your MCP server and Connect from ChatGPT. Both describe how the ChatGPT client recognizes and calls your server from the client side — implement a standard MCP server faithfully and OpenAI's requirements fall into place naturally.

Auth — OAuth 2.1 as the default, API Key alongside on the same server

For ChatGPT Apps, the practical answer is OAuth 2.1, single track. OpenAI's official guide states it clearly: "ChatGPT does not support … custom API keys or customer-provided mTLS certificates." So if you're submitting to the ChatGPT Apps directory, OAuth 2.1 done right is enough (a noauth anonymous mode is allowed for tools that don't touch user data, but for tools like ours that work with someone's endpoints and logs, OAuth is required).

That said, if you want the same MCP server to work from Cursor, Claude Code, Gemini CLI and other local clients, the situation is different. Local environments are places where users can hold their own keys directly, and the OAuth flow becomes heavy there. So we went with OAuth 2.1 as the default auth for ChatGPT Apps and Claude Connectors, plus an additional API Key path layered on the same server. The same tool definitions work under both auth modes.

OAuth 2.1 — Default for ChatGPT Apps and Claude Connectors

Hosted clients can't expect users to manage keys directly, and OpenAI's policy leaves no other option. The following standards have to be in place for clients to recognize the server properly.

  • RFC 7591 — OAuth Dynamic Client Registration. Clients register themselves.
  • RFC 8414 — Authorization Server Metadata. Auth endpoints exposed at /.well-known/oauth-authorization-server.
  • RFC 9728 — Protected Resource Metadata. Metadata about the MCP resource itself.
  • RFC 7636 PKCE — S256 challenge that prevents authorization code interception.
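As a concrete reference, this is roughly the RFC 8414 document a client expects at /.well-known/oauth-authorization-server. Field names come from the RFC; the URLs are placeholders, not our actual endpoints:

```typescript
// Sketch of RFC 8414 Authorization Server Metadata, served as JSON at
// /.well-known/oauth-authorization-server. All URLs are placeholders.
const authServerMetadata = {
  issuer: "https://auth.example.com",
  authorization_endpoint: "https://auth.example.com/oauth/authorize",
  token_endpoint: "https://auth.example.com/oauth/token",
  // RFC 7591 Dynamic Client Registration endpoint
  registration_endpoint: "https://auth.example.com/oauth/register",
  response_types_supported: ["code"],
  grant_types_supported: ["authorization_code", "refresh_token"],
  // RFC 7636: advertise S256 so clients send a PKCE challenge
  code_challenge_methods_supported: ["S256"],
  token_endpoint_auth_methods_supported: ["none"],
};

console.log(JSON.stringify(authServerMetadata, null, 2));
```

Serve the RFC 9728 Protected Resource Metadata the same way at /.well-known/oauth-protected-resource, pointing back at this issuer.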

The flow is the standard one. Client dynamic registration → user signs in and an authorization code is issued (single-use, 10-minute expiry) → token exchange with PKCE verification → access token (1 hour) plus refresh token (30 days, rotation). When access tokens expire, refresh tokens renew them, and the refresh token itself is rotated each time to reduce theft risk.
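The PKCE verification at the token exchange step can be sketched with Node's built-in crypto module (the function names here are ours, not from any SDK):

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE (RFC 7636, S256): the client sends base64url(SHA-256(verifier)) as
// the code_challenge at the authorize step, then proves possession of the
// verifier at the token exchange step.
function computeChallenge(verifier: string): string {
  return createHash("sha256").update(verifier).digest("base64url");
}

// Server-side check at the token endpoint: the challenge stored alongside
// the authorization code must match the verifier presented with it.
function verifyPkce(storedChallenge: string, verifier: string): boolean {
  return computeChallenge(verifier) === storedChallenge;
}

// Example round trip with a random verifier.
const verifier = randomBytes(32).toString("base64url");
const challenge = computeChallenge(verifier);
console.log(verifyPkce(challenge, verifier)); // true
```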

The part we paid the most attention to was guaranteeing that each authorization code is atomically single-use. Authorization codes are a prime interception target, so we made sure no race could happen at the token exchange step by claiming the code in a single update statement.
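A minimal sketch of that single-use claim, assuming an in-process store for illustration; in a real deployment the claim is the one conditional SQL statement shown in the comment, which gives the same guarantee across replicas:

```typescript
// Sketch of single-use authorization-code claiming. Against a real database
// the whole claim is one conditional statement, e.g.:
//   UPDATE auth_codes SET claimed = TRUE
//   WHERE code = $1 AND claimed = FALSE AND expires_at > now()
//   RETURNING user_id, code_challenge;
// Zero rows returned means the code was already spent, expired, or never issued.

interface AuthCode {
  userId: string;
  codeChallenge: string;
  expiresAt: number; // epoch ms; codes live 10 minutes
  claimed: boolean;
}

const codes = new Map<string, AuthCode>();

// In single-threaded Node, a check-and-set with no intervening await is
// race-free in process; the SQL above extends that guarantee to the database.
function claimCode(code: string): AuthCode | null {
  const rec = codes.get(code);
  if (!rec || rec.claimed || rec.expiresAt <= Date.now()) return null;
  rec.claimed = true;
  return rec;
}

codes.set("abc123", {
  userId: "u1",
  codeChallenge: "xyz",
  expiresAt: Date.now() + 10 * 60 * 1000,
  claimed: false,
});
console.log(claimCode("abc123") !== null); // true — first claim succeeds
console.log(claimCode("abc123")); // null — second claim is rejected
```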

API Key — A side path for using the same code from CLI/IDE tools

ChatGPT Apps reaches us through OAuth, but we wanted the same tool definitions to work from local MCP clients like Cursor, Claude Code, Gemini CLI, and Smithery — so we layered an API Key path on top. Locally, users can hold their own keys, and an x-api-key header is the cleanest fit. We issue per-user keys with the tm_mcp_ prefix and track active/inactive state and last-used time per key.
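The dual-auth dispatch can be sketched as a single header check in front of the shared tool handlers — a simplified illustration, with token and key validation elided:

```typescript
type AuthResult =
  | { mode: "oauth"; token: string }
  | { mode: "apiKey"; key: string }
  | { mode: "anonymous" };

// Resolve the auth mode from request headers. Hosted clients (ChatGPT Apps,
// Claude Connectors) arrive with an OAuth Bearer token; local clients
// (Cursor, Claude Code, Gemini CLI) send a per-user key in x-api-key.
// The tm_mcp_ prefix matches what the post describes; actual validation
// against the token/key store is elided.
function resolveAuth(headers: Record<string, string | undefined>): AuthResult {
  const bearer = headers["authorization"];
  if (bearer?.startsWith("Bearer ")) {
    return { mode: "oauth", token: bearer.slice("Bearer ".length) };
  }
  const apiKey = headers["x-api-key"];
  if (apiKey?.startsWith("tm_mcp_")) {
    return { mode: "apiKey", key: apiKey };
  }
  return { mode: "anonymous" };
}

console.log(resolveAuth({ "x-api-key": "tm_mcp_abc" }).mode); // "apiKey"
console.log(resolveAuth({ authorization: "Bearer tok" }).mode); // "oauth"
```

Everything past this branch — the tool list, the handlers — is identical for both modes, which is what lets one build serve both directories and the local clients.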

Eight tools we exposed, and their permission flags

3Min API exposes 8 tools to ChatGPT/Claude. Each tool must declare three MCP-standard annotations. First, the meanings.

| annotation | meaning | `true` example |
|---|---|---|
| `readOnlyHint` | Read-only (makes no data changes) | Reading logs/stats |
| `destructiveHint` | Can cause irreversible changes | Editing/deleting an endpoint |
| `openWorldHint` | Talks to the outside world or causes side effects | Sending an external webhook |

Applying these three values to our 8 tools:

| tool | summary | readOnly | destructive | openWorld |
|---|---|---|---|---|
| help | Service guide lookup | true | false | false |
| endpoints | Endpoint CRUD/deploy | false | true | false |
| api_call | Endpoint invocation | false | true | true |
| logs | Call logs and search | true | false | false |
| stats | Usage statistics | true | false | false |
| collaborators | Collaboration keys/invites | false | true | true |
| subscription | Subscription info | true | false | false |
| archives | Archive download | true | false | true |
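In the tool definitions themselves, the three hints go into the `annotations` object of each MCP tool (field names are from the MCP spec). A shortened sketch with two of the eight tools, handlers and input schemas elided:

```typescript
// MCP tool definitions carrying the three annotation hints from the table
// above. Annotation field names are per the MCP specification; only two of
// the eight tools are shown, and handlers/input schemas are elided.
const tools = [
  {
    name: "logs",
    description: "Call logs and search",
    annotations: { readOnlyHint: true, destructiveHint: false, openWorldHint: false },
  },
  {
    name: "api_call",
    description: "Endpoint invocation",
    // Triggers external traffic via user-defined webhooks, so it both
    // mutates state and reaches outside the system.
    annotations: { readOnlyHint: false, destructiveHint: true, openWorldHint: true },
  },
];

console.log(tools.map((t) => `${t.name}: openWorld=${t.annotations.openWorldHint}`));
```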

Areas we deliberately didn't expose

Actions like deleting the account itself, changing payment methods, or issuing/revoking MCP auth keys — anything where one wrong consent can't be undone or that ties straight into the auth system itself — we didn't expose as tools. Leaving those to LLM judgment turns one bad call straight into a security incident, and it doesn't match what the ChatGPT Apps review guidelines recommend either.

Testing — You'll want to run both auth paths

Before submitting, we ran both paths.

  • OAuth path — On ChatGPT web, we turned on developer mode at Settings → Apps & Connectors → Advanced settings, then registered our MCP server URL via 'Create app'. ChatGPT automatically reads our server's /.well-known/... to pick up the OAuth metadata, so you can walk through the consent screen and get to actual tool calls yourself.
  • API Key path — We added the MCP server to Claude Code, Cursor, and Gemini CLI, and verified tool calls there. We also registered with Smithery to confirm directory exposure.
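For the API Key path, the client-side config looks roughly like the entry below — this follows Claude Code's `.mcp.json` shape at the time of writing, and Cursor and Gemini CLI use similar but not identical schemas, so check each client's docs; the URL and key here are placeholders:

```json
{
  "mcpServers": {
    "3min-api": {
      "type": "http",
      "url": "https://example.com/mcp",
      "headers": { "x-api-key": "tm_mcp_your_key_here" }
    }
  }
}
```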

The code is the same on both paths, but OAuth has to clear well-known metadata, the consent screen, and token refresh on top of everything else, so it's a step trickier. Before submitting, I'd recommend at least running the step where the actual ChatGPT instance reaches your server and pulls down the tool list yourself.

Submitting for review — Things we learned after one rejection

The submission itself is doable while reading OpenAI's official guide (app-submission-guidelines) and asking ChatGPT questions along the way. But while you fill out the form, ChatGPT actually connects to your server to pull the tool list, so your production server must be live at submission time. You also need to supply test account credentials (email/password or a separate review account) so OpenAI's reviewers can call the tools themselves.

Caveat 1 — Mark tool permissions conservatively

This is the most common reason for rejection. We got rejected on our first submission because we left openWorldHint as false on three tools that talk to the outside (api_call, collaborators, archives). After fixing that, the resubmission went through.

Of those three, archives issues an external download URL, so openWorldHint=true is obvious. The other two are less direct: api_call triggers external traffic via user-defined webhooks, and collaborators sends invitation emails to collaborators. They reach outside, but indirectly enough that you might hesitate. As it turns out, marking both as true conservatively was the right call.

The reasoning is simple. Once rejected, you wait roughly two more weeks for the next review. If a tool has any flow that touches the outside, don't hesitate — mark it true. Time-wise, that's the cheapest path. List the side effects each tool produces, check whether any of them reach beyond your system, and the flags fall into place naturally.

Caveat 2 — Screenshots are "template work," not "screen captures"

The submission form has a screenshot field showing how your app appears inside ChatGPT. Capturing on your own and uploading it gets rejected, no exceptions. OpenAI provides a clear guide for this step, and the link is right there in the submission UI. Here's how it goes.

  1. At the screenshot step in the submission form, the guide link sends you to OpenAI's official Figma file.
  2. You copy the screenshot template from that Figma file into your own Figma workspace.
  3. Then you paste your app's actual ChatGPT screen into the template's designated slot. The aspect ratio, padding, and composition are already set in the template — you only fill in the content.
  4. Finally, you export the finished board as an image and upload it on the submission form.

So what gets submitted isn't a raw capture — it's your app's screen placed inside the frame OpenAI defined. Check the guide once more right before submission and follow it as-is. Skip this step and submit your own captures, and the submission gets rejected on the spot.

Review notes — Even after approval, one more step

From submission to result took about two weeks or more. If you get rejected and resubmit after fixing, expect a similar wait again. You can check status from the OpenAI developer dashboard, and you get an email when it's rejected or approved.

Approval doesn't mean it's live. You also have to hit the Publish button on the dashboard for it to start showing up to regular users.

One more thing right after launch. When users add your app, ChatGPT doesn't automatically reach for your tools in every conversation. Even on relevant topics, you'll see plenty of replies that don't invoke any tool. In those cases, users need to call your app explicitly with an @ mention. Pass that note along to your users and tool usage goes up significantly.

Claude Connectors — Where things stand

Claude Connectors runs on the same MCP. Almost no extra code work was needed. The review process, though, is less polished than OpenAI's. There's no dashboard for tracking submission status, and we're approaching a month since we submitted without an approval or rejection notice. Submitting to both at once is worth recommending, but plan timelines around the OpenAI side to be safe.

Try it yourself

If you'd like to try 3Min API's MCP tools inside ChatGPT, or want to see how we shaped it, just search for "3Min API" on the ChatGPT Apps add screen to install it. After signing in once, mention @3Min API in your conversations to call the tools.

If you'd like to try it from a local MCP client (Claude Code, Cursor, etc.), sign in and head to Settings → AI Integration to issue an API key, then start right away. Per-tool details and call examples are collected in the AI Integration Guide.

I hope this serves as a small shortcut for others walking the same path.