AI Tooling
Install OpenClaw on Mac and complete Codex OAuth setup
A practical setup walkthrough for developers who want a working OpenClaw + Codex workflow on Mac, with a clear verification checklist.
OpenClaw is useful on macOS when your goal is repeatable agent workflows, not one-off prompts. It gives you a runtime layer that keeps model access, authentication, gateway state, and workspace behavior in one place.
This guide is aligned with the currently published docs (checked on 2026-03-27). The focus is on the decisions that usually create setup friction: choosing the right provider lane, picking onboarding vs direct login, validating real model availability, and handling the OAuth/embeddings edge cases early.
1. Pick the right provider lane first
Before running commands, decide how you want to authenticate. Based on the OpenAI provider doc:
- openai-codex/* is for ChatGPT/Codex subscription OAuth.
- openai/* is for OpenAI API-key-based access.
If your plan is account-based sign-in, start with openai-codex. If your plan is API usage through OPENAI_API_KEY, use openai.
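The lane decision can be encoded as a small preflight check. A minimal sketch, assuming only that the openai lane reads OPENAI_API_KEY from the environment; the check_lane function and its heuristics are illustrative, not part of the OpenClaw CLI:

```shell
# Sanity-check the lane you picked before authenticating.
check_lane() {
  case "$1" in
    openai-codex)
      # Account lane: auth happens via browser OAuth, no key needed up front.
      echo "account lane: authenticate via openclaw onboard or models auth login"
      ;;
    openai)
      # API lane: the key must already be in the environment.
      if [ -n "${OPENAI_API_KEY:-}" ]; then
        echo "API lane: OPENAI_API_KEY is set"
      else
        echo "API lane: OPENAI_API_KEY is missing" >&2
        return 1
      fi
      ;;
    *)
      echo "unknown lane: $1" >&2
      return 1
      ;;
  esac
}
```

Running check_lane openai on a machine with no key fails loudly now, rather than after the first model call.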
2. Run the shortest working install path on Mac
The Install doc supports this baseline flow:
curl -fsSL https://openclaw.ai/install.sh | bash
openclaw --version
openclaw doctor
openclaw gateway status
This tells you quickly whether the CLI is reachable, the local health checks pass, and the gateway service is alive.
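Those three checks can be scripted as a single gate. A sketch, assuming each command exits non-zero on failure; the first argument exists only so a stub binary can stand in for openclaw during testing:

```shell
# Run the three post-install checks in order and stop at the first failure.
verify_install() {
  bin="${1:-openclaw}"
  "$bin" --version >/dev/null 2>&1 || { echo "CLI not reachable"; return 1; }
  "$bin" doctor                    || { echo "health checks failed"; return 1; }
  "$bin" gateway status            || { echo "gateway not running"; return 1; }
  echo "baseline OK"
}
```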
Two preflight notes that matter in practice:
- Current recommendation is Node 24, with Node 22.14+ as minimum support.
- If you want to install first and onboard later, use --no-onboard:

curl -fsSL https://openclaw.ai/install.sh | bash -s -- --no-onboard
3. For Codex auth, use onboarding first, direct login later
The provider docs support both commands:
# Better for first-time setup
openclaw onboard --auth-choice openai-codex
# Better when OpenClaw is already installed
openclaw models auth login --provider openai-codex
For a fresh install, onboarding is safer because model selection and auth happen in one guided path. For existing machines, direct login is the faster maintenance path.
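The two paths can live behind one small wrapper. A sketch: the mode argument and its default are illustrative conventions, while the two openclaw invocations are the ones from the provider docs:

```shell
# codex_auth onboard -> guided first-time setup
# codex_auth login   -> direct re-auth on an existing install
codex_auth() {
  mode="${1:-onboard}"
  case "$mode" in
    onboard) openclaw onboard --auth-choice openai-codex ;;
    login)   openclaw models auth login --provider openai-codex ;;
    *)       echo "usage: codex_auth [onboard|login]" >&2; return 1 ;;
  esac
}
```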
4. OAuth callback failure is often recoverable
Per the OAuth doc, Codex auth uses PKCE and typically returns to a local callback (commonly http://127.0.0.1:1455/auth/callback).
If that callback page fails locally, it does not always mean auth failed. The docs explicitly allow continuing by pasting the final redirect URL/code back into the CLI when callback binding is blocked.
That distinction saves time during debugging: many failures are local callback handling issues, not broken OpenAI authorization.
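One quick local check before retrying: see whether something else is already listening on the callback port. A sketch using lsof (present on macOS by default); the port number comes from the example callback URL above, and the helper name is illustrative:

```shell
# Return success if some process is already listening on the given TCP port.
port_in_use() {
  port="${1:-1455}"
  lsof -nP -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1
}

if port_in_use 1455; then
  echo "port 1455 is busy: the local callback will likely fail; use the paste-the-redirect-URL fallback"
fi
```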
5. Validate model readiness before touching advanced config
Use the Models doc workflow:
openclaw models status
openclaw models list --provider openai-codex
openclaw models status --check
What to confirm:
- models status resolves a primary model and shows healthy auth state.
- models list --provider openai-codex returns models your account can actually use.
- models status --check gives machine-friendly pass/fail signaling for CI or scripts.
Also, provider docs note that openai/* and openai-codex/* default to auto transport (WebSocket first, then SSE fallback). In most first-time installs, there is no need to override transport early.
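The --check form is the one to wire into automation. A sketch that gates on its exit code, assuming (per the Models doc) that zero means pass; the first argument exists only so a stub can replace the real CLI in tests:

```shell
# Gate a CI step on model readiness via the exit code of `models status --check`.
models_ready() {
  bin="${1:-openclaw}"
  if "$bin" models status --check; then
    echo "models ready"
  else
    echo "models not ready" >&2
    return 1
  fi
}
```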
6. Commonly missed: Codex OAuth does not include embeddings access
The FAQ states this clearly: Codex OAuth covers chat/completions, not OpenAI embeddings.
So:
- If you only need Codex reasoning, OAuth may be enough.
- If you need semantic memory search with OpenAI embeddings, you still need OPENAI_API_KEY (or an equivalent provider key config).
This is a frequent source of confusion when core model calls work but retrieval/memory flows still fail.
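A preflight guard makes that failure explicit instead of letting retrieval break later. A minimal sketch; the function name is illustrative:

```shell
# Fail fast when memory search is wanted but no embeddings key is configured.
# Codex OAuth alone does not cover OpenAI embeddings calls.
require_embeddings_key() {
  if [ -z "${OPENAI_API_KEY:-}" ]; then
    echo "OPENAI_API_KEY not set: embeddings/memory search will fail" >&2
    return 1
  fi
  echo "embeddings key present"
}
```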
7. A practical setup order you can reuse
- Install with the official script.
- Run openclaw --version, openclaw doctor, and openclaw gateway status.
- Complete first-time auth via openclaw onboard --auth-choice openai-codex.
- Verify with openclaw models status and openclaw models list --provider openai-codex.
- Add openclaw models status --check for automation monitoring.
- Add an embeddings provider/API key only when your workflow actually needs memory search.
Final note
Most setup pain comes from doing steps out of order. If you verify install, auth, and model visibility first, you can postpone advanced tuning and isolate failures much faster.