TL;DR:
OpenSpec is the most widely adopted open-source Spec Coding framework. Through its /opsx:propose → /opsx:apply → /opsx:archive three-step workflow, you can have AI produce production-grade code under structured spec constraints. This guide walks you through the entire flow from installation to archival with a complete hands-on example.
## Introduction
After understanding the theoretical foundation of Spec Coding, you might wonder: How does this actually work in a real IDE? What tools should I use to manage specification files?
The answer is OpenSpec — an open-source Spec Coding framework by Fission-AI (MIT license). It supports 20+ mainstream AI coding assistants and provides a complete artifact-driven development workflow.
## What is OpenSpec?
OpenSpec is a lightweight spec layer that establishes an "agree before you build" contract between you and AI. Its core philosophy:
→ fluid not rigid
→ iterative not waterfall
→ easy not complex
→ built for brownfield not just greenfield
→ scalable from personal projects to enterprises
Every change generates a set of standardized artifacts:
| Artifact | Purpose |
|---|---|
| `proposal.md` | Why we're doing this, what's changing |
| `specs/` | Requirements and acceptance scenarios |
| `design.md` | Technical approach |
| `tasks.md` | Implementation checklist |
## 5-Minute Setup
Prerequisites: Node.js 20.19.0 or higher.
```bash
npm install -g @fission-ai/openspec@latest
cd your-project
openspec init
```
`openspec init` creates an `openspec/` directory and AI guidance files at your project root. Once initialized, you can start using slash commands in your AI coding assistant immediately.
Also works with pnpm, yarn, bun, and nix. See the installation docs.
## The Practical Trio: OpenSpec + CLAUDE.md + .cursorrules
In the 2026 AI programming ecosystem, these three form the complete Spec Coding toolchain:
- OpenSpec: The artifact-driven development framework that manages the entire lifecycle of changes via slash commands.
- CLAUDE.md: A project memory specification popularized by Anthropic. It acts as a "Project Constitution," recording core architecture, tech stack choices, and established conventions.
- .cursorrules: IDE-level instructions that enforce code style and constraints (e.g., "Use TypeScript strict mode," "No inline styles").
The division of labor is clear: OpenSpec manages the workflow, CLAUDE.md manages memory, .cursorrules manages style.
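To make the division of labor concrete, here is a minimal sketch of what a CLAUDE.md might contain for the recipe project used in the walkthrough below. All of its contents are illustrative assumptions, not output from any tool:

```markdown
# CLAUDE.md

## Architecture
- Next.js frontend with a Prisma-backed API layer (assumed stack for this example)

## Conventions
- TypeScript strict mode everywhere
- Every feature goes through the OpenSpec workflow (propose → apply → archive)

## Established Decisions
- Full-text search reuses existing Prisma queries; no dedicated search engine
```

.cursorrules then stays limited to style-level rules (e.g. "No inline styles"), so the two files never overlap in responsibility.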
## Complete Walkthrough: Building "Recipe Search" with OpenSpec
Let's walk through OpenSpec's complete three-step workflow.
### Step 1: `/opsx:propose` — Propose the Change
Don't just tell the AI "Write a search feature." Use OpenSpec's propose command:
```
/opsx:propose add-recipe-search
```
The AI automatically creates an openspec/changes/add-recipe-search/ directory with four artifacts:
```
openspec/changes/add-recipe-search/
├── proposal.md     ← Why we're building this feature
├── specs/          ← Requirements and acceptance scenarios
│   └── search-recipe.md
├── design.md       ← Technical approach
└── tasks.md        ← Implementation checklist
```
The `specs/search-recipe.md` file will contain structured acceptance criteria:
```markdown
## Context

Users need to search existing recipes by keyword and filter them by "Cooking Time."

## Acceptance Criteria (Scenarios)

#### Scenario 1: Basic Keyword Search
- **WHEN** the user enters "Egg"
- **THEN** the system should return a list of all recipes with "Egg" in the title

#### Scenario 2: Empty State Handling
- **WHEN** the search results are empty
- **THEN** the system should display "No recipes found" and recommend 3 popular recipes

## Constraints

- Search API response time must be < 200ms
- Search results must support pagination
```
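To make these acceptance criteria concrete, here is a minimal in-memory TypeScript sketch that satisfies both scenarios plus the pagination constraint. `Recipe`, `POPULAR_RECIPES`, and `searchRecipes` are hypothetical names; a real implementation would sit behind `/api/recipes/search` and query the database instead of filtering an array:

```typescript
interface Recipe {
  title: string;
  cookingTimeMinutes: number;
}

interface SearchResult {
  recipes: Recipe[];
  message?: string;       // Scenario 2: empty-state message
  suggestions?: Recipe[]; // Scenario 2: recommended popular recipes
}

// Illustrative stand-in for a "popular recipes" query.
const POPULAR_RECIPES: Recipe[] = [
  { title: "Fried Rice", cookingTimeMinutes: 15 },
  { title: "Tomato Soup", cookingTimeMinutes: 25 },
  { title: "Pancakes", cookingTimeMinutes: 20 },
];

function searchRecipes(
  recipes: Recipe[],
  keyword: string,
  page = 1,
  pageSize = 10,
): SearchResult {
  // Scenario 1: case-insensitive keyword match on the title.
  const matches = recipes.filter((r) =>
    r.title.toLowerCase().includes(keyword.toLowerCase()),
  );

  // Scenario 2: empty result set -> message plus 3 popular recipes.
  if (matches.length === 0) {
    return {
      recipes: [],
      message: "No recipes found",
      suggestions: POPULAR_RECIPES.slice(0, 3),
    };
  }

  // Constraint: results support pagination via page/pageSize.
  const start = (page - 1) * pageSize;
  return { recipes: matches.slice(start, start + pageSize) };
}
```

Because each branch maps one-to-one onto a WHEN/THEN scenario, verifying the code against the spec becomes a line-by-line comparison rather than guesswork.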
The `tasks.md` file contains the generated implementation checklist:
```markdown
## Implementation Tasks

- [ ] 1.1 Create Search API Endpoint (`/api/recipes/search`)
- [ ] 1.2 Implement full-text search logic (using existing Prisma queries)
- [ ] 2.1 Implement Search UI components (SearchBar + ResultList)
- [ ] 2.2 Add "Cooking Time" filter
- [ ] 3.1 Write API-layer unit tests
- [ ] 3.2 Write component-level E2E tests
```
Your job at this point is: review these artifacts and ensure the AI understood your intent. You can modify any artifact at any time — OpenSpec's philosophy is "fluid not rigid," with no rigid phase gates.
### Step 2: `/opsx:apply` — Execute Tasks
Once artifacts are reviewed, run the implementation command:
```
/opsx:apply
```
The AI executes the tasks from `tasks.md` one by one. The output looks like:
```
Implementing tasks...
✓ 1.1 Create Search API Endpoint
✓ 1.2 Implement full-text search logic
✓ 2.1 Implement Search UI components
✓ 2.2 Add Cooking Time filter
✓ 3.1 Write API-layer unit tests
✓ 3.2 Write E2E tests
All tasks complete!
```
Key advantage: the AI reads only one task and its relevant spec context at a time. This small-step approach drastically reduces the probability of hallucination. Compared with throwing a vague requirement at the AI all at once, task-granular execution pushes the share of usable output above 95%.
### Step 3: `/opsx:archive` — Archive the Change
After all tasks pass tests:
```
/opsx:archive
```
OpenSpec archives the change to `openspec/changes/archive/` with a timestamp:

```
openspec/changes/archive/2026-04-01-add-recipe-search/
```
This gives you a complete change history. Even three months later, you can quickly understand "why we did this" by reading the archived `proposal.md`.
## Extended Workflow
Beyond the core three commands, OpenSpec provides an extended workflow (selected via `openspec config profile` and applied with `openspec update`):
| Command | Purpose |
|---|---|
| `/opsx:new` | Create a new empty change |
| `/opsx:continue` | Resume an unfinished change |
| `/opsx:ff` | Fast-forward (skip completed tasks) |
| `/opsx:verify` | Verify spec alignment |
| `/opsx:sync` | Synchronize artifact state |
| `/opsx:bulk-archive` | Bulk archive changes |
| `/opsx:onboard` | Onboard new team members |
## OpenSpec vs. Alternatives
| Dimension | OpenSpec | Spec Kit (GitHub) | Kiro (AWS) |
|---|---|---|---|
| License | MIT | MIT | Proprietary |
| Installation | npm global install | Python setup | Dedicated IDE |
| Tool Support | 20+ AI assistants | Limited | Claude models only |
| Workflow | Fluid, no phase gates | Rigid phase gates | IDE-built-in |
| Project Type | New and existing | Mainly new projects | New projects |
| Learning Curve | Very low (3 commands) | Higher | Medium |
## How to Write High-Quality Specs (The Golden Rules)
A good specification should have three key elements:
- Use WHEN/THEN Syntax: This Behavior-Driven Development (BDD) syntax is extremely AI-friendly because it clarifies inputs and expected outputs.
- Define Boundary Conditions: Tell the AI what to do if a user enters invalid characters, the network drops, or the data is empty.
- Define the "Non-Goals": Explicitly forbid behaviors in the spec (e.g., "Do not modify the database schema," "Do not use third-party libraries").
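Applied together, the three rules produce a spec fragment like the following. The feature and every detail in it are invented purely for illustration:

```markdown
## Requirement: Export Recipe as PDF

#### Scenario: Successful export
- **WHEN** the user clicks "Export" on a recipe
- **THEN** the system generates a PDF containing the title and ingredient list

#### Scenario: Empty ingredient list (boundary condition)
- **WHEN** the recipe has no ingredients
- **THEN** the export is blocked and a validation message is shown

## Non-Goals
- Do not modify the database schema
- Do not add a new third-party PDF library
```

Note how each scenario pins down an input and an observable output, and how the Non-Goals section fences off changes the AI must not make.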
## Common Pitfalls: Avoiding Spec Coding Mistakes
- Over-specifying: OpenSpec's philosophy is "fluid not rigid." If you dictate every line of code, you lose the AI's flexibility. Specs should focus on "outcomes," not "low-level instructions."
- Ignoring the Single Source of Truth (SSOT): If you change requirements in chat but don't update the files in `specs/`, the AI will quickly become confused. Remember: if requirements change, update the spec first.
- Lack of Task Verification: For every completed task, require the AI to run the corresponding tests rather than just trusting its "I'm done" message.
- Skipping Archive: `/opsx:archive` is not optional. Archiving means knowledge persistence: three months later, you or your teammates can quickly understand the change history through the archived artifacts.
## Model Recommendations and Context Hygiene
OpenSpec works best with high-reasoning models. Official recommendations:
- Planning phase: Opus 4.5, GPT 5.2
- Implementation phase: Opus 4.5, GPT 5.2
Context hygiene: before starting `/opsx:apply`, clear the AI's context window to ensure it focuses on the current change's artifacts rather than stale conversation history.
## Summary
Spec Coding is the dividing line between "toy projects" and "production-grade software." Through OpenSpec's three-step workflow (propose → apply → archive), you can turn AI into a true senior development partner rather than an error-prone intern.
Ready to go further? Learn how to build an automated runtime environment for your AI Agent with Harness Engineering.
Related Reading: