# Builder
You have a problem. You know what you want a team of agents to do. You don’t want to hand-wire a workflow DAG, write five role prompts, figure out gate commands, and structure the handoff channels yourself.
So don’t.
```sh
cliq builder generate -m "Due diligence team that reads a data room from Google Drive, produces financial analysis, legal risk assessment, and a consolidated report, with a quality gate before delivery"
```

That’s it. One sentence. The builder reads your intent, designs a workflow DAG, writes every role prompt, adds quality gates with real shell commands, wires up pull/push for external data, generates A2A metadata for discovery, validates the whole thing, and installs it — ready to run.
```
✓ Team 'due-diligence-pipeline' created in @local
  Path: ~/.cliqrc/teams/@local/due-diligence-pipeline
  Phases: 5  Roles: 4

  Workflow:
    financial-analyst [standard] (root)
    legal-analyst [standard] (root)
    report-writer [standard] → depends on financial-analyst, legal-analyst
    quality-gate [gate] → depends on report-writer
    fixer [support]

Run 'cliq assemble @local/due-diligence-pipeline' to deploy this team to a project.
```

From there, it’s the same three commands you already know:
```sh
cliq assemble @local/due-diligence-pipeline
cliq req -m "Evaluate the Series B data room for Acme Corp"
cliq run
```

## Configuration

The builder needs an LLM provider to generate teams. Add a `builder` section to `~/.cliqrc/settings.json`:
```json
{
  "builder": {
    "provider": "google",
    "api_key": "AIza...",
    "model": "gemini-2.5-pro"
  }
}
```

Three providers are supported:
| Provider | `provider` value | Default model | Best for |
|---|---|---|---|
| Google | `"google"` | gemini-2.0-flash | Fast generation, good value |
| OpenAI | `"openai"` | gpt-4o | Strong reasoning, reliable structure |
| Anthropic | `"anthropic"` | claude-sonnet-4-20250514 | Nuanced role prompts, detailed analysis |
You can override the provider or model on any command with `--provider` and `--model`:
```sh
cliq builder generate -m "..." --provider anthropic --model claude-sonnet-4-20250514
```
## Generating a Team

The `generate` command is the builder’s main act. Give it a description — as brief or as detailed as you like — and it produces a complete, validated team.
### From a sentence
```sh
cliq builder generate -m "TDD pipeline for a Node.js Express API with linting, type checking, and security audit"
```

The builder infers the right phases, dependencies, gate commands, and role prompts. It knows that a TDD pipeline needs an architect, a developer, a test gate with `npm test`, and probably a fixer support phase for when tests fail.
### From a file
For complex teams, write a detailed spec and pass it with `-f`:
```sh
cliq builder generate -f team-spec.txt
```
### From stdin

Pipe from scripts or heredocs with `--stdin`:
```sh
cat <<'EOF' | cliq builder generate --stdin
Content production pipeline for a technical blog.

Phases:
1. A researcher who gathers background material on the topic
2. A writer who produces a draft blog post (1500-2000 words)
3. A technical reviewer who checks accuracy and code examples
4. An editor who polishes prose and ensures consistent tone
5. A quality gate that verifies the final post meets our style guide

The researcher should read source material from a Google Drive folder.
The final approved post should be pushed back to Google Drive.
EOF
```
### Dry run

Not sure what you’ll get? Preview without writing anything:
```sh
cliq builder generate -m "Security audit team" --dry-run
```

This shows the full file list, workflow DAG, and role names — but writes nothing to disk.
### Naming
The builder picks a name from your description. Override it:
```sh
cliq builder generate -m "..." --name my-custom-team
```

If a team with that name already exists, use `--force` to overwrite:
```sh
cliq builder generate -m "..." --name my-team --force
```
## What Gets Generated

A single `generate` call produces an entire team directory:
```
~/.cliqrc/teams/@local/your-team/
├── team.yml          # Workflow DAG, phase definitions, A2A metadata
├── roles/
│   ├── architect.md
│   ├── developer.md
│   ├── reviewer.md   # Gate role with evaluation criteria
│   └── fixer.md      # Support phase role (activated by gate routing)
└── README.md         # Auto-generated overview
```
### team.yml

The workflow definition includes correctly ordered phases, dependency edges, gates with real shell commands, and (when your description mentions external data) `pull` and `push` declarations:
```yaml
name: "@local/tdd-express-api"
description: TDD pipeline for a Node.js Express API with linting, type checking, and security audit.
tags: [code, tdd, nodejs, express, api]

use_when:
  - Building a new Express API endpoint or service
  - Adding features that require comprehensive test coverage
not_for:
  - Frontend-only changes with no backend component
  - Infrastructure or deployment tasks

workflow:
  phases:
    - name: architect
      type: standard
    - name: developer
      type: standard
      depends_on: [architect]
    - name: quality-gate
      type: gate
      depends_on: [developer]
      commands:
        - name: lint
          run: npm run lint
        - name: typecheck
          run: npx tsc --noEmit
        - name: tests
          run: npm test
        - name: security
          run: npm audit --audit-level=moderate
      max_iterations: 3
  support:
    - name: fixer
      type: standard
```
### Role prompts

Each role gets a detailed prompt with identity, objective, context, deliverables, constraints, and handoff instructions. Gate roles describe evaluation criteria in plain language — the orchestrator automatically injects the verdict protocol at runtime:
```md
# Role: Quality Gate

You are a senior QA engineer responsible for verifying that the implementation
meets all quality standards before release.

## Context

Read the implementation from the developer's outgoing channel and review it
against the architectural design.

## Gate Evaluation

Run the automated commands and review code quality:

- If ALL commands pass and the code follows the design: approve and proceed
- If tests fail or linting issues are fixable: route to fixer for remediation
- If the implementation has fundamental design flaws: escalate to a human
```
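The three outcomes above hint at the routing logic a gate drives. As an illustration only (the `Verdict` enum, the `route` function, and its signature below are hypothetical names, not cliq's actual injected protocol), a verdict router might look like:

```python
from enum import Enum


class Verdict(Enum):
    PASS = "pass"          # all commands succeeded, design followed
    FIXABLE = "fixable"    # failures a support phase can remediate
    ESCALATE = "escalate"  # fundamental flaws: needs a human


def route(verdict: Verdict, iteration: int, max_iterations: int = 3) -> str:
    """Decide the next phase after a gate evaluation.

    Hypothetical helper for illustration; cliq's orchestrator injects
    its own verdict protocol at runtime.
    """
    if verdict is Verdict.PASS:
        return "proceed"
    if verdict is Verdict.FIXABLE and iteration < max_iterations:
        return "fixer"          # the support phase named in the workflow
    return "escalate-to-human"  # out of iterations, or a fundamental flaw
```

The iteration guard mirrors the `max_iterations: 3` setting a generated gate phase carries, so a broken pipeline cannot loop through the fixer forever.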
### A2A metadata

Every generated team includes top-level A2A metadata fields, making it discoverable by other agents in A2A meshes:
```yaml
tags: [code, tdd, nodejs, express, api]

use_when:
  - Building a new Express API endpoint or service
  - Adding features that require comprehensive test coverage
not_for:
  - Frontend-only changes with no backend component
  - Infrastructure or deployment tasks
```
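To picture how other agents might consume these fields, here is a hypothetical discovery sketch; `score_team` and its tag-overlap scoring are illustrative, not part of cliq or any A2A specification:

```python
def score_team(task_tags: set[str], team: dict) -> int:
    """Score a team's relevance to a task by counting shared tags.

    `team` carries the tags field shown above; the scoring scheme
    itself is hypothetical.
    """
    return len(task_tags & set(team.get("tags", [])))


# Two candidate teams with the metadata shapes used in this page.
teams = [
    {"name": "tdd-express-api", "tags": ["code", "tdd", "nodejs", "express", "api"]},
    {"name": "due-diligence-pipeline", "tags": ["finance", "legal", "analysis"]},
]

# A task about a Node.js API matches the first team best.
best = max(teams, key=lambda t: score_team({"nodejs", "api"}, t))
```

A real A2A consumer would also weigh `use_when` and `not_for`, which exist precisely so an agent can rule teams in or out before dispatching work.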
## Refining a Team

Generated teams are good. With a few rounds of refinement, they’re great.
### Improve a role
The `improve` command takes an existing role and makes it sharper — more specific instructions, better-defined deliverables, concrete examples:
```sh
cliq builder improve my-team --role architect
```

Without an instruction, the LLM reviews the role in the context of the full team and improves it using its own judgement. With an instruction, you direct the improvement:
```sh
cliq builder improve my-team --role developer --instruction "add error handling patterns and logging conventions"
cliq builder improve my-team --role reviewer --instruction "add OWASP top 10 security checklist"
```

The output shows what changed:
```
ℹ Improving role 'developer' — "add error handling patterns"...
ℹ Changes: Added structured error handling section with try/catch patterns,
  custom error classes, and logging conventions using Winston.

  Original: 45 lines, 2340 chars
  Improved: 72 lines, 3891 chars

✓ Role 'developer' updated in ~/.cliqrc/teams/@local/my-team/roles/developer.md
```

Use `--dry-run` to preview changes before writing.
Pipe longer instructions from stdin:
```sh
echo "Focus on making the deliverables section extremely specific, with exact file paths and content structure for each output file" | cliq builder improve my-team --role writer --instruction -
```
### Analyse gaps

The `gaps` command is your team’s peer review. It examines the workflow structure, role prompts, and overall coverage, then suggests concrete improvements:
```sh
cliq builder gaps my-team
```

```
ℹ Analysing team 'my-team'...
ℹ 3 suggestion(s):

1. [missing gate] Add integration test gate
   The pipeline runs unit tests but has no integration test phase. Tests may
   pass individually but fail when components interact.
   → suggested phase: integration-gate [gate]

2. [workflow improvement] Parallelize independent analysts
   The financial-analyst and legal-analyst phases have no dependency on each
   other but are currently sequential. Running them in parallel would halve
   the pipeline time.

3. [metadata enhancement] Add output definitions
   The team is missing output definitions. Adding them enables downstream A2A
   agents to understand what this team produces.
```

Each suggestion includes a type, an explanation, and, when applicable, a fully defined phase and role that you can add manually.
### Regenerate A2A metadata
If you’ve modified a team’s workflow or roles, regenerate the A2A metadata fields (`tags`, `use_when`, `not_for`) to keep them accurate:
```sh
cliq builder capability my-team
```

```
A2A metadata:
  tags: finance, legal, analysis, due-diligence
  use_when:
    - Evaluating a company for investment or acquisition
    - Performing regulatory compliance review
  not_for:
    - Real-time market trading decisions
    - Personal tax preparation
✓ A2A metadata updated in ~/.cliqrc/teams/@local/my-team/team.yml
```
## Tutorial: Build a Research Team from Scratch

Let’s walk through the full builder workflow — from idea to running pipeline.
### Step 1: Generate
You want a team that reads a Google Doc, researches the topic, writes a comprehensive analysis, and pushes the result to Google Drive.
```sh
cliq builder generate -m "Research team that pulls a question from a Google Doc, researches the topic using the document's context, writes a detailed analysis with citations, and pushes the final report to a Google Drive folder. Include a quality gate to verify completeness."
```

The builder produces a team with phases like `researcher`, `writer`, `quality-gate`, and a `fixer` support phase. The first phase has a `pull` declaration for the Google Doc, and the gate phase has a `push` with `on: pass` for Google Drive.
### Step 2: Review
Look at what was generated:
```sh
cliq team show @local/research-team
```

Check the workflow:
```sh
cliq team show @local/research-team --workflow
```

Read a specific role (team packages store one file per role under `roles/`):
```sh
cat ~/.cliqrc/teams/@local/research-team/roles/researcher.md
```
### Step 3: Refine

Maybe the writer role is too generic. Sharpen it:
```sh
cliq builder improve research-team --role writer --instruction "emphasize structured arguments with evidence, include a methodology section, and require inline citations in [Author, Year] format"
```

Run a gap analysis:
```sh
cliq builder gaps research-team
```

If it suggests adding a fact-checker phase, you can add it manually with `cliq team add-phase` or regenerate with a more detailed description.
### Step 4: Deploy and run
```sh
cd ~/src/my-project
cliq init
cliq assemble @local/research-team
cliq req -m "What are the competitive dynamics in the enterprise AI agent market?" \
  --input source_document_url="https://docs.google.com/document/d/abc123/edit" \
  --input output_folder_url="gdrive://1NheCF5I0pB_c4SJ1s0EJhUbsQ7cyPmrE"
cliq run
```

The pipeline pulls the Google Doc, runs the research and writing phases, verifies quality, and pushes the final report to Google Drive — all automated, all from one command.
### Step 5: Iterate
After a run, improve based on what you observed:
```sh
cliq builder improve research-team --role researcher --instruction "be more thorough about finding contrarian viewpoints"
cliq builder improve research-team --role quality-gate --instruction "add check for minimum word count of 2000"
```

Each improvement is instant. Reassemble, re-req, re-run.
## Tips

- **Be descriptive.** The more context you give the builder, the better the output. “Code review team” produces something generic. “Code review team for a React TypeScript monorepo with Playwright e2e tests, focusing on accessibility compliance and performance budgets” produces something you can actually use.
- **Use `--dry-run` first.** Especially for complex teams. Preview the structure before committing.
- **Iterate with `improve`.** The first generation is a starting point. Two or three rounds of targeted improvements — with specific instructions — produce significantly better role prompts.
- **Run `gaps` after manual edits.** If you’ve added phases or modified roles by hand, `gaps` catches structural issues you might have introduced.
- **Use `-f` for complex descriptions.** For teams with many requirements, write the description in a file and pass it with `-f`. The builder handles multi-paragraph, detailed specifications.
- **Override models for quality.** For critical teams, use a stronger model: `--provider anthropic --model claude-sonnet-4-20250514` or `--provider google --model gemini-2.5-pro`. For quick experiments, the defaults are fast and cheap.
## How It Works
Under the hood, the builder is a structured generation pipeline:
1. **Prompt composition** — Your description is combined with a comprehensive platform reference that teaches the LLM everything about cliq: phase types, channel conventions, signal protocols, pull/push semantics, gate verdicts, and the filesystem contract. The LLM doesn’t guess how cliq works — it’s told.
2. **LLM generation** — The composed prompt is sent to your configured provider. The LLM returns a structured JSON object containing the team name, description, A2A metadata, workflow phases (with dependencies, commands, pull/push), and full role prompts.
3. **Parsing** — The raw LLM output is parsed and extracted into typed structures. The parser handles JSON embedded in markdown fences, trailing commas, and other common LLM output quirks.
4. **Validation** — Every generated team is validated before installation: phase names are unique and kebab-case, role names match phase names, gate phases have commands, dependencies reference real phases, support phases are referenced by gates, and the DAG has no cycles.
5. **Serialization** — The validated team is written to `~/.cliqrc/teams/@local/<name>/` as a `team.yml`, individual role files under `roles/`, and an auto-generated `README.md`.
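As an illustration of the parsing step, here is a minimal sketch of fence-stripping and trailing-comma cleanup; the `parse_llm_json` helper is hypothetical, not cliq's actual parser:

```python
import json
import re


def parse_llm_json(raw: str) -> dict:
    """Extract a JSON object from raw LLM output.

    Handles two quirks named above: JSON wrapped in markdown code
    fences, and trailing commas before } or ]. Illustrative only.
    """
    # Pull the body out of a ```json ... ``` fence, if one is present.
    fence = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    text = fence.group(1) if fence else raw
    # Drop a comma that sits immediately before a closing brace/bracket.
    text = re.sub(r",\s*([}\]])", r"\1", text)
    return json.loads(text)


raw = '```json\n{"name": "my-team", "tags": ["code",],}\n```'
team = parse_llm_json(raw)
```

A production parser would add more recovery strategies, but fence extraction plus trailing-comma removal covers the two most common failure modes of structured LLM output.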
The same engine powers `improve` (targeted role refinement), `gaps` (structural analysis), `capability` (A2A metadata generation), and the server-side A2A builder skills.
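The cycle check in the validation step is classically a topological sort: if no complete ordering of phases exists, the DAG has a cycle. A sketch under that assumption (the `check_dag` helper and its input shape are illustrative, not cliq internals):

```python
from collections import deque


def check_dag(phases: dict[str, list[str]]) -> list[str]:
    """Return a valid execution order for a phase DAG, or raise on a cycle.

    `phases` maps each phase name to its depends_on list, the shape a
    team.yml workflow expresses. Illustrative helper, not cliq's code.
    """
    for name, deps in phases.items():
        for dep in deps:
            if dep not in phases:
                raise ValueError(f"{name} depends on unknown phase {dep}")
    # Kahn's algorithm: repeatedly run phases whose dependencies are done.
    indegree = {name: len(deps) for name, deps in phases.items()}
    ready = deque(sorted(n for n, d in indegree.items() if d == 0))
    order = []
    while ready:
        phase = ready.popleft()
        order.append(phase)
        for name, deps in phases.items():
            if phase in deps:
                indegree[name] -= 1
                if indegree[name] == 0:
                    ready.append(name)
    if len(order) != len(phases):
        raise ValueError("workflow DAG has a cycle")
    return order


order = check_dag({
    "architect": [],
    "developer": ["architect"],
    "quality-gate": ["developer"],
})
```

Phases with no dependency on each other come out in any relative order, which is exactly what lets independent analysts run in parallel.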
## Command Reference
| Command | What it does |
|---|---|
| `cliq builder generate -m <text>` / `-f <file>` / `--stdin` | Generate a complete team from a description |
| `cliq builder improve <team> --role <name>` | Improve a specific role, with an optional `--instruction` |
| `cliq builder gaps <team>` | Analyse a team for structural gaps and suggest improvements |
| `cliq builder capability <team>` | Generate or regenerate A2A metadata (`tags`, `use_when`, `not_for`) |

All commands accept `--provider <name>` and `--model <model>` overrides. See the full CLI Reference for all options.