When an agent produces a result -- code, a document, an analysis -- how do you ensure its quality? Manual review is expensive and often skipped. The /verify skill automates this critical step.

By invoking /verify, an independent verification agent reviews the deliverable with an adversarial eye. It actively looks for errors, omissions, inconsistencies, and edge cases. The result is a structured verdict: PASS, FAIL, or PARTIAL.

This process doesn't replace human validation; it prepares for it. By the time a deliverable reaches a human's desk, the obvious problems have already been caught and fixed.

1. Invocation

The primary agent completes its work and launches /verify on the result, or the skill is automatically triggered via a post-execution hook.

2. Isolation

A separate verification agent is instantiated. It has read-only access to the deliverable and its context.

3. Adversarial analysis

The verification agent actively looks for logical errors, unhandled edge cases, inconsistencies with requirements, and security issues.
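The analysis step can be sketched as a battery of independent checks run over the deliverable. This is a minimal illustration, not the actual EasyClaw v2 API: the deliverable shape, check names, and finding strings are all assumptions.

```python
def check_requirements(deliverable):
    """Flag requirements from the context that the output never mentions."""
    missing = [r for r in deliverable["requirements"]
               if r not in deliverable["output"]]
    return [f"requirement not addressed: {r}" for r in missing]

def check_todos(deliverable):
    """Flag leftover TODO markers, a common sign of unfinished work."""
    if "TODO" in deliverable["output"]:
        return ["unresolved TODO marker in output"]
    return []

def run_adversarial_checks(deliverable, checks):
    """Run every check and collect findings; an empty list means no issues."""
    findings = []
    for check in checks:
        findings.extend(check(deliverable))
    return findings

deliverable = {"requirements": ["retry logic", "logging"],
               "output": "Added logging. TODO: retries"}
print(run_adversarial_checks(deliverable, [check_requirements, check_todos]))
```

Each check is deliberately narrow: a verifier built this way is easy to extend with new failure modes without touching the ones that already work.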

4. Verdict

A structured report is produced: PASS / FAIL / PARTIAL, with detailed justification for each point.
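One way to model such a report is a list of per-point findings from which the overall verdict is derived. The field names and the aggregation rule below are illustrative assumptions, not the documented EasyClaw v2 schema.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    PASS = "PASS"
    FAIL = "FAIL"
    PARTIAL = "PARTIAL"

@dataclass
class Finding:
    point: str           # what was checked
    ok: bool             # did this point pass?
    justification: str   # detailed reason for the judgment

@dataclass
class Report:
    findings: list

    @property
    def verdict(self):
        oks = [f.ok for f in self.findings]
        if all(oks):          # every point passed
            return Verdict.PASS
        if any(oks):          # a mix of passes and failures
            return Verdict.PARTIAL
        return Verdict.FAIL   # nothing passed

report = Report([Finding("edge cases", True, "empty input handled"),
                 Finding("requirements", False, "missing retry logic")])
print(report.verdict)  # Verdict.PARTIAL
```

Deriving the verdict from the findings, rather than storing it separately, guarantees the two can never disagree.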

5. Correction

On FAIL or PARTIAL, the primary agent receives the report and can fix issues before delivery.
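The five steps above form a loop: produce, verify, and revise until the verdict is PASS or a round budget runs out. This sketch assumes the agent and verifier are simple callables; the names and the string verdicts are illustrative.

```python
def deliver_with_verification(produce, verify, revise, max_rounds=3):
    """Produce a deliverable, then loop verify -> revise until PASS
    or the round budget is exhausted."""
    deliverable = produce()
    for _ in range(max_rounds):
        verdict, report = verify(deliverable)
        if verdict == "PASS":
            return deliverable
        deliverable = revise(deliverable, report)  # fix the flagged issues
    return deliverable  # delivered with known caveats after max_rounds

# Toy run: the verifier demands the word "tested" in the deliverable.
produce = lambda: "draft"
verify = lambda d: ("PASS", []) if "tested" in d else ("FAIL", ["not tested"])
revise = lambda d, report: d + " tested"
print(deliver_with_verification(produce, verify, revise))  # "draft tested"
```

The round budget matters: without it, a verifier the agent can never satisfy would loop forever instead of escalating to a human.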

Code generation

An agent writes a complex function. /verify catches an unhandled edge case (division by zero, empty list) before the code is committed.
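This is the kind of fix such a report prompts. The `mean` function below is a hypothetical example, not taken from the product: the unguarded version crashes on an empty list, exactly the edge case an adversarial pass would flag.

```python
def mean_unsafe(values):
    # A verifier would flag this: len([]) == 0 raises ZeroDivisionError.
    return sum(values) / len(values)

def mean(values):
    """Guarded version: the empty-list edge case is handled explicitly."""
    if not values:
        return 0.0  # or raise ValueError, depending on the contract
    return sum(values) / len(values)

print(mean([2, 4, 6]))  # 4.0
print(mean([]))         # 0.0 instead of a crash
```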

Content writing

An agent produces a business document. /verify identifies an inconsistency between two quoted figures and an inappropriate tone in a paragraph.

System configuration

An agent generates a config file. /verify spots an already-used port and a missing environment variable.
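Both of those checks are mechanical enough to sketch. This version assumes a dict-shaped config and a list of required variable names; the port probe simply tries to bind the port, which fails if something already uses it.

```python
import os
import socket

def port_in_use(port, host="127.0.0.1"):
    """Try to bind the port; an OSError means it is already taken."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

def check_config(config, required_env):
    """Return a list of findings; empty means the config looks clean."""
    findings = []
    if port_in_use(config["port"]):
        findings.append(f"port {config['port']} is already in use")
    for var in required_env:
        if var not in os.environ:
            findings.append(f"missing environment variable: {var}")
    return findings

print(check_config({"port": 8080}, ["DATABASE_URL"]))
```

The output depends on the machine it runs on, which is the point: this class of error only shows up in the deployment environment, not in the text of the config file.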

  • Available for all EasyClaw v2 agents
  • Usable on demand (/verify) or in automatic mode
  • Recommended in combination with the Verification Agent for critical tasks
Adversarial Verification -- EasyClaw v2