/camel-verify is the runtime verification orchestrator that validates that generated integrations actually work. Through a 3-phase feedback loop with error classification and automated fixes, the AI ensures your integration builds, passes integration tests, and handles failures gracefully.
The output is a verified, working integration ready for deployment.
When to Use
Invoke /camel-verify when you:
Want to validate a generated integration works at runtime
Encounter build failures, runtime errors, or test failures
Need to troubleshoot an existing integration
Want automated diagnosis and fixing of common issues
Auto-invocation: After /camel-execute completes, the AI automatically invokes /camel-verify. You can also invoke it standalone for troubleshooting.
The Verification Loop
The verification process runs three phases in sequence. If any phase fails, the AI classifies the error, fixes it, and retries (up to 15 attempts per phase). An environment probe runs before verification (as the first step of camel-execute) to catch dependency, service, and startup issues before code is generated.
Phase 1: Build
Goal: Compile the integration and resolve all Maven dependencies.
What Happens
The AI runs the Maven build:
./mvnw clean package -DskipTests
Why skip tests? This phase only verifies that the code compiles; tests run in Phase 2. For JBang-based integrations, this phase is skipped entirely, since JBang compiles on the fly.
Verification
The AI checks the Maven exit code and output:
Success:
[INFO] BUILD SUCCESS
[INFO] Total time: 45.321 s
[INFO] Finished at: 2026-04-23T14:32:10Z
Build: PASS
Common Build Errors
1. Missing Dependency
Error: package org.apache.camel.component.kafka does not exist
Diagnosis: Missing camel-kafka dependency
Fix: Adding to pom.xml...
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-kafka</artifactId>
</dependency>
Retrying build...
2. Version Conflict
Error: Dependency convergence error for org.apache.camel:camel-core
Diagnosis: Multiple Camel versions detected
Fix: Enforcing version via BOM...
<dependencyManagement>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-bom</artifactId>
<version>4.14.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencyManagement>
Retrying build...
3. YAML Syntax Error
Error: Cannot parse route file: order-validation.camel.yaml
Line 12: Unexpected token
Diagnosis: YAML syntax error
Fix: Invoking camel-validate skill to repair...
(camel-validate analyzes and fixes YAML)
Retrying build...
4. Java Version
Error: Source option 17 is no longer supported. Use 21 or later.
Diagnosis: Java version mismatch
Fix: Updating pom.xml maven.compiler properties...
<maven.compiler.source>21</maven.compiler.source>
<maven.compiler.target>21</maven.compiler.target>
Retrying build...
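To illustrate the YAML repair case above: camel-validate repairs route files toward syntactically valid Camel YAML DSL. A minimal well-formed route might look like the sketch below (the endpoint URIs and log message are illustrative, not taken from the generated project):

```yaml
# Minimal well-formed Camel YAML route (endpoint names are illustrative)
- route:
    id: order-validation
    from:
      uri: "kafka:orders"
      steps:
        - log: "Received order: ${body}"
        - to: "kafka:orders.valid"
```

The common syntax errors camel-validate fixes are of the kind seen above: wrong indentation under steps, a missing `-` before a step, or an unquoted URI containing special characters.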
Retry Strategy
The AI retries the build up to 15 times, applying different fixes based on error patterns.
Phase 2: Test Verification
Goal: Execute Citrus YAML integration tests to verify the integration builds, starts, and behaves correctly.
What Happens
The AI runs Citrus integration tests using the Camel JBang test plugin:
camel test run *.it.yaml
These tests are self-contained: Testcontainers automatically start external services (databases, message brokers), camel:jbang:run starts the application, and send/receive actions validate behavior. There is no need for a separate startup or environment setup phase.
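As a rough sketch of such a test, a send/receive pair against a Kafka endpoint might look like the following. The exact keys depend on the Citrus YAML DSL version in use, so treat the structure and names here as assumptions rather than a definitive reference:

```yaml
# Hypothetical *.it.yaml sketch -- key names and endpoint references
# are assumptions, not a definitive Citrus reference
name: OrderValidationIT
actions:
  # Publish an invalid order to the input topic
  - send:
      endpoint: kafkaOrdersEndpoint        # assumed endpoint name
      message:
        body: '{"orderId": 1, "valid": false}'
  # Expect it to be routed to the invalid-orders topic
  - receive:
      endpoint: kafkaInvalidEndpoint       # assumed endpoint name
      timeout: 10000                       # generous timeout for async processing
      message:
        body: '{"orderId": 1, "valid": false}'
```

The `timeout` on the receive action is the knob the AI turns when diagnosing timing-related failures.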
Common Test Failures
1. Route Logic Error
Test: testInvalidOrderHandling
Expected: Message on 'orders.invalid' topic
Actual: No message received
Diagnosis: Validation route not routing to invalid topic
Fix: Reviewing acceptance criteria for Task 2...
Criterion: "Invalid orders → send to Kafka topic orders.invalid"
Checking order-validation.camel.yaml...
Issue: Missing route to Kafka topic
Fix: Invoking camel-implement to regenerate route...
(Regenerates order-validation.camel.yaml with invalid routing)
Retrying tests...
2. Timing Issue
Test: testKafkaPublish
Error: Timeout waiting for Kafka message
Diagnosis: Test timeout too short for async processing
Fix: Increasing test timeout in Citrus YAML test...
Retrying tests...
3. Test Re-generation
Test: testDatabaseFailureRetry
Error: Test expectations don't match actual behavior
Diagnosis: Test assumptions are incorrect — the test needs updating
Fix: Invoking camel-test to regenerate test...
(Regenerates the Citrus YAML test with corrected expectations)
Retrying tests...
4. Persistent Architectural Failure (Re-plan)
Test: testValidOrderProcessing
Error: Route architecture cannot satisfy the acceptance criteria
Diagnosis: Persistent failure across multiple fix attempts —
the TDD plan itself needs modification
Fix: Triggering re-plan to modify the TDD structure...
(Automatically adjusts the task decomposition and test expectations)
Retrying from Phase 1...
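For the testInvalidOrderHandling failure above, the regenerated route needs a content-based router that sends invalid orders to the orders.invalid topic. In Camel YAML DSL that shape looks roughly like this (the topic names follow the acceptance criterion; the validity-check expression is an assumption about the message body):

```yaml
# Illustrative regenerated route -- the ${body[valid]} check assumes
# the order body is a map with a boolean "valid" field
- route:
    id: order-validation
    from:
      uri: "kafka:orders"
      steps:
        - choice:
            when:
              - simple: "${body[valid]} == false"
                steps:
                  - to: "kafka:orders.invalid"   # criterion: invalid orders here
            otherwise:
              steps:
                - to: "kafka:orders.valid"
```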
Fix Routing
Based on error classification, the AI routes each fix to the appropriate target:
BUILD_ERROR → build system, dependency manager, camel-validate skill
RUNTIME_ERROR → environment manager, Testcontainers configuration
TEST_ERROR → camel-implement, camel-test, or re-plan
Phase 3: Report Generation
Goal: Summarize verification results and provide actionable insights.
If verification fails after exhausting its retry budget, the AI generates a report like this:
# Verification Report: Order Processing Integration
**Status:** ✗ FAILED
## Summary
- Build: ✓ PASS (2 attempts)
- Test Verification: ✗ FAIL (15 attempts)
## Error Details
**Phase:** Test Verification
**Error Type:** TEST_ERROR
**Error Message:** testInvalidOrderHandling — route logic does not match acceptance criteria
**Attempted Fixes:**
1. Regenerated route via camel-implement (attempt 1)
2. Regenerated test via camel-test (attempt 5)
3. Triggered re-plan for TDD modification (attempt 10)
...
**Root Cause:** The acceptance criterion conflicts with the actual data model: the test expects
a field that does not exist in the source schema.
**Manual Fix Required:** Review the Design Specification acceptance criteria and update the
data model or test expectations accordingly.
After applying the manual fix, run:
/camel-verify
## Partial Results
- Build: Successful
- Tests: 3/4 passing, 1 failing
## Recommendations
1. Review the acceptance criteria for order validation
2. Verify the source schema includes all expected fields
3. Consider running `/camel-test` to regenerate tests after schema updates
Error Classification System
The AI classifies every error into one of four categories and routes fixes to the appropriate target:
1. BUILD_ERROR
Indicators:
Maven compilation errors
Missing dependencies
YAML syntax errors
Java source errors
Fix Strategy:
Add missing dependencies to pom.xml
Repair YAML with camel-validate skill
Update Java code (rare for generated code)
Resolve version conflicts with BOM
Routed To: Build system, dependency manager, camel-validate skill
2. RUNTIME_ERROR
Indicators:
Application fails to start (detected by Citrus test runner)
Routed To: Environment manager, Testcontainers configuration
3. TEST_ERROR
Indicators:
Citrus test failures: assertion mismatches, timeouts, missing messages
Fix Strategy:
Regenerate the route with camel-implement
Regenerate the test with camel-test
Trigger a re-plan for persistent architectural failures
Routed To: camel-implement, camel-test, re-plan
Retry Budget
Each phase has a retry budget of 15 attempts.
Retry Strategy:
Attempt 1: Try original approach
Attempt 2: Apply simple fix (restart service)
Attempt 3: Apply configuration fix
Attempt 4: Apply code fix
Attempt 5: Apply environment fix
...
Attempt 15: Last attempt
→ If still failing, give up and report
Exponential Backoff:
Between retries, the AI waits:
Attempts 1-3: 5 seconds
Attempts 4-7: 10 seconds
Attempts 8-12: 30 seconds
Attempts 13-15: 60 seconds
This gives external services time to recover.
Standalone Invocation
You can invoke /camel-verify standalone for troubleshooting:
Use Case 1: After Manual Code Changes
(You manually edit a route)
/camel-verify
→ Runs full 3-phase verification
→ Reports if your changes broke anything
Use Case 2: Test Failures
You: My integration used to work but now tests fail
/camel-verify
→ Phase 1: Rebuild (detects updated code)
→ Phase 2: Runs tests (identifies failure)
→ Fix: Regenerates route or test as needed
→ Phase 3: Reports results
Use Case 3: New Test Scenarios
(You add a new Citrus YAML test)
/camel-verify --phase=2
→ Skips Phase 1 (already built)
→ Runs only Phase 2 (Test Verification via camel test run)
→ Reports test results
Environment-in-the-Loop Concept
/camel-verify is “environment-in-the-loop” verification: it doesn’t just check code, it actually runs the integration with real databases, message brokers, and HTTP endpoints via Testcontainers.
Why This Matters:
Catches real issues: Code might compile but fail at runtime
Tests behavior: Not just unit tests, but full integration tests with real services
Prevents surprises: Find issues now, not in production
Self-contained: Testcontainers manage the service lifecycle – no manual Docker Compose setup needed
Contrast with traditional testing:
Unit tests: Mock everything (no environment)
Integration tests: Run against real services managed by Testcontainers (environment-in-the-loop)
Camel-Kit uses Citrus YAML integration tests with Testcontainers for verification because integrations are, by definition, about connecting systems. Docker Compose files are still generated as user artifacts for local development, but they are not used during the verification loop.
Graceful Degradation
If tools are unavailable, the AI adapts:
No Docker
Warning: Docker not available.
Skipping test verification phase (Testcontainers requires Docker).
Note: Without Docker, integration tests cannot start external services.
Consider running on a system with Docker installed.
Proceeding to report phase with partial results...
No Maven Wrapper
Warning: ./mvnw not found. Skipping build verification phase.
Proceeding to test verification...
No Camel Test CLI
Warning: camel test command not available.
Skipping test verification phase.
Note: Without the Camel JBang test plugin, we cannot run integration tests.
Consider installing Camel JBang: https://camel.apache.org/manual/camel-jbang.html
No Citrus Tests
Warning: No Citrus YAML tests found (*.it.yaml)
Skipping test verification phase.
Note: Without tests, we cannot verify integration behavior.
Consider generating tests with the /camel-test skill.
The AI continues with available tools, warning about limitations.
Summary
/camel-verify validates integrations through a 3-phase feedback loop:
Build - Compile and resolve dependencies (skipped for JBang)
Test Verification - Run Citrus YAML tests via camel test run with Testcontainers
Report Generation - Summarize results and provide insights