
MCP Example 1 (ConvEngine Demo + Mockey)

This guide shows a complete local test of ConvEngine 2.0.7 advanced MCP HTTP tools using convengine-demo and mockey.

It covers:

  1. starting mock APIs in mockey
  2. wiring demo MCP tools to mockey endpoints
  3. seeding ce_mcp_tool and planner prompts
  4. testing via SQL panel + chat requests

Prerequisites

  • mockey repo available locally
  • convengine-demo repo available locally
  • Postgres running for convengine-demo
  • ConvEngine dependency 2.0.7

What is already wired in demo

convengine-demo now includes four HttpApiRequestingToolHandler mappings:

  • mock.order.submit -> POST /api/mock/order/submit
  • mock.order.status -> GET /api/mock/order/status
  • mock.order.async.trace -> GET /api/mock/order/async/trace
  • mock.customer.profile -> GET /api/mock/customer/profile

Base URL is configured by:

convengine:
  demo:
    mockey:
      base-url: http://localhost:31333
      api-key: demo-live-key
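The tool-to-endpoint wiring above can be sketched as a small lookup table. This is only an illustration of the documented routes; the real mapping lives in the HttpApiRequestingToolHandler beans in convengine-demo, and the `resolve` helper here is hypothetical:

```python
from urllib.parse import urljoin

# Illustrative sketch of the tool-code -> endpoint mapping listed above.
# The actual wiring is done by HttpApiRequestingToolHandler in convengine-demo;
# this dict only mirrors the documented routes.
BASE_URL = "http://localhost:31333"  # convengine.demo.mockey.base-url

TOOL_ROUTES = {
    "mock.order.submit":      ("POST", "/api/mock/order/submit"),
    "mock.order.status":      ("GET",  "/api/mock/order/status"),
    "mock.order.async.trace": ("GET",  "/api/mock/order/async/trace"),
    "mock.customer.profile":  ("GET",  "/api/mock/customer/profile"),
}

def resolve(tool_code: str) -> tuple[str, str]:
    """Return (HTTP method, full URL) for a documented mock tool code."""
    method, path = TOOL_ROUTES[tool_code]
    return method, urljoin(BASE_URL, path)
```

For example, `resolve("mock.order.status")` yields `("GET", "http://localhost:31333/api/mock/order/status")`.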

Step 1: Start mockey

From mockey root:

npm install
npm start

Expected log:

listening to 31333

Step 2: Quick endpoint smoke tests

curl -s "http://localhost:31333/api/mock/order/status?orderId=ORD-7017"
curl -s "http://localhost:31333/api/mock/order/async/trace?orderId=ORD-7017"
curl -s "http://localhost:31333/api/mock/customer/profile?customerId=CUST-1001"
curl -s -X POST "http://localhost:31333/api/mock/order/submit" -H "Content-Type: application/json" -d '{"orderId":"ORD-7017","customerId":"CUST-1001","submittedByRole":"ADMIN"}'

Step 3: Start convengine-demo

From convengine-demo root:

./mvnw spring-boot:run

Step 4: Seed MCP tools and planner prompts (SQL panel)

Run convengine-demo/src/main/resources/sql/seed.sql.

That seed includes the live MCP tools and ORDER_DIAGNOSTICS response/rule wiring (ANALYZE -> COMPLETED via POST_AGENT_MCP when context.mcp.finalAnswer exists):

INSERT INTO ce_mcp_tool (tool_id, tool_code, tool_group, intent_code, state_code, enabled, description)
VALUES
(3, 'mock.order.submit', 'HTTP_API', 'ORDER_DIAGNOSTICS', 'ANALYZE', true, 'Submit order through mockey live API'),
(4, 'mock.order.status', 'HTTP_API', 'ORDER_DIAGNOSTICS', 'ANALYZE', true, 'Fetch order status from mockey live API'),
(5, 'mock.order.async.trace', 'HTTP_API', 'ORDER_DIAGNOSTICS', 'ANALYZE', true, 'Fetch async callback trace from mockey live API'),
(6, 'mock.customer.profile', 'HTTP_API', 'ORDER_DIAGNOSTICS', 'ANALYZE', true, 'Fetch customer profile from mockey live API');
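A quick way to sanity-check what the planner will see after seeding is to query for enabled tools scoped to the intent/state. The sketch below replays the seed against a throwaway in-memory SQLite database (column set taken from the INSERT above; the real demo runs on Postgres):

```python
import sqlite3

# Throwaway in-memory check mirroring the ce_mcp_tool seed above.
# Columns come from the INSERT statement; the real demo schema is Postgres.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ce_mcp_tool (
    tool_id INTEGER PRIMARY KEY, tool_code TEXT, tool_group TEXT,
    intent_code TEXT, state_code TEXT, enabled BOOLEAN, description TEXT)""")
conn.executemany(
    "INSERT INTO ce_mcp_tool VALUES (?, ?, 'HTTP_API', 'ORDER_DIAGNOSTICS', 'ANALYZE', 1, ?)",
    [
        (3, "mock.order.submit", "Submit order through mockey live API"),
        (4, "mock.order.status", "Fetch order status from mockey live API"),
        (5, "mock.order.async.trace", "Fetch async callback trace from mockey live API"),
        (6, "mock.customer.profile", "Fetch customer profile from mockey live API"),
    ],
)

# The planner should see exactly the enabled tools for ORDER_DIAGNOSTICS / ANALYZE.
rows = conn.execute(
    "SELECT tool_code FROM ce_mcp_tool "
    "WHERE intent_code = 'ORDER_DIAGNOSTICS' AND state_code = 'ANALYZE' AND enabled = 1 "
    "ORDER BY tool_id"
).fetchall()
tool_codes = [r[0] for r in rows]
```

If any of the four `mock.*` codes is missing from `tool_codes`, re-check the seed before testing chat requests.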

Then seed planner prompts (framework seed) if your environment uses scoped MCP planner rows:

  • Postgres: convengine/src/main/resources/sql/mcp_planner_seed.sql (or mcp_planner_seed_postgres.sql)
  • SQLite: convengine/src/main/resources/sql/mcp_planner_seed_sqlite.sql

Step 5: Run as chat-style walkthrough

Turn 1 - User
Order ORD-7017 was submitted by admin, callback is still null. Can you check status and trace?
Turn 1 - Assistant (internal MCP)
intent: ORDER_DIAGNOSTICS, state: ANALYZE
Calling tools: mock.order.status -> mock.order.async.trace
Turn 1 - Final Assistant Output
intent: ORDER_DIAGNOSTICS, state: COMPLETED
Order ORD-7017 is submitted. Async callback is pending/missing, so no callback timestamp is available yet.

Step 6: Validate advanced HTTP behavior

HttpApiRequestingToolHandler calls are executed by the framework's HttpApiToolInvoker.

The observation payload includes:

  • status
  • attempt
  • latencyMs
  • mapped

Expected shape:

{
  "status": 200,
  "attempt": 1,
  "latencyMs": 40,
  "mapped": {
    "orderId": "ORD-7017",
    "status": "SUBMITTED",
    "api4AsyncStatus": null
  }
}
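To validate observations against this shape programmatically, a small checker is enough. The field names (`status`, `attempt`, `latencyMs`, `mapped`) come from this guide; the `check_observation` helper itself is illustrative, not part of ConvEngine:

```python
import json

# Sketch: validate that a tool observation matches the shape documented above.
EXPECTED_TOP_LEVEL = {"status", "attempt", "latencyMs", "mapped"}

def check_observation(raw: str) -> dict:
    """Parse an observation payload and return its mapped fields."""
    obs = json.loads(raw)
    missing = EXPECTED_TOP_LEVEL - obs.keys()
    if missing:
        raise ValueError(f"observation missing fields: {sorted(missing)}")
    if obs["status"] != 200:
        raise ValueError(f"tool call failed with HTTP {obs['status']}")
    return obs["mapped"]

sample = '''{
  "status": 200, "attempt": 1, "latencyMs": 40,
  "mapped": {"orderId": "ORD-7017", "status": "SUBMITTED", "api4AsyncStatus": null}
}'''
mapped = check_observation(sample)
```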

Step 7: Verify MCP audit stages

Check audit timeline in audit APIs for:

  • TOOL_ORCHESTRATION_REQUEST
  • TOOL_ORCHESTRATION_RESULT
  • MCP_TOOL_CALL
  • MCP_TOOL_RESULT
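A simple presence check over the returned timeline catches most wiring gaps. The stage names come from the list above; the example timeline is hand-written, not real audit API output, and it assumes each tool call produces a matching result entry:

```python
# Sketch: given audit stage names for one turn, check that all four MCP stages
# are present and that every MCP_TOOL_CALL has a matching MCP_TOOL_RESULT.
REQUIRED_STAGES = {
    "TOOL_ORCHESTRATION_REQUEST",
    "TOOL_ORCHESTRATION_RESULT",
    "MCP_TOOL_CALL",
    "MCP_TOOL_RESULT",
}

def audit_ok(timeline: list[str]) -> bool:
    missing = REQUIRED_STAGES - set(timeline)
    calls = timeline.count("MCP_TOOL_CALL")
    results = timeline.count("MCP_TOOL_RESULT")
    return not missing and calls == results

# Hand-written example with two tool calls in one turn.
timeline = [
    "TOOL_ORCHESTRATION_REQUEST",
    "MCP_TOOL_CALL", "MCP_TOOL_RESULT",
    "MCP_TOOL_CALL", "MCP_TOOL_RESULT",
    "TOOL_ORCHESTRATION_RESULT",
]
```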

MCP Output to Final Response (Deep Dive)

This is the exact handoff path from MCP tools to response text/json in the same request turn:

  1. McpToolStep clears stale context.mcp.* at start of turn.
  2. Planner (McpPlanner) receives user_input, context, mcp_tools, and current mcp_observations.
  3. Each CALL_TOOL result is appended into context.mcp.observations[] as {toolCode, json}.
  4. When planner returns ANSWER, step writes:
    • context.mcp.finalAnswer
    • input param mcp_final_answer
  5. RulesStep runs POST_AGENT_MCP phase; when context.mcp.finalAnswer exists, rule transitions ANALYZE -> COMPLETED.
  6. ResponseResolutionStep selects ce_response for current intent/state.
  7. If selected response is DERIVED, resolver selects matching ce_prompt_template and invokes LLM with:
    • rendered prompt
    • derivation_hint from ce_response
    • session.contextDict() as context (includes the MCP block)
  8. Final assistant output is audited (ASSISTANT_OUTPUT) and persisted.
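Steps 1 through 4 can be sketched as a minimal planner loop. The `plan` function stands in for McpPlanner and the tool call is faked; only the `context.mcp` bookkeeping mirrors the documented flow:

```python
# Minimal sketch of the handoff above. plan() stands in for McpPlanner and the
# tool call is faked; only the context.mcp bookkeeping mirrors the real flow.
def run_mcp_step(context: dict, user_input: str, call_tool, plan) -> dict:
    context["mcp"] = {"observations": []}           # 1. clear stale context.mcp.*
    while True:
        action = plan(user_input, context)          # 2. planner sees observations so far
        if action["type"] == "CALL_TOOL":
            obs = call_tool(action["toolCode"])
            context["mcp"]["observations"].append(  # 3. append {toolCode, json}
                {"toolCode": action["toolCode"], "json": obs})
        else:                                       # 4. ANSWER -> finalAnswer
            context["mcp"]["finalAnswer"] = action["answer"]
            return context

# Fake planner: ask for order status once, then answer.
def fake_plan(user_input, context):
    if not context["mcp"]["observations"]:
        return {"type": "CALL_TOOL", "toolCode": "mock.order.status"}
    return {"type": "ANSWER", "answer": "Order ORD-7017 is submitted; callback pending."}

ctx = run_mcp_step({}, "check ORD-7017",
                   lambda code: {"status": "SUBMITTED"}, fake_plan)
```

After the loop, `ctx["mcp"]["finalAnswer"]` is what the POST_AGENT_MCP rule in step 5 checks for.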
Why this works

Even if your template does not explicitly reference mcp_final_answer, it still receives full context JSON. If context.mcp.observations and context.mcp.finalAnswer exist, response derivation can use them.
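This effect can be seen with a toy render step. The `render` helper below is illustrative (the real resolver lives in ResponseResolutionStep); the point is that a template mentioning only `{{context}}` still carries the MCP block into the prompt:

```python
import json

# Sketch of why derivation still sees MCP output: the {{context}} placeholder
# is filled with the full session context JSON. render() is illustrative only.
def render(template: str, variables: dict) -> str:
    out = template
    for name, value in variables.items():
        out = out.replace("{{" + name + "}}", value)
    return out

context = {"mcp": {"observations": [{"toolCode": "mock.order.status",
                                     "json": {"status": "SUBMITTED"}}],
                   "finalAnswer": "Order ORD-7017 is submitted; callback pending."}}

# The template never mentions mcp_final_answer, yet the rendered prompt
# carries both the observations and the final answer.
prompt = render("Context JSON:\n{{context}}\n\nSummarize.",
                {"context": json.dumps(context)})
```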

Turn-by-Turn E2E (Tab View)

Natural language planner path (single request turn)

| Loop/Step | What LLM deduces | Tables touched | Artifact produced |
| --- | --- | --- | --- |
| Intent+state phase | Question is an order diagnostics request in tool-eligible scope. | ce_intent(R), ce_intent_classifier(R), ce_rule(R), ce_audit(W) | resolved intent/state |
| MCP loop #1 | Need order status first. | ce_mcp_tool(R), ce_mcp_planner(R), ce_audit(W) | CALL_TOOL mock.order.status |
| Tool exec #1 | HTTP mapped status payload available. | ce_audit(W) | context.mcp.observations[0] |
| MCP loop #2 | Need callback trace to confirm async issue. | ce_mcp_tool(R), ce_mcp_planner(R), ce_audit(W) | CALL_TOOL mock.order.async.trace |
| Tool exec #2 | Trace confirms callback missing/pending. | ce_audit(W) | context.mcp.observations[1] |
| MCP loop #3 | Enough evidence; produce conclusion. | ce_mcp_planner(R), ce_audit(W) | context.mcp.finalAnswer + mcp_final_answer |
| Post-MCP rule | When context.mcp.finalAnswer exists, move ANALYZE to COMPLETED. | ce_rule(R), ce_audit(W) | state=COMPLETED |
| Response resolution | Render user-facing answer with MCP evidence. | ce_response(R), ce_prompt_template(R), ce_audit(W) | final payload |

Keep it simple:

  • direct tool_request path uses inputParams.tool_result
  • planner path uses context.mcp.observations and context.mcp.finalAnswer

Use these SQL updates so prompt behavior is explicit:

-- Direct tool_request path (ToolOrchestrationStep output)
UPDATE ce_prompt_template
SET user_prompt = 'User input: {{user_input}}\nTool result: {{tool_result}}\nContext: {{context}}\nSummarize status and next action.'
WHERE intent_code = 'FAQ' AND state_code = 'IDLE' AND response_type = 'JSON';

-- Planner path (McpToolStep output)
UPDATE ce_prompt_template
SET user_prompt = 'Context JSON:\n{{context}}\n\nRead context.mcp.observations and context.mcp.finalAnswer. Produce a concise diagnostic summary.'
WHERE intent_code = 'ORDER_DIAGNOSTICS' AND state_code = 'COMPLETED' AND response_type = 'TEXT';

ORDER_DIAGNOSTICS is now included in the demo seed, with POST_AGENT_MCP completion and a DERIVED final response built from context.mcp.finalAnswer.

ReactFlow View

MCP Live Request Execution

[Diagram: planner loop, tool observations, and response resolution handoff.]

Optional planner-driven test

After deterministic tool_request tests pass, ask natural prompts without tool_request, for example:

  • Order ORD-7017 was submitted by admin but async callback is null. Check status and trace.

Planner should choose one or more mock.* tools based on descriptions and context.
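The real planner is LLM-driven, but a naive keyword-overlap ranking illustrates why good `ce_mcp_tool.description` text matters for tool choice. Everything below is a hypothetical sketch, not ConvEngine code:

```python
# Naive sketch of description-based tool selection. The real planner is
# LLM-driven; this keyword overlap only shows why descriptions matter.
TOOLS = {
    "mock.order.submit": "Submit order through mockey live API",
    "mock.order.status": "Fetch order status from mockey live API",
    "mock.order.async.trace": "Fetch async callback trace from mockey live API",
    "mock.customer.profile": "Fetch customer profile from mockey live API",
}

def rank_tools(prompt: str) -> list[str]:
    """Rank tool codes by word overlap between prompt and description."""
    words = {w.strip(".,?") for w in prompt.lower().split()}
    return sorted(
        TOOLS,
        key=lambda code: len(words & set(TOOLS[code].lower().split())),
        reverse=True,
    )

ranked = rank_tools("Order ORD-7017 was submitted by admin but async callback "
                    "is null. Check status and trace.")
```

For the example prompt, the trace and status tools rank highest, matching the calls the planner is expected to make.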

Troubleshooting

  • No HttpApiToolHandler...:
    • ensure convengine-demo depends on ConvEngine 2.0.7
    • ensure handler toolCode() matches DB ce_mcp_tool.tool_code
  • API not called:
    • confirm mockey running on 31333
    • confirm convengine.demo.mockey.base-url
  • tool not found:
    • re-run seed.sql
    • verify ce_mcp_tool.enabled=true