FAQ
What version is the current v2 documentation aligned to?
The current v2 docs track the active repository state and the currently released line, 2.0.9.
That includes newer v2 additions such as:
- CorrectionStep
- ce_mcp_planner
- ce_verbose
- extra rule phases like POST_SCHEMA_EXTRACTION and PRE_AGENT_MCP
- shared Thymeleaf-backed prompt rendering
Is v1 still available?
Yes. v1 docs are still available from the version selector.
Use v1 only if you are maintaining an existing older integration. New work should target the current v2 line.
What is the fastest mental model for ConvEngine?
ConvEngine is a deterministic, database-configured conversation engine.
In practice:
- Java code provides the runtime pipeline
- ce_* tables provide domain behavior
- rules and scoped rows decide transitions
- the LLM is a bounded subsystem inside that pipeline, not the whole engine
Does the engine rely only on intent classification?
No.
Current v2 uses a broader turn-routing model:
- DialogueActStep interprets turn intent like AFFIRM, NEGATE, EDIT, QUESTION, RESET, and ANSWER
- InteractionPolicyStep applies deterministic routing
- CorrectionStep can keep a turn in-place for confirm/edit/retry flows
- only then does the engine continue into intent, schema, tools, rules, and response resolution
How does the engine understand messages like "yes", "no", "retry", or "change email"?
It is not just keyword matching against business intents.
Those turns are primarily handled by:
- DialogueActStep
- InteractionPolicyStep
- CorrectionStep
- pending-action runtime state
- prompt-template interaction metadata
That is why modern v2 can confirm, reject, retry, or patch fields without forcing a full reclassification every time.
Does StateGraphStep mutate state?
No. The current implementation is validate-only.
It checks whether the current transition path is acceptable and can set state-graph validity signals for downstream behavior, but it does not directly rewrite the state by itself.
Can I configure behavior without changing Java code?
Yes. That is the normal operating model.
Most consumer behavior should be driven through configuration and data:
- ce_intent
- ce_intent_classifier
- ce_output_schema
- ce_prompt_template
- ce_rule
- ce_response
- ce_pending_action
- ce_mcp_tool
- ce_mcp_planner
- ce_verbose
- convengine.flow.* and convengine.mcp.* properties
Custom Java should be reserved for:
- LLM provider integration
- tool handlers / executors
- task execution
- app-specific transformers and policy hooks
Do ce_mcp_tool.intent_code and state_code still support null wildcard behavior?
No.
That older description is no longer correct for the current v2 line.
Current behavior:
- intent_code is required
- state_code is required
- valid values are an exact scope, ANY, or UNKNOWN
- invalid scope rows are blocked by startup validation
The same explicit-scope model also applies to ce_mcp_planner.
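As a sketch, explicit scope rows might look like this. Only intent_code and state_code are described in this FAQ; the other column names and all values below are hypothetical placeholders:

```sql
-- Illustrative only: apart from intent_code and state_code, the columns
-- and values here are hypothetical placeholders, not the real schema.
INSERT INTO ce_mcp_tool (tool_code, intent_code, state_code)
VALUES
  ('lookup_order', 'ORDER_STATUS', 'COLLECTING'),  -- exact scope (preferred)
  ('lookup_order', 'ORDER_STATUS', 'ANY'),         -- intent-wide scope
  ('help_search',  'UNKNOWN',      'ANY');         -- for unclassified turns

-- A NULL in intent_code or state_code is no longer a wildcard; such rows
-- are rejected by startup validation in the current v2 line.
```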
What is ce_mcp_planner for?
ce_mcp_planner is the scoped planner prompt source for MCP.
It lets the framework choose planner prompts by intent/state rather than relying only on legacy config keys.
Current fallback order is:
- exact intent_code + state_code
- exact intent_code + ANY
- ANY + ANY
- legacy fallback config when planner rows are unavailable
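The fallback order can be sketched in plain Java. PlannerRow and resolvePrompt are illustrative names, not the framework's real API; the sketch only demonstrates the scope-resolution order described above:

```java
import java.util.List;
import java.util.Optional;

// Sketch of the ce_mcp_planner fallback order: exact scope, then
// intent-wide, then global, then (outside this sketch) legacy config keys.
public class PlannerFallbackSketch {

    // Hypothetical stand-in for a ce_mcp_planner row.
    record PlannerRow(String intentCode, String stateCode, String prompt) {}

    static Optional<String> resolvePrompt(List<PlannerRow> rows,
                                          String intent, String state) {
        String[][] scopes = {
            {intent, state},   // 1. exact intent_code + state_code
            {intent, "ANY"},   // 2. exact intent_code + ANY
            {"ANY", "ANY"}     // 3. ANY + ANY
        };
        for (String[] scope : scopes) {
            for (PlannerRow r : rows) {
                if (r.intentCode().equals(scope[0])
                        && r.stateCode().equals(scope[1])) {
                    return Optional.of(r.prompt());
                }
            }
        }
        // Empty result: caller falls back to legacy config keys.
        return Optional.empty();
    }
}
```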
What is the difference between ToolOrchestrationStep and McpToolStep?
They are different execution models.
ToolOrchestrationStep:
- executes exactly one requested tool
- works from tool_request
- writes tool_result, tool_status, and context.mcp.toolExecution.*
- runs POST_TOOL_EXECUTION rules
McpToolStep:
- runs planner-driven MCP loops
- can call multiple tools across a bounded loop
- stores planner observations in context.mcp.observations
- stores planner answer in context.mcp.finalAnswer
- runs PRE_AGENT_MCP and POST_AGENT_MCP rule paths
When should I use ANY scope?
Only when the configuration is truly global.
Use exact intent/state scope first. Move to ANY only when:
- the behavior is genuinely shared
- the blast radius is acceptable
- the row still makes sense across all intended flows
Overusing ANY is one of the easiest ways to create broad but subtle misbehavior.
What is ce_verbose, and do I need it?
ce_verbose is a runtime progress/error messaging table.
It is strongly recommended in current v2 because it:
- gives the UI and QA a readable progress layer
- helps operators understand skipped or degraded paths
- complements raw audit events
It is not required for the engine to work, but most serious deployments should use it.
What are the most important rule phases now?
Current v2 phases are:
- POST_DIALOGUE_ACT
- POST_SCHEMA_EXTRACTION
- PRE_AGENT_MCP
- PRE_RESPONSE_RESOLUTION
- POST_AGENT_INTENT
- POST_AGENT_MCP
- POST_TOOL_EXECUTION
Legacy names are still normalized, but new configurations should use the current phase names.
Can rules change more than state and intent?
Yes.
The rule engine can now do more than classic transition logic. Current actions include:
- SET_STATE
- SET_INTENT
- SET_DIALOGUE_ACT
- SET_INPUT_PARAM
- SET_JSON
- SET_TASK
- GET_CONTEXT
- GET_SCHEMA_JSON
- GET_SESSION
That makes rules more powerful, but also easier to misuse. Keep rule ownership disciplined.
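For illustration only, a rule wiring one of these actions to a phase might look like the row below. Apart from the phase and action names, every column and value here is a hypothetical placeholder, since this FAQ does not show the real ce_rule schema:

```sql
-- Hypothetical ce_rule row: only the phase (POST_SCHEMA_EXTRACTION) and
-- action (SET_STATE) names come from this FAQ; columns are placeholders.
INSERT INTO ce_rule (phase, action, action_value, intent_code, state_code)
VALUES ('POST_SCHEMA_EXTRACTION', 'SET_STATE', 'CONFIRMING',
        'BOOKING', 'COLLECTING');
```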
How should I think about ce_prompt_template in current v2?
As both prompt content and runtime behavior metadata.
In 2.0.9+, prompt rows should also describe turn semantics using:
- interaction_mode
- interaction_contract
That is what allows the framework to safely interpret whether a state supports:
- affirm
- edit
- retry
- reset
- structured input collection
Is prompt rendering still just {{var}} substitution?
No.
Current v2 uses a shared Thymeleaf-backed renderer.
Supported patterns include:
- {{var}}
- #{...}
- [${...}]
This rendering path is used across prompt templates and ce_verbose messages.
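As an illustration of the first pattern only, a {{var}} substitution pre-pass could look like the sketch below. This is not the framework's actual renderer, which is Thymeleaf-backed and also evaluates the #{...} and ${...} expression forms:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative {{var}} pre-pass; the real ConvEngine renderer is
// Thymeleaf-backed and handles the other expression patterns as well.
public class VarSubstitutionSketch {
    private static final Pattern VAR = Pattern.compile("\\{\\{(\\w+)}}");

    static String substitute(String template, Map<String, String> vars) {
        Matcher m = VAR.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // Leave unknown variables untouched rather than failing the turn.
            String replacement = vars.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```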
What are the most important runtime endpoints for debugging?
The core ones are:
- POST /api/v1/conversation/message
- GET /api/v1/conversation/audit/{conversationId}
- GET /api/v1/conversation/audit/{conversationId}/trace
- POST /api/v1/cache/refresh
- GET /api/v1/cache/analyze
If experimental SQL generation is enabled:
- POST /api/v1/conversation/experimental/generate-sql
- POST /api/v1/conversation/experimental/generate-sql/zip
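For example, a debugging session might exercise the core endpoints like this. Only the paths come from this FAQ; the host, port, and request body shape are assumptions:

```shell
# Host/port and payload shape are assumptions; paths are from this FAQ.
BASE=http://localhost:8080

# Send a turn, then inspect what the engine actually did with it.
curl -s -X POST "$BASE/api/v1/conversation/message" \
  -H "Content-Type: application/json" \
  -d '{"conversationId": "demo-1", "message": "yes"}'

# Full audit trail and step-by-step trace for that conversation.
curl -s "$BASE/api/v1/conversation/audit/demo-1"
curl -s "$BASE/api/v1/conversation/audit/demo-1/trace"

# Refresh configuration caches after changing ce_* rows.
curl -s -X POST "$BASE/api/v1/cache/refresh"
```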
What is the most important production safeguard the framework does not provide automatically?
Conversation-level concurrency control.
The framework does not give you built-in optimistic locking on ce_conversation, so you should prevent parallel active turns for the same conversationId.
This is still the most important operational safeguard for correctness.
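One minimal in-process approach is to reject a turn when another turn is already active for the same conversationId. This is an application-side sketch, not a ConvEngine API; a multi-instance deployment would need a database or distributed lock instead:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Application-side guard against parallel active turns per conversationId.
// Sketch only: single-process, in-memory; ConvEngine does not provide this.
public class ConversationTurnGuard {
    private final Set<String> activeTurns = ConcurrentHashMap.newKeySet();

    // Returns false if a turn is already in flight for this conversation,
    // in which case the caller should reject or queue the request.
    public boolean tryBeginTurn(String conversationId) {
        return activeTurns.add(conversationId);
    }

    // Must be called in a finally block once the turn completes or fails.
    public void endTurn(String conversationId) {
        activeTurns.remove(conversationId);
    }
}
```

Callers would wrap each turn in tryBeginTurn / endTurn so a second request for the same conversationId is rejected instead of racing the first.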
How do we avoid 500s or broken user output for conversational mismatches?
The best protection is layered:
- make every reachable state have a valid response strategy
- keep exact response/rule coverage for active flows
- use UNKNOWN/ANY intentionally, not accidentally
- test correction, failure, and no-match paths
- inspect trace output before shipping config changes
The goal is not just "no exception." The goal is "no misleading fallback that looks valid."
Should new consumers enable everything immediately?
No.
The safest rollout path is:
- One narrow intent and one deterministic response path.
- Then schema collection.
- Then confirmation/correction behavior.
- Then pending actions or MCP.
- Then verbose polish and richer streaming behavior.
That sequence gives you a stable baseline before you widen the runtime surface.