Today, I sent a prompt to OpenClaw to build me a full paper-trading bot for XAUUSD. Without writing any code myself, I watched OpenClaw struggle at first but finally deliver the result. It's a great example of how OpenClaw can be used to build complex systems.
This post, written by OpenClaw itself, is a technical deep dive into what OpenClaw built, why we made certain choices, and what mattered in practice.
Goal
Build a bot that can:
- Run 24/5 continuously.
- Stay strictly in paper mode (no real order execution).
- Evaluate and iterate strategies quickly.
- Expose performance in a local/LAN dashboard.
- Integrate with MT5 chart overlays for visual review.
Architecture Overview
At a high level:
- Execution/orchestration: OpenClaw + PowerShell daemon scripts
- Trading runtime: Python + `MetaTrader5` package (via `uv`)
- Data source: MT5 terminal feed (`XAUUSD.m`)
- Persistence: CSV/JSON files in `paper_results/`
- Visualization: Flask dashboard + MT5 custom indicator overlay
- Remote access: Tailscale + Tailscale Serve
Step 1 — MT5 connectivity and safety first
Before strategy logic, we validated the runtime:
- `uv` installed and working
- `MetaTrader5` Python module imported successfully
- terminal and account detected
- symbol discovery found the broker-specific instrument name (`XAUUSD.m`, not `XAUUSD`)
Important safety rule from day one:
No real orders.
The paper engine never calls `mt5.order_send()`. We kept this as an explicit guardrail in code and summaries.
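As a rough sketch of how such a guardrail can look in code (the class and function names here are illustrative, not the actual `paper_trader.py` API): all order flow goes through a single execution path that records a virtual fill and can never reach `mt5.order_send()`.

```python
# Hypothetical paper-only guardrail: every "order" is a virtual fill.
# Names are illustrative; the real engine's API may differ.
from dataclasses import dataclass, field

PAPER_MODE = True  # hard-coded; flipping this should be a deliberate code change


@dataclass
class VirtualPosition:
    symbol: str
    direction: str  # "buy" or "sell"
    entry: float
    sl: float
    tp: float


@dataclass
class PaperBroker:
    positions: list = field(default_factory=list)

    def execute(self, pos: VirtualPosition) -> VirtualPosition:
        if not PAPER_MODE:
            # Refuse to run at all outside paper mode rather than
            # silently routing to a live order path.
            raise RuntimeError("live execution is disabled in this build")
        self.positions.append(pos)  # virtual fill only; no mt5.order_send()
        return pos
```

The point of funneling everything through one method is that the safety check lives in exactly one place, which makes it easy to audit.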
Step 2 — Paper trading engine
We created a reusable engine (`paper_trader.py`) with:
- periodic polling loops
- virtual position lifecycle (entry/SL/TP)
- trade logging (`trades_*.csv`)
- run-state snapshots (`state_*.json`)
- per-strategy stats in R-multiples
Why R-multiples? Because they normalize outcomes across volatility regimes and stop sizes.
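The calculation itself is simple (this assumes the standard convention: realized profit or loss divided by the initial entry-to-stop distance):

```python
# R-multiple: P&L in price units divided by initial risk (entry-to-stop
# distance). A full loss at the stop is -1R regardless of stop width.
def r_multiple(entry: float, stop: float, exit_price: float, direction: str) -> float:
    risk = abs(entry - stop)
    if risk == 0:
        raise ValueError("stop must differ from entry")
    pnl = (exit_price - entry) if direction == "buy" else (entry - exit_price)
    return pnl / risk


# A long from 2400 with a 2395 stop, exited at 2410, is +2R:
r_multiple(2400.0, 2395.0, 2410.0, "buy")   # → 2.0
# Stopped out, it is -1R:
r_multiple(2400.0, 2395.0, 2395.0, "buy")   # → -1.0
```

Because every trade is expressed as a multiple of its own risk, a tight-stop scalp and a wide-stop swing trade become directly comparable.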
Step 3 — Daemon mode for realism
We moved from simple scheduled runs to daemon operation for better market realism.
Daemon responsibilities:
- run trading cycles continuously
- restart next cycle automatically
- maintain heartbeat/status file
- avoid overlap with lock-file guard
This gave us a more realistic 24/5 simulation environment compared to isolated one-shot cron runs.
Step 4 — Dashboard and ops visibility
We built a lightweight web dashboard that shows:
- daemon running/stopped state
- total trades
- aggregate R
- per-strategy stats
- recent trades table
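A minimal sketch of the dashboard's status endpoint, assuming a heartbeat file like the one the daemon writes (the route, field names, and staleness threshold are illustrative, not the actual dashboard code):

```python
# Minimal Flask status endpoint: reports the daemon as "running" only
# if the heartbeat file exists, claims running, and is recent.
import json
import time
from pathlib import Path

from flask import Flask, jsonify

app = Flask(__name__)
RESULTS = Path("paper_results")
STALE_AFTER_S = 120  # heartbeat older than this counts as stopped


@app.route("/api/status")
def status():
    hb_file = RESULTS / "heartbeat.json"
    if not hb_file.exists():
        return jsonify({"daemon": "stopped"})
    hb = json.loads(hb_file.read_text())
    fresh = (time.time() - hb.get("ts", 0)) < STALE_AFTER_S
    running = fresh and hb.get("status") == "running"
    return jsonify({"daemon": "running" if running else "stopped"})


if __name__ == "__main__":
    # bind to all interfaces so the dashboard is reachable over LAN/tailnet
    app.run(host="0.0.0.0", port=5000)
```

Checking heartbeat freshness rather than only a PID is what makes the status robust: a hung process still has a PID, but it stops refreshing its heartbeat.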
Operational fixes we had to make:
- Windows-specific process checks for daemon status
- heartbeat-based status (more reliable than PID-only checks)
- firewall rule for dashboard port access
Step 5 — Network access with Tailscale
LAN-only access was unreliable across SSIDs/VLAN boundaries, so we set up Tailscale.
Then we enabled:
- direct tailnet IP access
- HTTPS proxy via Tailscale Serve
This made the dashboard reachable from a phone consistently.
Step 6 — MT5 chart integration
To make review more trader-friendly, we added MT5 overlay integration:
- exporter writes paper trades to `MQL5/Files/paper_trades_overlay.csv`
- custom indicator (`PaperTradeOverlay.mq5`) reads the file
- chart displays trade paths and R outcomes
This closed the loop between data and discretionary review.
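The exporter side can be sketched as a small CSV writer. The column set is an assumption for illustration, and the MT5 `Files` directory path varies per terminal install, so it is passed in:

```python
# Illustrative overlay exporter: converts logged paper trades into the
# CSV that the MQL5 overlay indicator reads. Columns are an assumption.
import csv
from pathlib import Path

FIELDS = ["time_open", "time_close", "direction", "entry", "exit", "r"]


def export_overlay(trades: list[dict], files_dir: Path) -> Path:
    out = files_dir / "paper_trades_overlay.csv"
    with out.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for t in trades:
            writer.writerow({k: t[k] for k in FIELDS})
    return out
```

Keeping the exchange format to a flat CSV is deliberate: MQL5's `FileOpen`/`FileReadString` handle it easily, and the file can be inspected by hand when the overlay looks wrong.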
Strategy Iteration: How we approached multiple strategies first
We intentionally started with a multi-strategy portfolio before narrowing anything down.
Why start this way?
- Gold behaves differently across sessions and volatility regimes.
- One strategy can look great in one week and fail the next.
- Running several uncorrelated ideas in parallel gives faster signal on what is actually robust.
Our first batch included different strategy families:
- liquidity sweep reversals,
- breakout/retest continuation,
- mean reversion variants,
- trend pullback models.
All of them shared the same execution constraints:
- same symbol and feed,
- same paper execution engine,
- same spread filters,
- same risk normalization in R,
- same logging/reporting path.
That let us compare strategy behavior fairly instead of mixing apples and oranges.
Only after collecting enough runs did we cut weak performers and move to one-by-one refinement.
We started with multiple strategies, then observed degradation in aggregate results. Instead of forcing optimization too early, we switched to an iterative process:
- reduce complexity
- test one strategy clearly
- add next strategy incrementally
- compare behavior under same runtime constraints
Then we implemented and ran concurrently:
- ICT-style liquidity sweep (H1/M15/M1)
- MA55 channel + Heiken Ashi scalping
- session breakout + liquidity sweep + MSS hybrid
The key lesson: most edge comes from filter quality and risk discipline, not from stacking many entry ideas.
Risk & execution principles we enforced
- spread filters before entry
- max open trades cap
- new-bar confirmation logic (avoid over-triggering intrabar)
- session-based constraints (Asian / London / NY windows depending on setup)
- strict paper-only execution boundary
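These principles compose naturally into a single pre-entry gate. The thresholds and parameters below are illustrative (in the real engine they would come from MT5 tick/bar data and per-strategy config):

```python
# Illustrative pre-entry gate combining the enforced principles:
# spread filter, open-trades cap, new-bar confirmation, session window.
MAX_SPREAD = 0.35      # price units; assumed threshold, XAUUSD spreads widen around news
MAX_OPEN_TRADES = 3    # assumed cap


def entry_allowed(spread: float, open_trades: int,
                  bar_time, last_signal_bar, session_ok: bool) -> bool:
    if spread > MAX_SPREAD:
        return False               # spread filter before entry
    if open_trades >= MAX_OPEN_TRADES:
        return False               # max open trades cap
    if bar_time == last_signal_bar:
        return False               # new-bar confirmation: one signal per bar
    return bool(session_ok)        # session window constraint
```

Running every strategy's signal through the same gate is also what kept the multi-strategy comparison fair: no strategy could win simply by trading through conditions the others filtered out.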
What I'd improve next
If we continue this build, these are the highest leverage improvements:
- strategy config file (enable/disable, params) without code edits
- structured regime tags (trend/range/event) per trade
- better post-trade analytics (expectancy by session and setup quality)
- execution simulation realism (spread expansion and slippage models)
- automatic invalidation checks for stale assumptions
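For the first item, the shape of such a config file might look like this. The schema and strategy names are a proposal, not the current implementation:

```python
# Proposed strategy config: enable/disable and parameterize strategies
# via JSON instead of code edits. Schema is a sketch, not implemented.
import json

EXAMPLE_CONFIG = """
{
  "strategies": {
    "liquidity_sweep":  {"enabled": true,  "params": {"sweep_lookback": 20, "rr": 2.0}},
    "ma55_heiken":      {"enabled": false, "params": {"ma_period": 55}},
    "session_breakout": {"enabled": true,  "params": {"session": "london", "rr": 1.5}}
  }
}
"""


def load_enabled(config_text: str) -> dict:
    """Return {strategy_name: params} for enabled strategies only."""
    cfg = json.loads(config_text)
    return {name: s["params"]
            for name, s in cfg["strategies"].items() if s["enabled"]}
```

With this in place, cutting a weak performer becomes a one-line config change instead of a code edit and redeploy.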
Closing thought
The hardest part of building the trading bot wasn't implementing the strategy logic. It was prompting OpenClaw to build a safe, observable, and repeatable system that can run continuously while we learn from real market behavior.
That is exactly where OpenClaw helped most: orchestration, automation, and fast iteration with clear operational control.
We will get back with new updates and improvements to the bot. Stay tuned!
This post was written with AI assistance and reviewed by a human. Read more about it here.
