This release bundles substantial work since v0.2.3:
- Structured-output Research Manager, Trader, and Portfolio Manager
(canonical with_structured_output pattern, a single LLM call per agent;
the rendered Markdown preserves the existing report shape).
- LangGraph checkpoint resume for crash recovery (--checkpoint flag).
- Persistent decision log replacing the per-agent BM25 memory, with
deferred reflection driven by yfinance returns and alpha vs. SPY.
- DeepSeek, Qwen, GLM, and Azure OpenAI provider support; dynamic
OpenRouter model selection.
- Docker support; cache and logs moved to ~/.tradingagents/ to fix
Docker permission issues.
- Windows UTF-8 encoding fix on every file I/O site.
- 5-tier rating consistency (Buy / Overweight / Hold / Underweight / Sell)
across the Research Manager, Portfolio Manager, signal processor, and
memory log.
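
The deferred-reflection scoring in the decision-log item can be sketched
roughly as follows. This is a minimal illustration, not the repo's code:
the function name is hypothetical, and the simple-difference alpha
definition (position return minus SPY return) is an assumption.

```python
def position_alpha(entry_price: float, exit_price: float,
                   spy_entry: float, spy_exit: float) -> float:
    """Excess return ("alpha") of a position over SPY, computed as
    simple return minus benchmark return over the same window.
    (Hypothetical helper; the real repo pulls prices via yfinance.)"""
    position_return = (exit_price - entry_price) / entry_price
    spy_return = (spy_exit - spy_entry) / spy_entry
    return position_return - spy_return

# Position up 10% while SPY is up 4%: alpha of +6 percentage points.
print(round(position_alpha(100.0, 110.0, 500.0, 520.0), 4))
```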
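
A 5-tier consistency shim like the one described above might look like
this. Everything here is illustrative: `RATING_TIERS`,
`normalize_rating`, and the alias table are assumed names, not taken
from the repo.

```python
# Canonical 5-tier scale, ordered most bullish to most bearish.
RATING_TIERS = ["Buy", "Overweight", "Hold", "Underweight", "Sell"]

# Common free-text variants mapped onto the canonical tiers
# (this alias list is illustrative, not exhaustive).
_ALIASES = {
    "strong buy": "Buy",
    "accumulate": "Overweight",
    "neutral": "Hold",
    "reduce": "Underweight",
    "strong sell": "Sell",
}

def normalize_rating(raw: str) -> str:
    """Map an agent's rating string onto the canonical 5-tier scale,
    so every component reports the same vocabulary."""
    text = raw.strip().lower()
    for tier in RATING_TIERS:
        if text == tier.lower():
            return tier
    if text in _ALIASES:
        return _ALIASES[text]
    raise ValueError(f"unrecognized rating: {raw!r}")

print(normalize_rating("  NEUTRAL "))  # -> Hold
```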
Plus the small quality items in this commit:
1. Suppress noisy Pydantic serializer warnings from the OpenAI
Responses-API parse path by defaulting structured output to
method="function_calling" (a root-cause fix rather than a warnings
filter: same typed result, no warnings).
2. Ship scripts/smoke_structured_output.py so contributors can verify
their provider's structured-output path with one command.
3. Add opt-in memory_log_max_entries config — when set, oldest resolved
memory log entries are pruned once the cap is exceeded; pending
entries (unresolved) are never pruned.
4. backend_url default changed from the OpenAI URL to None so the
per-provider client falls back to its native endpoint instead of
leaking OpenAI's URL into Gemini / other clients.
CHANGELOG.md added with the full v0.2.4 entry. 92 tests pass without API keys.