Changelog
All notable changes to TradingAgents are documented here.
The format is based on Keep a Changelog, and this project follows Semantic Versioning. Breaking changes within the 0.x line are called out explicitly.
0.2.4 — 2026-04-25
Added
- Structured-output decision agents. Research Manager, Trader, and Portfolio
  Manager now use `llm.with_structured_output(Schema)` on their primary call and
  return typed Pydantic instances. Each provider's native structured-output mode
  is used (`json_schema` for OpenAI / xAI, `response_schema` for Gemini,
  tool-use for Anthropic, function-calling for OpenAI-compatible providers).
  Render helpers preserve the existing markdown shape so the memory log, CLI
  display, and saved reports keep working unchanged. (#434)
- LangGraph checkpoint resume — opt-in via `--checkpoint`. State is saved after
  each node so crashed or interrupted runs resume from the last successful step.
  Per-ticker SQLite databases live under `~/.tradingagents/cache/checkpoints/`;
  `--clear-checkpoints` resets them. (#594)
- Persistent decision log replacing the per-agent BM25 memory. Decisions are
  stored automatically at the end of `propagate()`; the next same-ticker run
  resolves prior pending entries with realised return, alpha vs SPY, and a
  one-paragraph reflection. Override the path with
  `TRADINGAGENTS_MEMORY_LOG_PATH`. The optional `memory_log_max_entries` config
  caps resolved entries; pending entries are never pruned. (#578, #563, #564, #579)
- DeepSeek, Qwen (Alibaba DashScope), GLM (Zhipu), and Azure OpenAI providers,
  plus dynamic OpenRouter model selection.
- Docker support — multi-stage build with separate dev and runtime images.
- `scripts/smoke_structured_output.py` — a diagnostic that exercises the three
  structured-output agents against any provider so contributors can verify
  their setup with one command.
- 5-tier rating scale (Buy / Overweight / Hold / Underweight / Sell) used
  consistently by the Research Manager, Portfolio Manager, signal processor,
  and memory log; the Trader keeps the 3-tier scale (Buy / Hold / Sell) since
  transaction direction is naturally ternary.
- Pytest fixtures — lazy LLM client imports plus placeholder API keys so the
  test suite runs cleanly without credentials. (#588)
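The canonical pattern is a schema passed to `with_structured_output`, plus a render helper that converts the typed result back into markdown. The sketch below is illustrative only: the dataclass stands in for the project's actual Pydantic schema, and the field names, helper name, and rendered layout are assumptions, not the real code.

```python
from dataclasses import dataclass

# Stand-in for the real Pydantic schema (field names here are illustrative).
# In the actual code a pydantic.BaseModel is passed to with_structured_output.
@dataclass
class PortfolioDecision:
    rating: str     # one of the 5-tier ratings
    rationale: str  # one-paragraph justification

def render_decision(d: PortfolioDecision) -> str:
    # Render helper: typed result back into a markdown shape that downstream
    # consumers (memory log, CLI display, saved reports) can keep parsing.
    return (
        "## Portfolio Manager Decision\n\n"
        f"**Rating:** {d.rating}\n\n"
        f"{d.rationale}"
    )

# With a real LLM client, the single structured call looks roughly like:
#   structured = llm.with_structured_output(PortfolioDecision,
#                                           method="function_calling")
#   decision = structured.invoke(prompt)  # typed instance, one LLM call
decision = PortfolioDecision(rating="Hold", rationale="Mixed signals; stay flat.")
print(render_decision(decision))
```

The key property is that the LLM call returns a typed object, and markdown is produced deterministically afterwards, so downstream report parsing never changes.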
Changed
- `backend_url` default is now `None` rather than the OpenAI URL. Each provider
  client falls back to its native default. The previous default leaked the
  OpenAI URL into non-OpenAI clients (e.g. Gemini), producing malformed request
  URLs for Python users who switched providers without overriding
  `backend_url`. The CLI flow is unaffected.
- All file I/O passes explicit `encoding="utf-8"` so Windows users no longer
  hit `UnicodeEncodeError` with the cp1252 default. (#543, #550, #576)
- Cache and log directories moved to `~/.tradingagents/` to resolve Docker
  permission issues. (#519)
- `SignalProcessor` reads the rating from the Portfolio Manager's rendered
  markdown via a deterministic heuristic — no extra LLM call.
- OpenAI structured-output calls default to `method="function_calling"` to
  avoid noisy `PydanticSerializationUnexpectedValue` warnings emitted by
  langchain-openai's Responses-API parse path. Same typed result, no warnings.
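The encoding change is mechanical but easy to get wrong: `open()` without an explicit encoding uses the platform default, which is cp1252 on many Windows installs, so writing non-ASCII report text raises `UnicodeEncodeError`. A minimal round-trip showing the fixed form:

```python
import os
import tempfile

# Non-ASCII content of the kind agent reports contain.
report = "Ticker 7203.T — résumé of today's decision ✓"

path = os.path.join(tempfile.mkdtemp(), "report.md")

# Explicit encoding on every call, instead of relying on the platform default.
with open(path, "w", encoding="utf-8") as f:
    f.write(report)
with open(path, encoding="utf-8") as f:
    restored = f.read()

print(restored == report)  # True on every platform
```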
Fixed
- Empty memory no longer triggers fabricated past-lessons in agent prompts; the memory-log redesign makes this structurally impossible, since only the Portfolio Manager consults memory, and only when entries exist. (#572)
- Tool-call logging processes every chunk message, not just the last one, and memory score normalization handles empty score arrays. (#534, #531)
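The empty-array fix is the classic normalisation edge case. This is a hypothetical sketch (the function name and scaling scheme are assumptions, not the project's actual code) showing the shape of the guard:

```python
def normalize_scores(scores):
    """Scale scores into [0, 1]. Hypothetical helper mirroring the fix:
    an empty input previously hit min()/max() on an empty sequence."""
    if not scores:                       # the guard the fix adds
        return []
    lo, hi = min(scores), max(scores)
    if hi == lo:                         # constant scores: avoid divide-by-zero
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

print(normalize_scores([]))          # [] instead of a ValueError
print(normalize_scores([2, 4, 6]))   # [0.0, 0.5, 1.0]
```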
Removed
- `FinancialSituationMemory` (the per-agent BM25 system) and the dead
  `reflect_and_remember()` plumbing; subsumed by the persistent decision log.
- Hardcoded Google endpoint that caused a 404 when `langchain-google-genai`
  changed its API path. (#493, #496)
Contributors
Thanks to everyone who shaped this release through code, design, and reports:
- @claytonbrown — checkpoint resume (#594), test fixtures (#588), design feedback on cost tracking (#582) and structured validation (#583)
- @Bcardo — memory-log redesign (#579), empty-memory hallucination report (#572), encoding fix proposal (#570)
- @voidborne-d — memory persistence design (#564), portfolio manager state fix (#503)
- @mannubaveja007 — structured-output feature request (#434)
- @kelder66 — RAM-only memory issue (#563)
- @Gujiassh — tool-call logging fix (#534), test stub PR (#533)
- @iuyup — memory score normalization fix (#531)
- @kaihg — Google base_url fix (#496)
- @32ryh98yfe — Gemini 404 report (#493)
- @uppb — OpenRouter dynamic model selection (#482)
- @guoz14 — OpenRouter limited-model report (#337)
- @samchenku — indicator name normalization (#490)
- @JasonOA888 — y_finance pandas import fix (#488)
- @tiffanychum — stale import cleanup (#499)
- @zaizou — Docker permission issue (#519)
- @Stosman123, @mauropuga, @hotwind2015 — Windows encoding bug reports (#543, #550, #576)
- @nnishad, @atharvajoshi01 — encoding fix proposals (#568, #549)
0.2.3 — 2026-03-29
Added
- Multi-language output for analyst reports and final decisions, with a CLI selector. Internal agent debate stays in English for reasoning quality. (#472)
- GPT-5.4 family models in the default catalog, with deep/quick model split.
- Unified model catalog as a single source of truth for CLI options and provider validation.
Changed
- `base_url` is forwarded to Google and Anthropic clients so corporate proxies
  work consistently across providers. (#427)
- Standardised the Google API key parameter to the unified `api_key` form.
Fixed
- Backtesting fetchers no longer leak look-ahead data when `curr_date` falls in
  the middle of a fetched window. (#475)
- Invalid indicator names from the LLM are caught at the tool boundary instead
  of crashing the run. (#429)
- yfinance news fetchers respect the same exponential-backoff retry as price fetchers. (#445)
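Exponential backoff here means the wait doubles on each failed attempt. A hypothetical sketch of the shared retry shape (names and defaults are assumptions; the real fetchers wrap yfinance calls):

```python
import time

def with_backoff(fetch, max_retries=4, base_delay=0.1, sleep=time.sleep):
    """Retry fetch() with exponentially growing waits:
    base_delay * 2**attempt between tries; re-raise after the last one."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except Exception:
            if attempt == max_retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Simulated rate-limited endpoint that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "news"

print(with_backoff(flaky, sleep=lambda _: None))  # "news" after two retries
```

Injecting `sleep` keeps the sketch testable without real delays; a production version would also narrow the caught exception type.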
Contributors
- @ahmedk20 — multi-language output (#472)
- @CadeYu — model catalog typing (#464)
- @javierdejesusda — unified Google API key parameter (#453)
- @voidborne-d — yfinance news retry (#445)
- @kostakost2 — look-ahead bias report (#475)
- @lu-zhengda — proxy/base_url support request (#427)
- @VamsiKrishna2021 — invalid indicator crash report (#429)
0.2.2 — 2026-03-22
Added
- Five-tier rating scale (Buy / Overweight / Hold / Underweight / Sell) introduced for the Portfolio Manager.
- Anthropic effort level support for Claude models.
- OpenAI Responses API path for native OpenAI models.
Changed
- `risk_manager` renamed to `portfolio_manager` to match the role description
  shown in the CLI display.
- Exchange-qualified tickers (e.g. `7203.T`, `BRK.B`) preserved across all
  agent prompts and tool calls.
- Process-level UTF-8 default attempted for cross-platform consistency (note:
  this approach did not actually take effect; replaced in v0.2.4 with explicit
  per-call `encoding="utf-8"` arguments).
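Preserving exchange-qualified tickers mostly means not validating them with a bare `[A-Z]+` pattern that strips the suffix. A hypothetical sketch (the pattern below is illustrative, not the project's actual validator):

```python
import re

# Accepts plain tickers (AAPL), exchange-qualified ones (7203.T),
# and class shares (BRK.B); a naive [A-Z]+ match would reject or
# truncate the dotted forms.
TICKER = re.compile(r"^[A-Z0-9]{1,6}(\.[A-Z]{1,4})?$")

for t in ["AAPL", "7203.T", "BRK.B"]:
    assert TICKER.match(t), t
print("all tickers preserved")
```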
Fixed
- yfinance rate-limit errors are retried with exponential backoff. (#426)
- HTTP client SSL customisation is supported for environments that need custom certificate bundles. (#379)
- Report-section writes handle list-of-string content gracefully.
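"Gracefully" here means normalising before writing, since agents sometimes return a section as a list of strings rather than one string. A hypothetical sketch of that normaliser (name and join separator are assumptions):

```python
def coerce_section(content):
    """Normalise a report section to a single string before writing.
    Agents occasionally return a list of paragraph strings."""
    if isinstance(content, list):
        return "\n\n".join(str(part) for part in content)
    return str(content)

print(coerce_section(["para one", "para two"]))  # joined with a blank line
print(coerce_section("single string"))           # passed through unchanged
```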
Contributors
- @CadeYu — exchange-qualified ticker preservation (#413)
- @yang1002378395-cmyk — HTTP client SSL customisation (#379)
0.2.1 — 2026-03-15
Security
- Patched a `langchain-core` vulnerability (LangGrinch). (#335)
- Removed the `chainlit` dependency affected by CVE-2026-22218.
Added
- `pyproject.toml` build-system configuration; the project now installs via
  modern packaging tooling.
Removed
- `setup.py` — dependencies consolidated into `pyproject.toml`.
Fixed
- Risk manager reads the correct fundamental report source. (#341)
- All `open()` calls receive an explicit UTF-8 encoding (initial pass).
- The `get_indicators` tool handles comma-separated indicator names from the
  LLM. (#368)
- `Propagation` initialises every debate-state field so risk debaters never
  see missing keys.
- Stock data parsing tolerates malformed CSVs and NaN values.
- Conditional debate logic respects the configured round count. (#361)
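The comma-separated indicator fix boils down to splitting and trimming at the tool boundary rather than treating the whole argument as one (invalid) name. A hypothetical sketch (function name is an assumption):

```python
def parse_indicators(raw):
    """LLMs sometimes pass "rsi, macd,boll" as a single argument.
    Split on commas and strip whitespace instead of rejecting it."""
    return [name.strip() for name in raw.split(",") if name.strip()]

print(parse_indicators("rsi, macd,boll"))  # ['rsi', 'macd', 'boll']
print(parse_indicators("rsi"))             # ['rsi']
```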
Contributors
- @RinZ27 — `langchain-core` security patch (#335)
- @Ljx-007 — risk manager fundamental-report fix (#341)
- @makk9 — debate-rounds config issue (#361)
0.2.0 — 2026-02-04
This is the largest release since the initial public version. The framework moved from single-provider to a multi-provider architecture and grew several production-ready surfaces.
Added
- Multi-provider LLM support (OpenAI, Google, Anthropic, xAI, OpenRouter, Ollama) via a factory pattern, with provider-specific thinking configurations.
- Alpha Vantage integration as a configurable primary data provider, with yfinance as a community-stability fallback.
- Footer statistics in the CLI: real-time tracking of LLM calls, tool calls, and token usage via LangChain callbacks.
- Post-analysis report saving — the framework writes per-section markdown files (analyst reports, debate transcripts, final decision) when a run completes.
- Announcements panel — fetches updates from `api.tauric.ai/v1/announcements`
  for the CLI welcome screen.
- Tool fallbacks so a single vendor outage does not stop the pipeline.
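The fallback idea is simple sequencing: try each vendor in order and return the first success. A hypothetical sketch (helper and vendor names are illustrative, not the project's actual API):

```python
def fetch_with_fallback(fetchers):
    """Try each (name, fetch) pair in order; return the first success
    so one vendor outage does not stop the pipeline."""
    errors = []
    for name, fetch in fetchers:
        try:
            return name, fetch()
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all vendors failed: {errors}")

# Simulated outage: primary vendor down, fallback healthy.
def alpha_vantage():
    raise TimeoutError("vendor down")

def yfinance_fallback():
    return {"close": 123.4}

vendor, data = fetch_with_fallback([("alpha_vantage", alpha_vantage),
                                    ("yfinance", yfinance_fallback)])
print(vendor)  # yfinance
```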
Changed
- Risky / Safe risk debaters renamed to Aggressive / Conservative for consistency with the displayed agent labels.
- Default data vendor switched to balance reliability and quota across community deployments.
- Ollama and OpenRouter model lists updated; default endpoints clarified.
Fixed
- Analyst status tracking and message deduplication in the live display.
- Infinite-loop guard in the agent loop; reflection and logging hardened.
- Various data-vendor implementation bugs and tool-signature mismatches.
Contributors
This release is the first with substantial outside contributions; many community PRs from late 2025 also landed here.
- @luohy15 — Alpha Vantage data-vendor integration (#235)
- @EdwardoSunny — yfinance fetching optimisations (#245)
- @Mirza-Samad-Ahmed-Baig — infinite-loop guard, reflection, and logging fixes (#89)
- @ZeroAct — saved results path support (#29)
- @Zhongyi-Lu — `.env` gitignore (#49)
- @csoboy — local Ollama setup (#53)
- @chauhang — initial Docker support attempt (#47, later reverted; the merged Docker support shipped in v0.2.4)
0.1.1 — 2025-06-07
Removed
- Static site assets that had been bundled with v0.1.0; the public site now lives separately.
0.1.0 — 2025-06-05
Added
- Initial public release of the TradingAgents multi-agent trading framework: market / sentiment / news / fundamentals analysts; bull and bear researchers; trader; aggressive, conservative, and neutral risk debaters; portfolio manager. LangGraph orchestration, yfinance data, per-agent BM25 memory, single-provider OpenAI integration, interactive CLI.