Extends the canonical structured-output pattern from the Portfolio Manager
to the other two decision-making agents. Each of the three agents now
returns a typed Pydantic instance via llm.with_structured_output() in a
single primary call, and a render helper turns the result into the same
markdown shape downstream agents and saved reports already consume.
- ResearchPlan: 5-tier recommendation, conversational rationale, concrete
strategic actions for the trader.
- TraderProposal: 3-tier action (transaction direction is naturally Buy /
Hold / Sell — position sizing happens later at the Portfolio Manager),
reasoning, and optional entry_price / stop_loss / position_sizing.
Rendered output preserves the trailing "FINAL TRANSACTION PROPOSAL:
**BUY/HOLD/SELL**" line for backward compatibility with the analyst
stop-signal text.
- PortfolioDecision: 5-tier rating, executive summary, investment thesis,
optional price_target / time_horizon (unchanged).
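As an illustration, the trader's render helper might look like the sketch below. The field names `action`, `reasoning`, and `entry_price` come from the TraderProposal description above, but the function name and the exact section headers are assumptions; only the trailing stop-signal line is the documented contract.

```python
def render_trader_proposal(proposal) -> str:
    """Render a TraderProposal into the legacy markdown shape (sketch).

    Keeps the trailing stop-signal line so downstream code that looks
    for it keeps working. Section headers are illustrative.
    """
    lines = [
        "## Trader Investment Plan",
        "",
        f"**Action:** {proposal.action}",
        "",
        proposal.reasoning,
    ]
    if proposal.entry_price is not None:
        lines += ["", f"**Entry Price:** {proposal.entry_price}"]
    lines += ["", f"FINAL TRANSACTION PROPOSAL: **{proposal.action.upper()}**"]
    return "\n".join(lines)
```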
The shared try-structured-then-fallback pattern is extracted into
tradingagents/agents/utils/structured.py (bind_structured +
invoke_structured_or_freetext) so all three agents go through the same
code path and log the same warning when a provider lacks structured
output and the agent falls back to free-text generation.
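A minimal sketch of what that helper pair could look like; the real signatures in structured.py may differ, and the `(result, is_structured)` return convention is an assumption:

```python
import logging

logger = logging.getLogger(__name__)


def bind_structured(llm, schema):
    """Bind a Pydantic schema to the model, or return None when the
    provider's client does not support structured output."""
    try:
        return llm.with_structured_output(schema)
    except (NotImplementedError, AttributeError):
        return None


def invoke_structured_or_freetext(llm, schema, messages):
    """Single primary call: typed instance when possible, else free text.

    Returns (result, is_structured) so callers can pick the right
    render path; this tuple convention is an assumption.
    """
    structured = bind_structured(llm, schema)
    if structured is not None:
        try:
            return structured.invoke(messages), True
        except Exception:
            logger.warning("Structured call failed; retrying as free text.")
    else:
        logger.warning("Provider lacks structured output; using free text.")
    return llm.invoke(messages).content, False
```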
Net effect for users: every saved markdown report (research/manager.md,
trading/trader.md, portfolio/decision.md) now has consistent section
headers across runs and providers, making them easier to scan.
Net effect for the runtime: the rating extraction round-trip is gone —
the rating comes from the structured response itself, not a second
LLM call. SignalProcessor was already simplified to a heuristic adapter
in the previous commit.
11 new tests in tests/test_structured_agents.py cover the Trader and
Research Manager render functions, structured-output happy paths, and
free-text fallback. Full suite: 88 tests pass in ~2s without API keys.
Three related changes that take the rating pipeline from heuristic-only
to type-safe at the source.
1) Research Manager prompt now uses the same 5-tier scale (Buy /
Overweight / Hold / Underweight / Sell) as the Portfolio Manager,
signal_processing, and the memory log. The prior 3-tier wording
(Buy / Sell / Hold) was the only remaining inconsistency in the
pipeline.
2) Centralise the 5-tier vocabulary and the heuristic prose-rating
parser into tradingagents/agents/utils/rating.py. Both the memory
log and the signal processor now share the same parser instead of
duplicating regex and word-walker logic.
3) Make structured output a first-class part of the Portfolio Manager's
primary call. The PM uses llm.with_structured_output(PortfolioDecision)
so each provider's native structured-output mode (json_schema for
OpenAI/xAI, response_schema for Gemini, tool-use for Anthropic,
function_calling for OpenAI-compatible providers) yields a typed
Pydantic instance directly. A render helper turns that instance back
into the same markdown shape downstream consumers (memory log, CLI
display, saved reports) already expect, so no other code has to know
the PM now produces structured output. Providers without structured
support fall back gracefully to free-text + the deterministic
heuristic.
The previous SignalProcessor had been making a second LLM call to
re-extract the rating from the PM's prose; that round-trip is now
eliminated. SignalProcessor is a thin adapter over parse_rating(),
makes zero LLM calls, and stays for backwards compatibility with
process_signal() callers.
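A sketch of that heuristic and the adapter, assuming the 5-tier vocabulary above; the real parse_rating() in rating.py and the default when no tier is found are assumptions here:

```python
import re
from typing import Optional

TIERS = ("Buy", "Overweight", "Hold", "Underweight", "Sell")


def parse_rating(text: str) -> Optional[str]:
    """Heuristic prose parser: prefer an explicit stop-signal line,
    otherwise walk the words; strips markdown bold markers first."""
    cleaned = text.replace("*", "")
    match = re.search(r"FINAL TRANSACTION PROPOSAL:\s*(\w+)", cleaned, re.IGNORECASE)
    words = [match.group(1)] if match else cleaned.split()
    for word in words:
        token = word.strip(".,:;!()").capitalize()
        if token in TIERS:
            return token
    return None


class SignalProcessor:
    """Thin adapter kept for process_signal() callers; zero LLM calls."""

    def process_signal(self, full_signal: str) -> str:
        return parse_rating(full_signal) or "Hold"  # default is an assumption
```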
Schema (PortfolioDecision) captures rating + executive_summary +
investment_thesis + optional price_target + time_horizon, with field
descriptions doubling as output instructions. Agent prose remains the
primary artifact; structured output is layered onto the PM only because
it is the one agent whose output has machine-readable downstream
consumers.
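Assuming the fields listed above, the schema could look roughly like this; the field descriptions are illustrative stand-ins for the real output instructions:

```python
from typing import Literal, Optional

from pydantic import BaseModel, Field


class PortfolioDecision(BaseModel):
    """Typed output of the Portfolio Manager's primary call (sketch)."""

    rating: Literal["Buy", "Overweight", "Hold", "Underweight", "Sell"] = Field(
        description="Final 5-tier rating for the position."
    )
    executive_summary: str = Field(
        description="Short summary a reader can scan in seconds."
    )
    investment_thesis: str = Field(
        description="Full reasoning behind the rating."
    )
    price_target: Optional[float] = Field(
        default=None, description="Optional price target."
    )
    time_horizon: Optional[str] = Field(
        default=None, description="Optional holding period, e.g. '6-12 months'."
    )
```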
15 new tests cover the heuristic parser (markdown-bold edge cases that
had no coverage before), the structured PM happy path, the free-text
fallback path, and that SignalProcessor never invokes the LLM. Full
suite: 77 tests pass in ~2s without API keys.
Long analyses can take many minutes; a crash or interruption forced users
to re-run from scratch and re-pay every LLM call. This adds an opt-in
checkpoint layer backed by per-ticker SQLite databases so the graph
resumes from the last successful node.
How to use:
- CLI: tradingagents analyze --checkpoint
- CLI: tradingagents analyze --clear-checkpoints
- Python: config["checkpoint_enabled"] = True
Lifecycle:
- propagate() recompiles the graph with a SqliteSaver when enabled and
injects a deterministic thread_id derived from ticker+date so the
same ticker+date resumes while a different date starts fresh.
- On successful completion the per-thread checkpoint rows are cleared.
- The context manager is closed in a try/finally so a crash never
leaks the SQLite connection or leaves the graph in checkpoint mode.
Storage: ~/.tradingagents/cache/checkpoints/<TICKER>.db
(override via TRADINGAGENTS_CACHE_DIR).
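The storage layout and deterministic thread id can be sketched as follows; the exact thread_id string format is an assumption, only the ticker+date determinism is from this change. In propagate(), a saver built via `SqliteSaver.from_conn_string(str(db_path))` would then be handed to the recompiled graph inside the try/finally described above.

```python
from pathlib import Path


def checkpoint_db_path(ticker: str, cache_dir: str) -> Path:
    """Per-ticker SQLite database under <cache_dir>/checkpoints/."""
    root = Path(cache_dir) / "checkpoints"
    root.mkdir(parents=True, exist_ok=True)
    return root / f"{ticker.upper()}.db"


def thread_id_for(ticker: str, trade_date: str) -> str:
    """Same ticker+date resumes the existing thread; a different
    date yields a new id and therefore a fresh run."""
    return f"{ticker.upper()}:{trade_date}"
```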
The checkpointer module is new (tradingagents/graph/checkpointer.py)
and the GraphSetup now returns the uncompiled workflow so it can be
recompiled with a saver when needed.
Adds langgraph-checkpoint-sqlite>=2.0.0 dependency. 3 new tests verify
the crash/resume cycle and that a different date starts fresh.
The previous per-agent BM25 memory was effectively dead code — its only
caller was a commented-out line in main.py. Replace it with a single
append-only markdown decision log driven by the propagate() lifecycle.
Lifecycle:
- store_decision() appends a pending entry at the end of every run
- _resolve_pending_entries() runs at the start of the next same-ticker
run, fetches yfinance returns + alpha vs SPY, and writes one LLM
reflection per resolved entry through an atomic temp-file rename
- Portfolio Manager consumes state["past_context"] (5 most recent
same-ticker entries plus 3 cross-ticker reflection-only excerpts)
Storage at ~/.tradingagents/memory/trading_memory.md
(override: TRADINGAGENTS_MEMORY_LOG_PATH).
Tag schema:
- Pending: [YYYY-MM-DD | TICKER | Rating | pending]
- Resolved: [YYYY-MM-DD | TICKER | Rating | +X.X% | +Y.Y% | Nd]
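The tag lines and the atomic rename can be sketched as below; the helper names are hypothetical, but the tag layout matches the schema above and the write pattern matches the temp-file rename described earlier.

```python
import os
import tempfile
from pathlib import Path


def pending_tag(date: str, ticker: str, rating: str) -> str:
    return f"[{date} | {ticker} | {rating} | pending]"


def resolved_tag(date: str, ticker: str, rating: str,
                 ret_pct: float, alpha_pct: float, days: int) -> str:
    """Return since entry, alpha vs SPY, and the holding period in days."""
    return f"[{date} | {ticker} | {rating} | {ret_pct:+.1f}% | {alpha_pct:+.1f}% | {days}d]"


def atomic_write(path: Path, text: str) -> None:
    """Write via a temp file + os.replace so a crash mid-write can
    never leave a truncated memory log behind."""
    fd, tmp = tempfile.mkstemp(dir=path.parent)
    with os.fdopen(fd, "w") as handle:
        handle.write(text)
    os.replace(tmp, path)
```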
Removes rank-bm25 dependency and the legacy reflect_and_remember()
plumbing across reflection.py, trading_graph.py, and the agent factories.
49 new tests in tests/test_memory_log.py cover the storage, deferred
reflection, prompt injection, and legacy-removal paths. Full suite
(58 tests) passes in under 2 seconds without API keys.
Apply review suggestions: use concise `or` pattern for API key
resolution, consolidate tests into parameterized subTest, move
import to module level per PEP 8.
GoogleClient now accepts the unified `api_key` parameter used by
OpenAI and Anthropic clients, mapping it to the provider-specific
`google_api_key` that ChatGoogleGenerativeAI expects. Legacy
`google_api_key` still works for backward compatibility.
Resolves TODO.md item #2 (inconsistent parameter handling).
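The mapping amounts to something like this sketch; the constructor shape and the precedence between the two parameters are assumptions, and the real client forwards the kwargs to ChatGoogleGenerativeAI:

```python
class GoogleClient:
    """Sketch of the unified api_key shim."""

    def __init__(self, api_key=None, google_api_key=None, **kwargs):
        # Concise `or` resolution: the legacy name wins when both are
        # given (that precedence is an assumption).
        self.model_kwargs = dict(kwargs)
        self.model_kwargs["google_api_key"] = google_api_key or api_key
        # Real implementation: ChatGoogleGenerativeAI(**self.model_kwargs)
```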
Add effort parameter (high/medium/low) for Claude 4.5+ and 4.6 models,
consistent with OpenAI reasoning_effort and Google thinking_level.
Also add content normalization for Anthropic responses.
- Point requirements.txt to pyproject.toml as single source of truth
- Resolve welcome.txt path relative to module for CLI portability
- Include cli/static files in package build
- Extract shared normalize_content for OpenAI Responses API and
Gemini 3 list-format responses into base_client.py
- Update README install and CLI usage instructions
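A shared normalizer along these lines would flatten both list-of-parts shapes into one string; the part dict keys shown are assumptions about the provider payloads:

```python
def normalize_content(content) -> str:
    """Flatten message content into a plain string.

    OpenAI Responses API and Gemini 3 can return content as a list of
    parts (strings or dicts carrying a "text" key); downstream code
    expects a single string.
    """
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        pieces = []
        for part in content:
            if isinstance(part, str):
                pieces.append(part)
            elif isinstance(part, dict) and "text" in part:
                pieces.append(part["text"])
        return "".join(pieces)
    return str(content)
```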
Enable use_responses_api for native OpenAI provider, which supports
reasoning_effort with function tools across all model families.
Removes the UnifiedChatOpenAI subclass workaround.
Closes #403
- Add http_client and http_async_client parameters to all LLM clients
- OpenAIClient, GoogleClient, AnthropicClient now support custom httpx clients
- Fixes SSL certificate verification errors on Windows Conda environments
- Users can now pass custom httpx.Client with verify=False or custom certs
Fixes #369
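The plumbing reduces to forwarding only the clients the caller actually supplied, so provider defaults stay untouched; the helper name here is hypothetical. A Windows/Conda user would then pass e.g. `httpx.Client(verify=False)` or a client wired to a corporate CA bundle.

```python
def httpx_client_kwargs(http_client=None, http_async_client=None) -> dict:
    """Build kwargs for the underlying langchain chat model, forwarding
    custom httpx clients only when provided."""
    kwargs = {}
    if http_client is not None:
        kwargs["http_client"] = http_client
    if http_async_client is not None:
        kwargs["http_async_client"] = http_async_client
    return kwargs
```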
- OpenAI: add GPT-5.4, GPT-5.4 Pro; remove o-series and legacy GPT-4o
- Anthropic: add Claude Opus 4.6, Sonnet 4.6; remove legacy 4.1/4.0/3.x
- Google: add Gemini 3.1 Pro, 3.1 Flash Lite; remove deprecated
gemini-3-pro-preview and Gemini 2.0 series
- xAI: clean up model list to match current API
- Simplify UnifiedChatOpenAI GPT-5 temperature handling
- Add missing tradingagents/__init__.py (fixes broken pip install builds)
Add _clean_dataframe() to normalize stock DataFrames before stockstats:
coerce invalid dates/prices, drop rows missing Close, fill price gaps.
Also add on_bad_lines="skip" to all cached CSV reads.