19 Commits

Author SHA1 Message Date
Yijia-Xiao
7c37249f80 chore: release v0.2.4 — structured agents, checkpoint, memory log, providers
This release bundles substantial work since v0.2.3:

- Structured-output Research Manager, Trader, and Portfolio Manager
  (canonical with_structured_output pattern, single LLM call per agent,
  rendered markdown preserves the existing report shape).
- LangGraph checkpoint resume for crash recovery (--checkpoint flag).
- Persistent decision log replacing the per-agent BM25 memory, with
  deferred reflection driven by yfinance returns + alpha vs SPY.
- DeepSeek, Qwen, GLM, and Azure OpenAI provider support; dynamic
  OpenRouter model selection.
- Docker support; cache and logs moved to ~/.tradingagents/ to fix
  Docker permission issues.
- Windows UTF-8 encoding fix on every file I/O site.
- 5-tier rating consistency (Buy / Overweight / Hold / Underweight / Sell)
  across Research Manager, Portfolio Manager, signal processor, memory log.

Plus the small quality items in this commit:

1. Suppress noisy Pydantic serializer warnings from OpenAI Responses-API
   parse path by defaulting structured-output to method="function_calling"
   (root-cause fix, not a warnings filter — same typed result, no warnings).
2. Ship scripts/smoke_structured_output.py so contributors can verify
   their provider's structured-output path with one command.
3. Add opt-in memory_log_max_entries config — when set, oldest resolved
   memory log entries are pruned once the cap is exceeded; pending
   entries (unresolved) are never pruned.
4. backend_url default changed from the OpenAI URL to None so the
   per-provider client falls back to its native endpoint instead of
   leaking OpenAI's URL into Gemini / other clients.
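Item 3's pruning rule can be sketched as follows. This is a minimal illustration, not the package's actual code: the entry shape (a dict with a `"status"` field) and the function name are hypothetical.

```python
def prune_memory_log(entries, max_entries):
    """Drop the oldest *resolved* entries once the cap is exceeded.

    `entries` is assumed oldest-first; each entry is a dict whose
    "status" is either "pending" or "resolved" (hypothetical shape).
    Pending entries are never pruned, so the log can temporarily
    exceed the cap if it holds many unresolved entries.
    """
    if max_entries is None or len(entries) <= max_entries:
        return entries
    excess = len(entries) - max_entries
    kept = []
    for entry in entries:
        if excess > 0 and entry["status"] == "resolved":
            excess -= 1  # prune this resolved entry (oldest first)
            continue
        kept.append(entry)
    return kept
```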

CHANGELOG.md added with the full v0.2.4 entry. 92 tests pass without API keys.
2026-04-25 22:16:09 +00:00
Yijia-Xiao
4cbd4b086f feat: add LangGraph checkpoint resume for crash recovery (#594)
Long analyses can take many minutes; a crash or interruption forced users
to re-run from scratch and re-pay every LLM call.  This adds an opt-in
checkpoint layer backed by per-ticker SQLite databases so the graph
resumes from the last successful node.

How to use:
- CLI:    tradingagents analyze --checkpoint
- CLI:    tradingagents analyze --clear-checkpoints
- Python: config["checkpoint_enabled"] = True

Lifecycle:
- propagate() recompiles the graph with a SqliteSaver when enabled and
  injects a deterministic thread_id derived from ticker+date so the
  same ticker+date resumes while a different date starts fresh.
- On successful completion the per-thread checkpoint rows are cleared.
- The context manager is closed in a try/finally so a crash never
  leaks the SQLite connection or leaves the graph in checkpoint mode.

Storage: ~/.tradingagents/cache/checkpoints/<TICKER>.db
(override via TRADINGAGENTS_CACHE_DIR).
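The storage layout and deterministic thread id described above can be sketched like this. The helper names and the exact thread-id format are assumptions for illustration, not the checkpointer module's actual API; only the path layout and env-var override come from the commit.

```python
import os
from pathlib import Path

def checkpoint_db_path(ticker: str) -> Path:
    # Per-ticker SQLite DB under the cache dir; TRADINGAGENTS_CACHE_DIR
    # overrides the default ~/.tradingagents/cache location.
    cache_dir = os.environ.get(
        "TRADINGAGENTS_CACHE_DIR",
        str(Path.home() / ".tradingagents" / "cache"),
    )
    return Path(cache_dir) / "checkpoints" / f"{ticker.upper()}.db"

def thread_id_for(ticker: str, trade_date: str) -> str:
    # Deterministic: the same ticker+date resumes its checkpoint,
    # while a different date yields a fresh thread.
    return f"{ticker.upper()}:{trade_date}"
```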

The checkpointer module is new (tradingagents/graph/checkpointer.py)
and the GraphSetup now returns the uncompiled workflow so it can be
recompiled with a saver when needed.

Adds langgraph-checkpoint-sqlite>=2.0.0 dependency. 3 new tests verify
the crash/resume cycle and that a different date starts fresh.
2026-04-25 08:47:15 +00:00
Yijia-Xiao
ebd2e12e67 feat: replace per-agent BM25 memory with persistent decision log (#578, #563, #564, #579)
The previous per-agent BM25 memory was effectively dead code — its only
caller was a commented-out line in main.py. Replace it with a single
append-only markdown decision log driven by the propagate() lifecycle.

Lifecycle:
- store_decision() appends a pending entry at the end of every run
- _resolve_pending_entries() runs at the start of the next same-ticker
  run, fetches yfinance returns + alpha vs SPY, and writes one LLM
  reflection per resolved entry through an atomic temp-file rename
- Portfolio Manager consumes state["past_context"] (5 most recent
  same-ticker entries plus 3 cross-ticker reflection-only excerpts)
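The return/alpha arithmetic behind the deferred reflection is, in essence, the following. A sketch only: the real step fetches prices via yfinance, and the function name and price-based signature here are hypothetical.

```python
def realized_alpha(entry_price, exit_price, spy_entry, spy_exit):
    """Return (ticker_return_pct, alpha_pct) for a resolved entry.

    Alpha here is simply the ticker's percentage return minus SPY's
    percentage return over the same holding window; assumed to match
    the "+X.X% | +Y.Y%" fields in the resolved tag schema.
    """
    ticker_ret = (exit_price - entry_price) / entry_price * 100.0
    spy_ret = (spy_exit - spy_entry) / spy_entry * 100.0
    return ticker_ret, ticker_ret - spy_ret
```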

Storage at ~/.tradingagents/memory/trading_memory.md
(override: TRADINGAGENTS_MEMORY_LOG_PATH).

Tag schema:
- Pending:  [YYYY-MM-DD | TICKER | Rating | pending]
- Resolved: [YYYY-MM-DD | TICKER | Rating | +X.X% | +Y.Y% | Nd]
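The tag schema above can be parsed with a small regex. A sketch against the two documented formats; the real module's parser and field names may differ.

```python
import re

# Matches both documented tag forms:
#   [YYYY-MM-DD | TICKER | Rating | pending]
#   [YYYY-MM-DD | TICKER | Rating | +X.X% | +Y.Y% | Nd]
TAG_RE = re.compile(
    r"\[(?P<date>\d{4}-\d{2}-\d{2}) \| (?P<ticker>[A-Z.]+) \| "
    r"(?P<rating>[^|\]]+?) \| "
    r"(?:(?P<status>pending)|"
    r"(?P<ret>[+-]\d+(?:\.\d+)?)% \| (?P<alpha>[+-]\d+(?:\.\d+)?)% \| (?P<days>\d+)d)\]"
)

def parse_tag(line):
    m = TAG_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    # the "pending" branch only matches for unresolved entries
    d["resolved"] = d.pop("status") != "pending"
    return d
```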

Removes rank-bm25 dependency and the legacy reflect_and_remember()
plumbing across reflection.py, trading_graph.py, and the agent factories.

49 new tests in tests/test_memory_log.py cover the storage, deferred
reflection, prompt injection, and legacy-removal paths. Full suite
(58 tests) passes in under 2 seconds without API keys.
2026-04-25 08:24:03 +00:00
Yijia-Xiao
f85f5d9f5d test: lazy-load LLM provider clients and add API-key fixtures so the test suite runs cleanly without credentials (#588) 2026-04-25 07:41:36 +00:00

Yijia-Xiao
f85f5d9f5d test: lazy-load LLM provider clients and add API-key fixtures so the test suite runs cleanly without credentials (#588) 2026-04-25 07:41:36 +00:00
Zhigong Liu
6abc768c1d feat: replace per-agent BM25 memory with persistent append-only decision log
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-19 22:43:14 -04:00
Yijia-Xiao
bdc5fc62d3 chore: bump langchain-google-genai minimum to 4.0.0 for thought signature support 2026-04-04 07:28:03 +00:00
Yijia-Xiao
4641c03340 TradingAgents v0.2.3 2026-03-29 19:50:46 +00:00
Yijia-Xiao
589b351f2a TradingAgents v0.2.2 2026-03-22 23:47:56 +00:00
Yijia-Xiao
77755f0431 chore: consolidate install, fix CLI portability, normalize LLM responses
- Point requirements.txt to pyproject.toml as single source of truth
- Resolve welcome.txt path relative to module for CLI portability
- Include cli/static files in package build
- Extract shared normalize_content for OpenAI Responses API and
  Gemini 3 list-format responses into base_client.py
- Update README install and CLI usage instructions
2026-03-22 21:38:01 +00:00
Yijia-Xiao
551fd7f074 chore: update model lists, bump to v0.2.1, fix package build
- OpenAI: add GPT-5.4, GPT-5.4 Pro; remove o-series and legacy GPT-4o
- Anthropic: add Claude Opus 4.6, Sonnet 4.6; remove legacy 4.1/4.0/3.x
- Google: add Gemini 3.1 Pro, 3.1 Flash Lite; remove deprecated
  gemini-3-pro-preview and Gemini 2.0 series
- xAI: clean up model list to match current API
- Simplify UnifiedChatOpenAI GPT-5 temperature handling
- Add missing tradingagents/__init__.py (fixes the package build for pip install)
2026-03-15 23:34:50 +00:00
Yijia-Xiao
8a60662070 chore: remove unused chainlit dependency (CVE-2026-22218) 2026-03-15 16:16:42 +00:00
Yijia Xiao
5fec171a1e chore: add build-system config and update version to 0.2.0 2026-02-07 08:26:51 +00:00
Yijia Xiao
50c82a25b5 chore: consolidate dependencies to pyproject.toml, remove setup.py 2026-02-07 08:18:46 +00:00
RinZ27
66a02b3193 security: patch LangGrinch vulnerability in langchain-core 2026-02-05 11:01:53 +07:00
Yijia Xiao
b4b133eb2d fix: add typer dependency 2026-02-04 00:39:15 +00:00
Yijia Xiao
6cd35179fa chore: clean up dependencies and fix Ollama auth
- Remove unused packages: praw, feedparser, eodhd, akshare, tushare, finnhub
- Fix the Ollama client incorrectly requiring an API key
2026-02-03 23:08:12 +00:00
Yijia Xiao
d4dadb82fc feat: add multi-provider LLM support with thinking configurations
Models added:
- OpenAI: GPT-5.2, GPT-5.1, GPT-5, GPT-5 Mini, GPT-5 Nano, GPT-4.1
- Anthropic: Claude Opus 4.5/4.1, Claude Sonnet 4.5/4, Claude Haiku 4.5
- Google: Gemini 3 Pro/Flash, Gemini 2.5 Flash/Flash Lite
- xAI: Grok 4, Grok 4.1 Fast (Reasoning/Non-Reasoning)

Configs updated:
- Add unified thinking_level for Gemini (maps to thinking_level for Gemini 3,
  thinking_budget for Gemini 2.5; handles Pro's lack of "minimal" support)
- Add OpenAI reasoning_effort configuration
- Add NormalizedChatGoogleGenerativeAI for consistent response handling
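The unified thinking_level mapping described above might look roughly like this. The field names (thinking_level, thinking_budget) and the Pro "minimal" fallback come from the commit; the budget numbers and the fallback-to-"low" choice are illustrative assumptions.

```python
def gemini_thinking_kwargs(model: str, thinking_level: str) -> dict:
    """Map a unified thinking_level onto per-family Gemini kwargs.

    Gemini 3 models take thinking_level directly; Gemini 2.5 models
    take a numeric thinking_budget instead. Gemini 3 Pro lacks
    "minimal" support, so it is bumped to "low" here (assumed fallback).
    """
    if model.startswith("gemini-3"):
        if "pro" in model and thinking_level == "minimal":
            thinking_level = "low"  # Pro lacks "minimal" support
        return {"thinking_level": thinking_level}
    # Gemini 2.5: translate the level into a token budget (assumed values)
    budgets = {"minimal": 0, "low": 1024, "medium": 8192, "high": 24576}
    return {"thinking_budget": budgets[thinking_level]}
```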

Fixes:
- Fix Bull/Bear researcher display truncation
- Replace ChromaDB with BM25 for memory retrieval
2026-02-03 22:27:20 +00:00
Edward Sun
a5dcc7da45 update readme 2025-10-06 20:33:12 -07:00
Edward Sun
da84ef43aa main works, cli bugs 2025-06-15 22:20:59 -07:00