fix: stop leaking OpenAI base_url into non-OpenAI provider clients

Default config had backend_url='https://api.openai.com/v1' which was
forwarded to every provider client, including Google. ChatGoogleGenerativeAI
constructed requests against that base, producing malformed URLs like
https://api.openai.com/v1/v1beta/models/gemini-2.5-flash:generateContent
that 404 with an empty body.
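A minimal sketch of the failure mode, assuming the client simply appends its own API path to whatever base URL it is handed (names here are illustrative, not the actual ChatGoogleGenerativeAI internals):

```python
# Hypothetical sketch: a provider client that joins its own API path onto
# whatever base URL it was given. Forwarding OpenAI's base to the Gemini
# path reproduces the malformed URL from the report above.
def build_request_url(base_url: str, path: str) -> str:
    # Naive join: strip any trailing/leading slashes and concatenate.
    return base_url.rstrip("/") + "/" + path.lstrip("/")

url = build_request_url(
    "https://api.openai.com/v1",                        # leaked OpenAI base
    "v1beta/models/gemini-2.5-flash:generateContent",   # Gemini's own path
)
# -> "https://api.openai.com/v1/v1beta/models/gemini-2.5-flash:generateContent"
```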

Discovered while running propagate() against Gemini end-to-end. The
structured-output smoke test passed because that path constructed the
LLM directly, without going through the factory and without forwarding
backend_url; propagate() goes through TradingAgentsGraph.__init__,
which forwards config['backend_url'] to every provider.
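A sketch of why only one of the two paths was affected. The function names are hypothetical stand-ins for the smoke-test and factory paths described above:

```python
# Hypothetical sketch of the two construction paths (names illustrative).
def make_llm_direct(model: str) -> dict:
    # Smoke-test path: the client is built without a base_url, so it
    # falls back to its provider's own default endpoint.
    return {"model": model, "base_url": None}

def make_llm_via_factory(config: dict, model: str) -> dict:
    # propagate() path: the factory forwards config["backend_url"] to
    # every provider, including ones it was never meant for.
    return {"model": model, "base_url": config.get("backend_url")}
```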

Fix: default to None. Each provider client falls back to its own
endpoint (api.openai.com for OpenAI via _PROVIDER_CONFIG, Gemini's
default for Google, and so on). The CLI flow already sets backend_url
explicitly per provider when the user picks one, so that path is
unchanged.
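The fallback described above could look like the following sketch. The table entries are assumptions modeled on the _PROVIDER_CONFIG mentioned in this message, not the repository's actual contents:

```python
# Hypothetical per-provider defaults (illustrative; a None entry means
# "let that provider's client library pick its own endpoint").
_PROVIDER_CONFIG = {
    "openai": {"default_base_url": "https://api.openai.com/v1"},
    "google": {"default_base_url": None},
}

def resolve_base_url(provider: str, backend_url):
    # An explicitly configured backend_url always wins; otherwise each
    # provider falls back to its own endpoint instead of inheriting
    # another provider's URL.
    if backend_url is not None:
        return backend_url
    return _PROVIDER_CONFIG[provider]["default_base_url"]
```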

Verified: full propagate() now passes end-to-end on both
OpenAI gpt-5.4-mini and Gemini gemini-3-flash-preview, with all nine
structure/log/signal checks green for each.
Yijia-Xiao
2026-04-25 20:54:19 +00:00
parent bba147798f
commit 4016fd4efa


@@ -11,7 +11,12 @@ DEFAULT_CONFIG = {
     "llm_provider": "openai",
     "deep_think_llm": "gpt-5.4",
     "quick_think_llm": "gpt-5.4-mini",
-    "backend_url": "https://api.openai.com/v1",
+    # When None, each provider's client falls back to its own default endpoint
+    # (api.openai.com for OpenAI, generativelanguage.googleapis.com for Gemini, ...).
+    # The CLI overrides this per provider when the user picks one. Keeping a
+    # provider-specific URL here would leak (e.g. OpenAI's /v1 was previously
+    # being forwarded to Gemini, producing malformed request URLs).
+    "backend_url": None,
     # Provider-specific thinking configuration
     "google_thinking_level": None,  # "high", "minimal", etc.
     "openai_reasoning_effort": None,  # "medium", "high", "low"