Retrieves a token-budgeted context window from the graph via
query_context(), builds a grounded system prompt, and sends a
message to an LLM. Requires ellmer, which handles provider
connections and the tool-call loop that implements RLM-Graph traversal.
Usage
chat_with_context(
graph,
message,
provider = c("openai", "ollama", "github", "anthropic"),
model = NULL,
budget_tokens = 6000L,
seed_node = NULL,
min_relevance = 0.1,
...
)

Arguments

- graph
  An rrlm_graph/igraph object.

- message
  Character(1). User message / question.

- provider
  Character(1). LLM provider. One of "openai" (default), "ollama", "github", "anthropic".

- model
  Character(1) or NULL. Model identifier. When NULL (default) a sensible per-provider default is used: "gpt-4o-mini" (openai/github), "llama3.2" (ollama), "claude-3-5-haiku-latest" (anthropic).

- budget_tokens
  Integer(1). Context token budget passed to query_context(). Default 6000L.

- seed_node
  Character(1) or NULL. Forwarded to query_context().

- min_relevance
  Numeric(1). Forwarded to query_context().

- ...
  Additional arguments forwarded to the ellmer chat_*() constructor (e.g. base_url for ollama).
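Putting the arguments together, a call that tightens the budget and seeds traversal from a specific node might look like the sketch below. The package name "mypkg" and the node name "data_prep" are placeholders, and the call requires a configured provider, so treat this as illustrative rather than runnable as-is.

```r
# Illustrative call -- "mypkg" and "data_prep" are placeholder names.
g <- build_rrlm_graph("mypkg")
chat_with_context(
  g,
  "What does data_prep() return?",
  provider      = "openai",
  budget_tokens = 4000L,       # smaller context window than the 6000L default
  seed_node     = "data_prep", # start graph traversal from this node
  min_relevance = 0.2          # drop weakly related nodes from the context
)
```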
System prompt structure
The system prompt combines:
- The assembled context string (code snippets relevant to the query).
- A grounding constraint block that instructs the model to answer only from the provided context and to cite node names.
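The exact prompt template is internal to the package; a minimal sketch of the two-part layout described above could look like the following. The function name and wording are illustrative assumptions, not the package's actual template.

```r
# Illustrative only -- not the package's actual prompt template.
build_system_prompt <- function(context_string) {
  paste0(
    "## Context\n",
    context_string, "\n\n",
    "## Constraints\n",
    "Answer ONLY from the context above. ",
    "Cite the node names you used. ",
    "If the context is insufficient, say so."
  )
}
```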
Authentication
- "openai": Set the OPENAI_API_KEY environment variable.
- "ollama": No key required (local daemon). Set OLLAMA_BASE_URL to override the default http://localhost:11434.
- "github": Set the GITHUB_PAT environment variable. Requires access to the GitHub Models Marketplace (https://github.com/marketplace/models).
- "anthropic": Set the ANTHROPIC_API_KEY environment variable.
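Credentials can also be set from within R before the first call; the key values below are placeholders.

```r
# Set whichever variable matches your provider (values are placeholders).
Sys.setenv(OPENAI_API_KEY = "sk-...")

# For a non-default Ollama daemon, override the base URL:
Sys.setenv(OLLAMA_BASE_URL = "http://localhost:11434")
```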
Examples
if (FALSE) { # \dontrun{
g <- build_rrlm_graph("mypkg")
# OpenAI (default) -- requires OPENAI_API_KEY
chat_with_context(g, "How does data_prep() work?")
# GitHub Models Marketplace -- requires GITHUB_PAT
chat_with_context(g, "How does data_prep() work?", provider = "github")
# Local Ollama
chat_with_context(g, "How does data_prep() work?",
provider = "ollama", model = "llama3.2")
} # }