# Models
ProxyAI connects you to powerful large language models (LLMs) for chat and code generation.
## Selecting a Model
You can choose your preferred model in two ways:
- From the Chat Window: select a model directly from the dropdown in the chat interface.
- From Settings: go to Settings/Preferences > Tools > ProxyAI > Providers, then select your provider and choose your model.
## Available Models via ProxyAI Cloud
The models listed below are available through the default ProxyAI Cloud service. Model availability and usage limits depend on your ProxyAI Cloud plan (Free or Pro).
### Chat Models
Model | Provider | Free | Pro |
---|---|---|---|
o3-mini | OpenAI | ✅ | |
gpt-4o | OpenAI | ✅ | |
gpt-4o-mini | OpenAI | ✅ | ✅ |
claude-3.7-sonnet | Anthropic | ✅ | |
gemini-pro-2.5 | Google | ✅ | |
gemini-flash-2.0 | Google | ✅ | ✅ |
qwen-2.5-coder-32b | Fireworks | ✅ | ✅ |
llama-3.1-405b | Fireworks | ✅ | ✅ |
deepseek-r1 | Fireworks | ✅ | |
deepseek-v3 | Fireworks | ✅ | ✅ |
### Code Models
Model | Provider | Free | Pro | Type |
---|---|---|---|---|
gpt-3.5-turbo-instruct | OpenAI | ✅ | ✅ | Autocomplete |
codestral | Mistral | ✅ | ✅ | Autocomplete |
qwen-2.5-coder-32b | Fireworks | ✅ | ✅ | Autocomplete |
zeta | ProxyAI | ✅ | ✅ | Next Edits |
Note: Model availability may change over time. When using your own API key, availability depends on the provider's offerings.
## Context Windows
A model's context window defines how much information (measured in tokens) it can process at once, including both your inputs and the model's responses.
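Exact token counts depend on the model's tokenizer, but a common rule of thumb is roughly four characters of English text per token. The sketch below uses that heuristic for a quick fit check before sending a prompt; the function names and the 4-characters-per-token ratio are illustrative assumptions, not part of ProxyAI:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters of English text per token.

    Real tokenizers (e.g. OpenAI's tiktoken) give exact counts; this
    heuristic is only a quick sanity check.
    """
    return max(1, len(text) // 4)


def fits_window(text: str, window_tokens: int = 16_000) -> bool:
    """Check whether a prompt likely fits a given context window."""
    return estimate_tokens(text) <= window_tokens


prompt = "Explain this function:\n" + "def add(a, b): return a + b\n"
print(estimate_tokens(prompt), fits_window(prompt))
```

For precise budgeting against a specific model, use that model's own tokenizer rather than a character heuristic.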
### ProxyAI Cloud
- Each chat session uses a managed context window up to 16,000 tokens
- ProxyAI automatically summarizes or removes older parts of the conversation to stay within this service-specific limit
- Keep your total input context (files, selections, etc.) under 200,000 tokens for optimal processing
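ProxyAI's summarization logic is internal to the service, but the trimming side of this behavior can be pictured as dropping the oldest messages until the conversation fits the token budget. A minimal sketch under that assumption (all names and the character-based token estimate are illustrative, not ProxyAI's actual implementation):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters of English text per token.
    return max(1, len(text) // 4)


def trim_history(messages: list[str], budget: int = 16_000) -> list[str]:
    """Drop the oldest messages until the estimated total token count
    fits within the budget; the most recent messages are kept."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > budget:
        kept.pop(0)  # remove the oldest message first
    return kept


history = ["old question " * 2000, "recent question", "latest answer"]
print(len(trim_history(history, budget=1_000)))
```

A production implementation would summarize dropped turns rather than discard them outright, which is closer to what the managed context window described above does.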
### Other Providers (OpenAI, Anthropic, Local, Custom)
- When using your own API key or running models locally, context window size is determined by the specific model and provider you choose
- ProxyAI forwards your context as-is; the effective limit is whatever the provider enforces
- Check your chosen provider's documentation for their specific context window limitations
For complex or distinct tasks, regardless of the provider, starting a new chat session can improve performance and relevance.
## Model Hosting and Privacy
All ProxyAI Cloud models are hosted by their original providers (OpenAI, Anthropic, etc.), trusted partners, or ProxyAI directly, primarily on US-based infrastructure.
When connecting to other providers or using local models, hosting location and privacy considerations are determined by those services or by your local environment.