Models

ProxyAI connects you to powerful large language models (LLMs) for chat and code generation.

Selecting a Model

You can choose your preferred model in two ways:

  • From the Chat Window: select your model directly from the dropdown in the chat interface.
  • From Settings: go to Settings/Preferences > Tools > ProxyAI > Providers, select your provider, and choose your model.

Available Models via ProxyAI Cloud

The models listed below are available through the default ProxyAI Cloud service. Model availability and usage limits depend on your ProxyAI Cloud plan (Free or Pro).

Chat Models

Model                   Provider
o3-mini                 OpenAI
gpt-4o                  OpenAI
gpt-4o-mini             OpenAI
claude-3.7-sonnet       Anthropic
gemini-pro-2.5          Google
gemini-flash-2.0        Google
qwen-2.5-coder-32b      Fireworks
llama-3.1-405b          Fireworks
deepseek-r1             Fireworks
deepseek-v3             Fireworks

Code Models

Model                   Provider    Type
gpt-3.5-turbo-instruct  OpenAI      Autocomplete
codestral               Mistral     Autocomplete
qwen-2.5-coder-32b      Fireworks   Autocomplete
zeta                    ProxyAI     Next Edits

Note: Model availability may change over time. When using your own API key, availability depends on the provider's offerings.

Context Windows

A model's context window defines how much information (measured in tokens) it can process at once, including both your inputs and the model's responses.

ProxyAI Cloud

  • Each chat session uses a managed context window up to 16,000 tokens
  • ProxyAI automatically summarizes or removes older parts of the conversation to stay within this service-specific limit
  • Keep your total input context (files, selections, etc.) under 200,000 tokens for optimal processing
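The managed context window above can be pictured as a simple budget: estimate the token cost of each message and drop the oldest ones until the conversation fits. The sketch below is an illustration only, not ProxyAI's actual implementation; the 4-characters-per-token heuristic is an assumption, and real providers use their own tokenizers.

```python
# Illustrative sketch: keeping a chat history within a fixed token budget.
# The 16,000-token budget mirrors ProxyAI Cloud's managed limit; the
# characters-per-token ratio is a rough English-text heuristic, not a
# real tokenizer.

CONTEXT_BUDGET_TOKENS = 16_000

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages: list[dict]) -> list[dict]:
    """Drop the oldest messages until the estimated total fits the budget.

    Each message is a dict like {"role": "user", "content": "..."}.
    """
    trimmed = list(messages)
    while trimmed and sum(estimate_tokens(m["content"]) for m in trimmed) > CONTEXT_BUDGET_TOKENS:
        trimmed.pop(0)  # remove the oldest message first
    return trimmed

history = [
    {"role": "user", "content": "x" * 40_000},      # ~10,000 tokens
    {"role": "assistant", "content": "y" * 40_000}, # ~10,000 tokens
    {"role": "user", "content": "What does this function do?"},
]
fitted = trim_history(history)
# Only the oldest message is dropped; the remaining two fit the budget.
```

A production client would summarize dropped turns rather than discard them outright, which is closer to the behavior described above.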

Other Providers (OpenAI, Anthropic, Local, Custom)

  • When using your own API key or running models locally, context window size is determined by the specific model and provider you choose
  • ProxyAI passes your context to the provider, but the ultimate limit is set by the provider
  • Check your chosen provider's documentation for their specific context window limitations

Regardless of the provider, starting a new chat session for complex or unrelated tasks can improve performance and relevance.

Model Hosting and Privacy

All ProxyAI Cloud models are hosted by their original providers (OpenAI, Anthropic, etc.), trusted partners, or ProxyAI directly, primarily on US-based infrastructure.

When connecting to other providers or using local models, hosting location and privacy considerations follow those specific services or your local environment settings.