Configuring an LLM connection
To enable AI and configure an LLM connection:

- In the OpenOps left sidebar, click the Settings icon at the bottom.
- In the Settings view, click OpenOps AI.

- Under AI Connection, click the dropdown and select Create new connection. The Create AI Connection view opens.

- In the Provider dropdown, select one of the supported LLM providers. Anthropic, Azure OpenAI, Cerebras, Cohere, Deep Infra, DeepSeek, Google Generative AI, Google Vertex AI, Groq, Mistral, OpenAI, OpenAI-compatible providers, Perplexity, Together.ai, and xAI Grok are currently supported.
- In the Model dropdown, select one of the models your LLM provider supports. (If you’re configuring Azure OpenAI, select Custom instead of a model and complete the other Azure OpenAI-specific steps.)
- (Optional) If the model you’re looking for is not listed, specify a custom model in the Custom model field. This overrides whatever you’ve selected under Model.
- Enter your API key for the selected LLM provider in the API Key field.
- (Optional) Enter a Base URL for the selected model. This is useful if you want to use a proxy or if your LLM provider does not use the default base URL. If you selected OpenAI Compatible as the provider, the base URL is required.
- (Optional) Use the Provider settings and Model settings fields to specify custom parameters as JSON. The JSON schema varies depending on the chosen provider and model:
- See the Azure OpenAI and Google Vertex AI instructions below for custom provider settings required by these providers.
- If you’ve selected OpenAI, use Provider settings for JSON you’d normally pass to the `createOpenAI` function, and Model settings for JSON you’d normally pass to the `streamText` function (see the example after this list). For more details, see the OpenAI documentation.
- Click Save to apply your changes in the Create AI Connection view.
- (Optional) Back in the OpenOps AI section, if you’re working with AWS and you want your AI connection to have access to AWS MCP servers, go to the MCP section and select an AWS connection in the AWS Cost dropdown.
This enables access to AWS Cost Explorer MCP Server, AWS Pricing MCP Server, and AWS Billing and Cost Management MCP Server.
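For example, with OpenAI selected as the provider, the two settings fields might look like the following sketch. The key names come from the AI SDK’s `createOpenAI` and `streamText` options; the values are illustrative placeholders, not recommendations.

Provider settings:

```json
{
  "organization": "org-your-org-id"
}
```

Model settings:

```json
{
  "temperature": 0.2,
  "topP": 0.9
}
```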
Provider-specific settings
Some LLM providers require additional configuration beyond the API key, or use non-standard settings. This section offers guidance for some of the most common providers in this category.

Azure OpenAI
When configuring an AI connection where Azure OpenAI serves as the provider:

- In the Model dropdown, select Custom.
- In the Custom model field, enter the name of a model deployment in your Azure OpenAI resource (for example, `my-gpt-5`).
- In the API Key field, enter the API key from a service principal that has access to your Azure OpenAI resource.
- In the Provider settings field, enter JSON along these lines (a minimal sketch; at minimum, set `resourceName`):
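```json
{
  "resourceName": "<your-azure-openai-resource-name>"
}
```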
The value of `resourceName` must match the name of your Azure OpenAI resource: the same name that appears in the endpoint URL, e.g. `https://<resourceName>.openai.azure.com/`, or, if you’re using Azure AI Foundry, `https://<resourceName>.services.ai.azure.com/`.
Google Vertex AI
When configuring an AI connection where Google Vertex AI serves as the provider, add JSON along these lines to the Provider settings field (a sketch; the key names follow the AI SDK’s Vertex provider options):
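```json
{
  "project": "your-google-cloud-project-id",
  "location": "global"
}
```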
- `your-google-cloud-project-id` is the ID of the Google Cloud project where your API key was created. You can look up the project ID in the Google Cloud Console by opening the project selector.
- Use `global` for `location` or, if needed, specify one of the supported locations for your chosen model.
