Configuration of the Azure OpenAI agent in AI Agent manager not working

Hi,

I am attempting to configure gpt-5 from Azure OpenAI in Cumulocity, but I am unable to set a resource name using the POST request.

The following POST request has been sent, which yields a 201 response:

{
  "resourceName": "{{RESOURCE_NAME}}",
  "apiKey": "{{API_KEY}}",
  "name": "{{DEPLOYMENT_NAME}}",
  "model": "{{MODEL_NAME}}"
}

To the following URL:
{{BASE_URL}}/service/ai/provider

However, testing a new agent yields the following error:
Error: { "statusCode": 400, "message": "Error while prompting: AI_LoadSettingError: Azure OpenAI resource name setting is missing. Pass it using the 'resourceName' parameter or the AZURE_RESOURCE_NAME environment variable." }

There must be an obvious step I am missing, but I cannot for the life of me figure out how to resolve this issue.

`resourceName` is currently not supported by the UI. However, you can set it via the API as you are doing. Maybe the `{{DEPLOYMENT_NAME}}` is not correct. What is the result if you request your global provider again with `GET /service/ai/provider`?

We have tested and verified the following request as working:

POST /service/ai/provider

{
"resourceName": "<<resource-name>>",
"apiKey": "<<api key>>",
"name": "azure",
"model": "gpt-4.1"   
}

We are working on UI support for this.

Regards

Jan

The following result is received after the GET request:
{

"resourceName": "oa00openaisw",

"apiKey": "***",

"name": "dunea_openAI_1",

"model": "gpt-5-chat"

}

You need to use "azure" as the `name` parameter, so that the backend can choose the right proxy implementation.

I have set the name to both 'azure' and 'Azure' and neither was successful.

Is there any other error message? The `name` property is used internally to decide whether to use the azureProvider from the underlying Vercel AI SDK. Here is the documentation about it:

And there the Typescript interface:

interface AzureOpenAIProviderSettings {
  /**
   * Name of the Azure OpenAI resource. Either this or `baseURL` can be used.
   *
   * The resource name is used in the assembled URL:
   * `https://{resourceName}.openai.azure.com/openai/deployments/{modelId}{path}`.
   */
  resourceName?: string;
  /**
   * Use a different URL prefix for API calls, e.g. to use proxy servers.
   * Either this or `resourceName` can be used.
   * When a baseURL is provided, the resourceName is ignored.
   *
   * With a baseURL, the resolved URL is `{baseURL}/{modelId}{path}`.
   */
  baseURL?: string;
  /**
   * API key for authenticating requests.
   */
  apiKey?: string;
  /**
   * Custom headers to include in the requests.
   */
  headers?: Record<string, string>;
  /**
   * Custom fetch implementation. You can use it as a middleware to intercept requests,
   * or to provide a custom fetch implementation for e.g. testing.
   */
  fetch?: FetchFunction;
  /**
   * Custom api version to use. Defaults to `2024-10-01-preview`.
   */
  apiVersion?: string;
}
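The doc comments above can be condensed into a small helper that mirrors how the SDK resolves the request URL from these settings (a minimal sketch based only on the documented URL patterns; `resolveRequestURL` is a hypothetical name, not the SDK's actual code):

```typescript
// Sketch of the documented URL resolution for AzureOpenAIProviderSettings.
// Hypothetical helper; the real SDK logic lives inside the azure provider.
function resolveRequestURL(
  settings: { resourceName?: string; baseURL?: string },
  modelId: string,
  path: string,
): string {
  // When a baseURL is provided, the resourceName is ignored.
  if (settings.baseURL) {
    return `${settings.baseURL}/${modelId}${path}`;
  }
  if (settings.resourceName) {
    return `https://${settings.resourceName}.openai.azure.com/openai/deployments/${modelId}${path}`;
  }
  // Neither setting present: this is the AI_LoadSettingError case seen earlier.
  throw new Error("Azure OpenAI resource name setting is missing.");
}
```

This also explains the original error: if neither `resourceName` nor `baseURL` reaches the provider, no URL can be assembled at all.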

So using "name": "azure" (lowercase) is definitely right. Maybe you need to set a different `baseURL`? Is there any different error message if you use the name "azure"?

I have checked the microservice logs and I see that the provider is undefined, which is odd, right?

Furthermore, I only get the missing-resourceName error during the creation of an AI Agent.

It depends. There are basically two provider settings:

  • local provider → the one that is attached to the agent. This gets merged with the global provider.
  • global provider → the one that is configured via the POST request you shared.

If the logs show undefined for a provider, it could refer to the "local provider" while still using the global provider. In the beginning, it is best to use one global provider and no local provider. A local provider can be used if a different API key or model is needed for exactly one agent.
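The merge of the two settings can be sketched as follows (a minimal sketch under the assumption that local fields simply override global ones; the agent manager's actual merge logic may differ):

```typescript
interface ProviderSettings {
  resourceName?: string;
  apiKey?: string;
  name?: string;
  model?: string;
}

// Hypothetical sketch: per-agent (local) settings override the global provider.
// An undefined local provider means the global provider is used unchanged.
function mergeProviders(
  global: ProviderSettings,
  local?: ProviderSettings,
): ProviderSettings {
  // Spread order makes local fields win over global ones.
  return { ...global, ...(local ?? {}) };
}
```

Under this assumption, an agent with no local provider still gets the full global configuration, which is why an "undefined" local provider in the logs is not necessarily a problem.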

I guess you are using 0.2.2 of the agent-manager? That version performs a test prompt on each creation of an agent. Maybe something is going wrong there? Can you still use the agent afterwards?

I had not actually saved the agent yet; doing so yields a new error in the microservice:
{
  res: Response {
    size: 0,
    timeout: 0,
    [Symbol(Body internals)]: { body: [PassThrough], disturbed: true, error: null },
    [Symbol(Response internals)]: {
      url: 'http://cumulocity:8111/tenant/options/ai/credentials.agent-agent-pwu1n6',
      status: 404,
      statusText: 'Not Found',
      headers: [Headers],
      counter: 0
    }
  },
  data: {
    message: 'Unable to find option by given key: ai/credentials.agent-agent-pwu1n6',
    error: 'options/Not Found',
    info: 'https://cumulocity.com/api/core/'
  }
}

We are indeed using 0.2.2 of the agent manager.

This is normal. The MS checks whether the tenant options already exist so that it does not overwrite another agent. It is only a "debug" message, not an "error".

As long as the MS is in private preview, debug logging is activated by default and logs a lot. Not necessarily everything you need to worry about. More important is what the HTTP request responded with.

If you mean the HTTP request to the Azure resource, that is not performed, since the resourceName variable is not set according to the system. Otherwise, I am unsure which HTTP request you mean.

I have tested a local provider with the same JSON as the POST request, and this does in fact work. I get a new error regarding the temperature value, which means that there is definitely a connection to the Azure resource.

Error while prompting: AI_APICallError: Unsupported value: 'temperature' does not support 0 with this model. Only the default (1) value is supported.

I also cannot seem to set the temperature by simply passing it as a parameter in the JSON.
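The error above suggests this model rejects any non-default temperature. A guard like the following (a hypothetical helper, not part of the agent manager) illustrates the workaround of omitting the unsupported value so the model's default of 1 applies:

```typescript
// Hypothetical sketch: some models (gpt-5-chat in this thread) only accept
// the default temperature of 1, so the parameter must be omitted entirely
// rather than sent as 0.
function buildPromptOptions(
  model: string,
  temperature?: number,
): { model: string; temperature?: number } {
  // Assumed list of models that reject non-default temperatures.
  const defaultTemperatureOnly = new Set(["gpt-5-chat"]);
  if (temperature !== undefined && defaultTemperatureOnly.has(model)) {
    // Drop the unsupported value instead of triggering AI_APICallError.
    return { model };
  }
  return temperature === undefined ? { model } : { model, temperature };
}
```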

By adding the temperature variable to the advanced settings and by setting a local provider, I am now able to use the AI model I provided.

Thank you for the support!