
Class LLMChatClient

A chat client for interacting with language models (LLMs).

The supported models at this time are: "gpt-4o", "gpt-4o-mini", "o1", "o3-mini".

Model-specific parameters are expected to be provided via the "extra" dictionary parameter.

Parameter combinations are documented per model:

  • Model: "gpt-4o"
  • Model: "gpt-4o-mini"
  • Model: "o1"
  • Model: "o3-mini"
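As a hedged sketch, model-specific options would be packed into the "extra" dictionary before the call. The key names below are illustrative assumptions only; this page does not document which keys each model accepts:

```csharp
using System.Collections.Generic;

// Illustrative only: the keys accepted in "extra" depend on the model
// (e.g., sampling parameters for gpt-4o vs. reasoning parameters for o1/o3-mini).
var extra = new Dictionary<string, object>
{
    ["temperature"] = 0.2  // hypothetical key; consult the per-model parameter documentation
};
```

The dictionary is passed unchanged as the optional "extra" argument of InteractAsync.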

Namespace: Workspace.XBR.Xperiflow.LanguageModels

Assembly: Xperiflow.dll

Declaration
public class LLMChatClient

Methods

InteractAsync(string, List<LlmChatMessage>, int?, Dictionary<string, object>?, CancellationToken)

Sends a chat request to the OpenAI model using an explicit list of chat messages.

Declaration
public Task<string> InteractAsync(string modelName, List<LlmChatMessage> messages, int? maxTokens = null, Dictionary<string, object>? extra = null, CancellationToken cancellationToken = default)
Returns

Task<System.String>

The response from the LLM service as a string.

Parameters
  • modelName (System.String): The name of the OpenAI model.
  • messages (System.Collections.Generic.List<Workspace.XBR.Xperiflow.Core.RestApi.AI.LlmChatMessage>): The chat messages sent to the model.
  • maxTokens (System.Nullable<System.Int32>): The maximum number of tokens allowed in the response. Optional.
  • extra (System.Collections.Generic.Dictionary<System.String, System.Object>): Additional model-specific parameters for the request. Optional.
  • cancellationToken (System.Threading.CancellationToken): A token to monitor for cancellation requests.
Exceptions

  • System.NotSupportedException: Thrown if the specified model is not supported.
  • Workspace.XBR.Xperiflow.LanguageModels.LlmChatClientException: Thrown when an error occurs during the request to the LLM service.
  • OneStream.Shared.Common.XFException: Thrown when an exception occurs during execution.
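A hedged usage sketch of this overload. How LLMChatClient and LlmChatMessage are constructed is not documented on this page, so the parameterless constructor and the empty message list below are assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using Workspace.XBR.Xperiflow.Core.RestApi.AI;
using Workspace.XBR.Xperiflow.LanguageModels;

// Assumption: a parameterless constructor; actual construction may differ.
var client = new LLMChatClient();

// Assumption: LlmChatMessage construction is documented elsewhere.
var messages = new List<LlmChatMessage>();

try
{
    string reply = await client.InteractAsync(
        modelName: "gpt-4o",
        messages: messages,
        maxTokens: 512,
        cancellationToken: CancellationToken.None);
}
catch (NotSupportedException)
{
    // The model name was not one of the supported models.
}
catch (LlmChatClientException)
{
    // The request to the LLM service failed.
}
```

Passing a real CancellationToken (rather than CancellationToken.None) lets callers abort long-running requests.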

InteractAsync(string, string, string?, int?, Dictionary<string, object>?, CancellationToken)

Sends a chat request to the OpenAI model using a user prompt and an optional system prompt.

Declaration
public Task<string> InteractAsync(string modelName, string userPrompt, string? systemPrompt = null, int? maxTokens = null, Dictionary<string, object>? extra = null, CancellationToken cancellationToken = default)
Returns

Task<System.String>

The response from the LLM service as a string.

Parameters
  • modelName (System.String): The name of the OpenAI model.
  • userPrompt (System.String): The user input or message to send to the model.
  • systemPrompt (System.String): An optional system-level prompt to guide the model's behavior.
  • maxTokens (System.Nullable<System.Int32>): The maximum number of tokens allowed in the response. Optional.
  • extra (System.Collections.Generic.Dictionary<System.String, System.Object>): Additional model-specific parameters for the request. Optional.
  • cancellationToken (System.Threading.CancellationToken): A token to monitor for cancellation requests.
Exceptions

  • System.NotSupportedException: Thrown if the specified model is not supported.
  • Workspace.XBR.Xperiflow.LanguageModels.LlmChatClientException: Thrown when an error occurs during the request to the LLM service.
  • OneStream.Shared.Common.XFException: Thrown when an exception occurs during execution.
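A hedged sketch of the prompt-based overload, which avoids building a message list for simple one-turn interactions. The parameterless constructor is an assumption, as this page does not document construction:

```csharp
using Workspace.XBR.Xperiflow.LanguageModels;

// Assumption: a parameterless constructor; actual construction may differ.
var client = new LLMChatClient();

// systemPrompt, maxTokens, extra, and cancellationToken are all optional.
string reply = await client.InteractAsync(
    modelName: "gpt-4o-mini",
    userPrompt: "Summarize last quarter's variance in two sentences.",
    systemPrompt: "You are a concise financial analyst.",
    maxTokens: 256);
```

This overload is a convenience over the message-list overload; prefer the latter when you need multi-turn conversation history.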

Inherited Members

  • System.Object.Equals(System.Object)
  • System.Object.Equals(System.Object,System.Object)
  • System.Object.GetHashCode
  • System.Object.GetType
  • System.Object.MemberwiseClone
  • System.Object.ReferenceEquals(System.Object,System.Object)
  • System.Object.ToString
