Playing with LLMs in AL is fun. I recently did a live-coding session at Directions, where I used my play-with-LLMs codeunit. Several people came up afterwards and asked if they could have a copy of the codeunit so they could join the fun. This codeunit supports Azure OpenAI, ChatGPT, and locally hosted LLMs via lmstudio.ai. Check out the video!

In this video, Erik walks us through his AI playground codeunit for Business Central — a lightweight, flexible tool he built for experimenting with different large language models (LLMs) directly from AL code. Fresh from a live coding session at Directions North America where he demonstrated formatting addresses using AI, Erik shares the codeunit that many attendees asked for after his session.
Why an AI Playground Codeunit?
Microsoft’s built-in AI capabilities in Business Central all route through Azure Copilot, which makes perfect sense for production scenarios. But what if you want to experiment with other providers — ChatGPT’s API directly, or even a local LLM running on your own machine? That’s exactly the gap Erik’s playground codeunit fills.
The codeunit supports three AI providers through a simple enum:
enum 50100 "AI Provider"
{
    value(0; AzureOpenAI)
    {
        Caption = 'Azure Open AI';
    }
    value(1; ChatGPTOpenAI)
    {
        Caption = 'ChatGPT Open AI';
    }
    value(10; LMStudio)
    {
        Caption = 'LM Studio';
    }
}
An important distinction Erik makes: Azure OpenAI and ChatGPT OpenAI are not the same thing. While they share the same underlying technology, there are subtle differences in how their REST APIs work — how you specify JSON mode, how credentials are passed, and what features are available. ChatGPT’s API appears to be a newer incarnation with more features, but they’re clearly different enough to warrant separate handling.
Getting Started in Three Lines
The beauty of this codeunit is its simplicity. You can get AI working in just three lines of AL code:
ai.Setup(Enum::"AI Provider"::ChatGPTOpenAI, 'https://api.openai.com/v1/chat/completions', 'your-api-key-here');
ai.AddUser('Hello');
message(ai.GetText());
The Setup procedure takes three parameters: the provider, the URL to the chat completion endpoint, and your credentials. It also initializes sensible defaults for temperature (0.7), top_p (0.95), and max tokens (4000):
procedure Setup(Provider: Enum "AI Provider"; URL: Text; AccessKey: Text)
begin
    CurrentProvider := Provider;
    _Key := AccessKey;
    _URL := URL;
    _temperature := 0.7;
    _top_p := 0.95;
    _maxtokens := 4000;
end;
System Messages and User Messages
If you’ve used ChatGPT or any similar tool, you know the pattern: you type something, hit enter, and get a reply. The codeunit mirrors this with two key procedures:
- AddUser() — adds text that represents what the user is saying (your prompt)
- AddSystem() — adds the preamble that tells the AI what it is and how it should behave
Erik demonstrates this with a fun example:
ai.Setup(Enum::"AI Provider"::ChatGPTOpenAI, 'https://api.openai.com/v1/chat/completions', secretKey);
ai.AddSystem('You are a very rude personal assistant, whenever you get a chance, try to answer the question, but with an insult, preferably in French');
ai.AddUser('Are you busy?');
message(ai.GetText());
The response? “Well, I’m not too busy for your nonsense, but I guess I have to make time for your endless questions. What do you want?” — demonstrating that system messages work exactly as expected.
Switching Models on the Fly
One advantage of using the ChatGPT OpenAI API directly (versus Azure OpenAI where you deploy specific models) is the ability to switch models with a single line:
ai.Model('gpt-4o');
If you don’t specify a model, it defaults to gpt-4o-mini. This is handled in the payload builder:
CurrentProvider::ChatGPTOpenAI:
    begin
        if _Model <> '' then
            Payload.Add('model', _Model)
        else
            Payload.Add('model', 'gpt-4o-mini');
    end;
Under the Hood: How It Works
The codeunit is only a couple hundred lines of code. Both AddSystem and AddUser simply collect messages into lists of text. When you call GetText() or GetJson(), it triggers the actual API call.
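The video doesn't show these two procedures, but given that BuildPayload iterates over the _System and _User lists, they are presumably little more than this sketch:

procedure AddSystem(SystemMsg: Text)
begin
    // Collect the system preamble; nothing is sent until GetText()/GetJson() is called.
    _System.Add(SystemMsg);
end;

procedure AddUser(UserMsg: Text)
begin
    // Collect the user prompt the same way.
    _User.Add(UserMsg);
end;

Because the messages are only collected here, you can call AddSystem and AddUser multiple times before triggering a single API call.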
The TryCall Function
The core HTTP communication happens in the TryCall procedure, which is a try function so you can handle errors gracefully. It builds an HTTP request, sets the appropriate headers based on the provider, sends the request, and parses the response:
[TryFunction]
procedure TryCall(var Result: Text)
var
    Client: HttpClient;
    Request: HttpRequestMessage;
    Response: HttpResponseMessage;
    Headers: HttpHeaders;
    ResponseTxt: Text;
    ResponseJson: JsonObject;
    Choices: JsonArray;
    Choice: JsonObject;
    T: JsonToken;
begin
    Request.SetRequestUri(_URL);
    Request.Method('POST');
    Request.GetHeaders(Headers);
    case CurrentProvider of
        CurrentProvider::AzureOpenAI:
            if _Key <> '' then
                Headers.Add('api-key', _Key);
        CurrentProvider::ChatGPTOpenAI:
            if _Key <> '' then
                Headers.Add('Authorization', 'bearer ' + _Key);
    end;
    Headers.Add('User-Agent', 'Business Central');
    Request.Content(BuildContent(BuildPayload()));
    Client.Timeout(300000);
    if Client.Send(Request, Response) then begin
        if Response.IsBlockedByEnvironment() then
            error('"Allow HttpClient Requests" is turned off for this extension in Extension Management');
        Response.Content().ReadAs(ResponseTxt);
        if Response.IsSuccessStatusCode() then begin
            ResponseJson.ReadFrom(ResponseTxt);
            if ResponseJson.Get('choices', T) then begin
                Choices := T.AsArray();
                if Choices.Get(0, T) then begin
                    Choice := T.AsObject();
                    if Choice.Get('message', T) then
                        if T.AsObject().Get('content', T) then begin
                            Result := T.AsValue().AsText();
                            _LastResult := Result;
                        end;
                end else
                    error('Unexpected GPT result: %1', ResponseTxt);
            end else
                error('Unexpected GPT result: %1', ResponseTxt);
        end else
            error('GPT HTTP Error: %1 (%2)', Response.HttpStatusCode, ResponseTxt);
    end else
        error('Http Error: %1', GetLastErrorText());
end;
Notice how credential handling differs between providers — Azure OpenAI uses an api-key header while ChatGPT OpenAI uses a bearer token in the Authorization header.
Building the Payload
The BuildPayload procedure constructs the JSON structure that gets sent to the API. It assembles system messages and user messages into the standard chat completion format, then adds provider-specific settings:
procedure BuildPayload(): JsonObject
var
    SingleContent: JsonObject;
    Contents: JsonArray;
    Messages: JsonArray;
    MsgTxt: Text;
    Msg: JsonObject;
    ResponseFormat: JsonObject;
    Payload: JsonObject;
begin
    foreach MsgTxt in _System do begin
        Clear(SingleContent);
        SingleContent.Add('type', 'text');
        SingleContent.Add('text', MsgTxt);
        Contents.Add(SingleContent);
    end;
    Msg.Add('content', Contents);
    Msg.Add('role', 'system');
    Messages.Add(Msg);
    Clear(Contents);
    foreach MsgTxt in _User do begin
        Clear(SingleContent);
        SingleContent.Add('type', 'text');
        SingleContent.Add('text', MsgTxt);
        Contents.Add(SingleContent);
    end;
    Clear(Msg);
    Msg.Add('content', Contents);
    Msg.Add('role', 'user');
    Messages.Add(Msg);
    // Provider-specific configuration...
    Payload.Add('messages', Messages);
    Payload.Add('temperature', _temperature);
    Payload.Add('top_p', _top_p);
    Payload.Add('max_tokens', _maxtokens);
    Payload.Add('stream', false);
    exit(Payload);
end;
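The provider-specific section is elided above, but the declared _JsonObjectMode flag and ResponseFormat variable hint at what it contains. A plausible sketch for the ChatGPT OpenAI branch, combining the default-model logic shown earlier with JSON mode (response_format is the field OpenAI's chat completions API uses to request JSON output; treat the exact details as an assumption, not Erik's verbatim code):

// Sketch (assumption): provider-specific settings inside BuildPayload.
case CurrentProvider of
    CurrentProvider::ChatGPTOpenAI:
        begin
            if _Model <> '' then
                Payload.Add('model', _Model)
            else
                Payload.Add('model', 'gpt-4o-mini');
            if _JsonObjectMode then begin
                // Ask the API to return a JSON object rather than free text.
                ResponseFormat.Add('type', 'json_object');
                Payload.Add('response_format', ResponseFormat);
            end;
        end;
end;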
The UTF-8 Encoding Workaround
Erik highlights an important quirk with Business Central: it can be “weird” with how it converts Unicode to UTF-8. If you simply write content from a string, the API may reject it with content errors. The solution is to stream the content through a TempBlob with explicit UTF-8 encoding:
local procedure BuildContent(Payload: JsonObject): HttpContent
var
    Content: HttpContent;
    Headers: HttpHeaders;
    TempBlob: Codeunit "Temp Blob";
    InS: InStream;
    OutS: OutStream;
begin
    TempBlob.CreateOutStream(OutS, TextEncoding::UTF8);
    Payload.WriteTo(OutS); // serialize the JSON straight into the UTF-8 stream
    TempBlob.CreateInStream(InS, TextEncoding::UTF8);
    Content.WriteFrom(InS);
    Content.GetHeaders(Headers);
    Headers.Remove('Content-Type');
    Headers.Add('Content-Type', 'application/json');
    exit(Content);
end;
This approach of writing to an OutStream with explicit TextEncoding::UTF8 and then reading back from an InStream works reliably every time, whereas the direct string approach can fail with both Azure and ChatGPT APIs.
JSON Mode
The codeunit also supports getting structured JSON responses through the GetJson() function. It handles the common case where LLMs wrap their JSON responses in markdown code blocks:
procedure GetJson(): JsonToken
var
    T: JsonToken;
    Result: Text;
    Parts: List of [Text];
begin
    _JsonObjectMode := true;
    if TryCall(Result) then begin
        if Result.Contains('```json') then begin
            // Strip the markdown fence: after splitting on '```', part 2 is
            // 'json' followed by the payload, so skip the first four characters.
            Parts := Result.Split('```');
            T.ReadFrom(Parts.Get(2).Substring(5));
        end else
            T.ReadFrom(Result);
    end else
        error(GetLastErrorText);
    _JsonObjectMode := false;
    exit(T);
end;
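A usage sketch, in the style of the earlier examples (the prompt and the "answer" field are made up for illustration; JsonResult and T are JsonToken variables):

ai.Setup(Enum::"AI Provider"::ChatGPTOpenAI, 'https://api.openai.com/v1/chat/completions', secretKey);
ai.AddSystem('Answer only with a JSON object of the form {"answer": <number>}.');
ai.AddUser('5+6');
JsonResult := ai.GetJson();
if JsonResult.AsObject().Get('answer', T) then
    message(format(T.AsValue().AsInteger()));

Because GetJson() hands back a JsonToken rather than raw text, the caller can pick fields out of the response without re-parsing.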
Running Local LLMs with LM Studio
Perhaps the most interesting provider option is LM Studio, which lets you run LLMs locally on your own machine. Erik demonstrates switching to LM Studio with just a URL change and no API key needed:
ai.Setup(Enum::"AI Provider"::LMStudio, 'http://10.1.40.131:1234/v1/chat/completions', '');
ai.AddUser('5+6');
message(ai.GetText());
Erik notes that while he doesn’t have the horsepower to run large models, he can “run very small ones slowly.” He shows several models available in his LM Studio installation including Gemma, Qwen 2.5, DeepSeek, and Llama.
One important tip for running LM Studio with Business Central in Docker: you must enable “Serve on Local Network” in LM Studio. Since Business Central runs in a Docker container, it cannot access localhost on the host machine. Instead, you need to use the host machine’s actual IP address and enable network serving so the Docker container can reach it.
The Full Extension Structure
The example extension is minimal by design. The app.json targets Business Central runtime 15.0 with application version 26.0:
{
    "id": "75037b7e-4ce9-41b1-b038-8805e479b733",
    "name": "AIPlayground",
    "publisher": "Default Publisher",
    "version": "1.0.0.0",
    "runtime": "15.0",
    "application": "26.0.0.0",
    "idRanges": [
        {
            "from": 50100,
            "to": 50149
        }
    ]
}
The demo triggers the AI call from a page extension on the Customer List, making it easy to test with a single page load:
pageextension 50100 CustomerListExt extends "Customer List"
{
    trigger OnOpenPage();
    var
        ai: Codeunit AI;
    begin
        ai.Setup(Enum::"AI Provider"::LMStudio, 'http://10.1.40.131:1234/v1/chat/completions', '');
        ai.AddSystem('You are a very rude personal assistant, whenever you get a chance, try to answer the question, but with an insult, preferably in French');
        ai.AddUser('Are you busy?');
        message(ai.GetText());
    end;
}
Summary
Erik’s AI playground codeunit is a practical, no-frills tool for experimenting with different AI providers from within Business Central. At just a couple hundred lines of code, it abstracts away the differences between Azure OpenAI, ChatGPT’s API, and local LLMs via LM Studio, letting you focus on exploring use cases rather than wrestling with HTTP details. Key takeaways:
- The codeunit handles provider-specific differences in authentication and payload format transparently
- Use the TempBlob UTF-8 encoding workaround for reliable content handling
- LM Studio is a great option for local experimentation — just remember to enable “Serve on Local Network” when running BC in Docker
- Switching between models and providers is as easy as changing a couple of parameters
- Both text and JSON response modes are supported
The code is available on GitHub via the link in the video description. Erik also mentions wanting to add Claude as a future provider, and the enum-based architecture makes that a straightforward addition.