Using Machine Learning to talk with Business Central

Meet AL, my new personal assistant. AL lives inside Business Central and, with the help of some clever machine learning, he's able to understand what I'm saying to him and carry out my commands.

https://youtu.be/0g5wNKPo5Fs

In this video, Erik demonstrates a fascinating proof-of-concept project that lets you interact with Business Central using natural language voice commands. By combining speech recognition, Azure’s Language Understanding (LUIS) service, and AL code running inside Business Central, Erik builds a pipeline where you can ask questions like “What’s the credit limit of the Canon Group?” or even give commands like “Order seven bicycles for the Canon Group” — all by speaking aloud.

The Demo in Action

The video opens with Erik talking directly to Business Central through a voice assistant he’s named “AL”:

  • “Hey AL, what’s the credit limit of the Canon Group?” — AL responds: “Credit Limit LCY is 50,000 on customer The Canon Group PLC.”
  • “Hey AL, what is the address of London Postmaster?” — AL responds: “Address is 120 Wellington Road on customer London Postmaster.”
  • “Hey AL, how many bikes in inventory?” — AL responds: “Inventory is 39 on item Bicycle.”
  • “Hey AL, order two bikes for the London Postmaster.” — AL responds: “You cannot use a customer without a credit limit.” (Business logic enforced!)
  • “Hey AL, order seven bicycles, 14 rims, and 504 spokes for the Canon Group.” — AL responds: “Order 1020 has been created with a total of $24,692.50.”

Everything shown is real — no cheating, no pre-recorded responses. It’s a live interaction with a running Business Central instance.

The Architecture Pipeline

Erik walks through the full pipeline of how this works, from spoken word to Business Central action:

1. Wake Word Detection (Offline Speech Recognition)

A small C# program (around 134 lines) runs locally on the machine. It uses the built-in Windows speech recognition engine in an offline capacity. This engine has a very simple grammar library loaded — it only knows one word: “AL” (with its phonetic pronunciation). The program loops continuously, listening for the wake word “Hey AL.” It also uses a speech synthesizer (Microsoft Mark Mobile voice) to speak responses back.

2. Azure Speech Recognition (Online)

Once the wake word is detected, the program activates a second engine — an Azure-based speech recognition service. This streams whatever you say next up to Azure’s Speech Service, which converts your spoken words into text. When text comes back, the program calls a function called AskBusinessCentral.

3. Business Central (OData Web Service)

The AskBusinessCentral function calls a codeunit exposed as an OData web service in Business Central. This codeunit receives the transcribed text and passes it to the next stage.
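Erik's actual codeunit isn't shown in the video, but its shape is easy to sketch: a single procedure, published as an OData web service, that takes the transcribed text and returns the reply to speak. Everything below is a hypothetical reconstruction; the object number, names, and the "LUIS Client" helper are invented for illustration:

```al
// Hypothetical sketch: publish this codeunit as an OData web service so the
// local C# program can call Ask() with the transcribed sentence.
codeunit 50110 "Ask Business Central"
{
    procedure Ask(Question: Text): Text
    var
        Luis: Codeunit "LUIS Client"; // invented helper that calls LUIS and acts on the result
    begin
        // Forward the sentence to LUIS, execute the resolved intent,
        // and return the sentence the assistant should speak back.
        exit(Luis.AnalyzeAndExecute(Question));
    end;
}
```

Because the procedure returns a `Text`, the OData response carries the spoken reply straight back to the speech synthesizer on the client.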

4. LUIS (Language Understanding Intelligent Service)

Inside the AL codeunit, the transcribed text is sent to Microsoft’s LUIS service — the Language Understanding module in Azure. LUIS is the real magic here. It analyzes the text and returns a structured JSON response containing:

  • Intent — What is the user trying to do? (e.g., “find” information, “sell” something)
  • Entities — What are the components of the request? These include:
    • Field names (e.g., “credit limit,” “address,” “city”)
    • Table references (e.g., customer, vendor, item)
    • Numbers (e.g., 7, 14, 504)
    • Specific customers, vendors, or items
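For the order command in the demo, the JSON coming back from LUIS would look roughly like this. This is a hand-written illustration following the LUIS v2 response shape; the intent and entity type names depend on how the model was configured, and the score is invented:

```json
{
  "query": "order seven bicycles for the canon group",
  "topScoringIntent": { "intent": "sell", "score": 0.93 },
  "entities": [
    { "entity": "seven",       "type": "builtin.number", "resolution": { "value": "7" } },
    { "entity": "bicycles",    "type": "Item" },
    { "entity": "canon group", "type": "Customer" }
  ]
}
```

The AL code's job is then mechanical: read `topScoringIntent` to pick a branch, and walk the `entities` array to pair quantities with items and find the customer.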

5. Back to AL for Execution

Once the structured data comes back from LUIS, the AL code processes the intent. If the intent is “find,” it opens the appropriate table using a Record Reference, finds the specified field, reads the value, and constructs a response. If the intent is “sell,” it creates a sales order with the specified items and customer, returning the order number and total amount.
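The "find" branch can be sketched in AL along these lines. This is a hypothetical reconstruction, not Erik's code: LUIS supplies the table, the record name, and the field to read, and RecordRef/FieldRef keep the lookup generic across Customer, Vendor, and Item:

```al
// Hypothetical sketch of the "find" intent: open the table LUIS identified,
// locate the record by name, and read the requested field via FieldRef.
local procedure ExecuteFind(TableNo: Integer; NameFieldNo: Integer; RecordName: Text; FieldNo: Integer): Text
var
    RecRef: RecordRef;
    NameRef: FieldRef;
    FldRef: FieldRef;
begin
    RecRef.Open(TableNo);
    NameRef := RecRef.Field(NameFieldNo);
    NameRef.SetFilter('@*' + RecordName + '*'); // case-insensitive partial match
    if not RecRef.FindFirst() then
        exit(StrSubstNo('I could not find %1.', RecordName));
    FldRef := RecRef.Field(FieldNo);
    if FldRef.Class = FieldClass::FlowField then
        FldRef.CalcField(); // FlowFields like Balance (LCY) must be calculated first
    exit(StrSubstNo('%1 is %2 on %3 %4.',
        FldRef.Caption, Format(FldRef.Value), RecRef.Caption, Format(NameRef.Value)));
end;
```

Note how closely the return format matches the responses in the demo ("Credit Limit LCY is 50,000 on customer The Canon Group PLC"), which is what makes a generic RecordRef approach plausible here.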

Training the LUIS Model

Erik shows the LUIS portal where he’s configured the model. The key components are:

Intents

Erik has created intents like “find” (for querying information) and “sell” (for creating orders). Each intent has example utterances — sample sentences that people might say. For example, the “find” intent includes variations like “What is the city of…” and “Show me the balance of customer 10000.”

Entities

The entities are populated from Business Central itself. Erik has built an export function in AL that generates lists of:

  • Customers — including synonyms (e.g., “Canon Group PLC” can also be referenced as just “Canon” or “Canon Group”)
  • Items — with their various names
  • Vendors — similarly with synonyms
  • Fields — including synonym mappings (e.g., “credit limit” maps to “Credit Limit LCY”)
  • Tables — what tables the system knows about
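An export of this kind can be sketched as AL that walks the Customer table and emits LUIS list-entity entries. This is a hypothetical sketch: `canonicalForm` and `list` are the key names LUIS uses for closed-list sublists in its app JSON, while `AddShortForms` is an invented helper:

```al
// Hypothetical sketch: emit each customer as a LUIS closed-list entry whose
// synonyms include shortened forms of the name.
local procedure ExportCustomers(var Sublists: JsonArray)
var
    Customer: Record Customer;
    Entry: JsonObject;
    Synonyms: JsonArray;
begin
    if Customer.FindSet() then
        repeat
            Clear(Entry);
            Clear(Synonyms);
            Entry.Add('canonicalForm', Customer.Name);
            // e.g. "The Canon Group PLC" -> "Canon Group", "Canon"
            AddShortForms(Customer.Name, Synonyms); // invented helper
            Entry.Add('list', Synonyms);
            Sublists.Add(Entry);
        until Customer.Next() = 0;
end;
```

Regenerating and re-uploading these lists whenever master data changes is what keeps LUIS able to recognize the actual customers, vendors, and items in the database.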

Training in Real-Time

Erik demonstrates training live. He types “What is the city of Lewis” (a vendor), and LUIS returns a 69% confidence score, correctly identifying “city” as a field and “Lewis” as a vendor. By confirming this as a correct example of the “find” intent, the model gets retrained and becomes more confident with similar queries. After publishing the updated model, subsequent queries benefit from the improved understanding.

The AL Source Code

The code shown in the video predates the source files provided here; Erik has since evolved the project into a more general-purpose AI integration framework. Here's the core AI codeunit that handles communication with various AI providers:

codeunit 50105 AI
{
    var
        // Global state shared by the procedures below
        // (declarations inferred from how the fields are used)
        CurrentProvider: Enum "AI Provider";
        _URL: Text;
        _Key: Text;
        _LastResult: Text;
        _temperature: Decimal;
        _top_p: Decimal;
        _maxtokens: Integer;
        _System: List of [Text];
        _User: List of [Text];
        _JsonObjectMode: Boolean;

    procedure Setup(Provider: Enum "AI Provider"; URL: Text; AccessKey: Text)
    begin
        CurrentProvider := Provider;
        _Key := AccessKey;
        _URL := URL;
        _temperature := 0.7;
        _top_p := 0.95;
        _maxtokens := 4000;
    end;

    procedure AddSystem(Msg: Text)
    begin
        _System.Add(Msg);
    end;

    procedure AddUser(Msg: Text)
    begin
        _User.Add(Msg);
    end;

    procedure GetText(): Text
    var
        Result: Text;
    begin
        _JsonObjectMode := false;
        if TryCall(Result) then
            exit(Result)
        else
            Error(GetLastErrorText());
    end;

    // ... additional methods for temperature, top_p, max tokens, model selection
}

The codeunit supports multiple AI providers through an enum:

enum 50100 "AI Provider"
{
    value(0; AzureOpenAI)
    {
        Caption = 'Azure Open AI';
    }
    value(1; ChatGPTOpenAI)
    {
        Caption = 'ChatGPT Open AI';
    }
    value(10; LMStudio)
    {
        Caption = 'LM Studio';
    }
}

The TryCall procedure handles the HTTP communication, sending a JSON payload with system and user messages to whichever provider is configured, then parsing the response to extract the AI’s reply from the choices array:

[TryFunction]
procedure TryCall(var Result: Text)
var
    Client: HttpClient;
    Request: HttpRequestMessage;
    Response: HttpResponseMessage;
    Headers: HttpHeaders;
    ResponseTxt: Text;
    ResponseJson: JsonObject;
    Choices: JsonArray;
    Choice: JsonObject;
    T: JsonToken;
begin
    Request.SetRequestUri(_URL);
    Request.Method('POST');
    Request.GetHeaders(Headers);
    case CurrentProvider of
        CurrentProvider::AzureOpenAI:
            if _Key <> '' then
                Headers.Add('api-key', _Key);
        CurrentProvider::ChatGPTOpenAI:
            if _Key <> '' then
                Headers.Add('Authorization', 'Bearer ' + _Key);
    end;
    Headers.Add('User-Agent', 'Business Central');
    Request.Content(BuildContent(BuildPayload()));
    Client.Timeout(300000);

    if Client.Send(Request, Response) then begin
        Response.Content().ReadAs(ResponseTxt);
        if Response.IsSuccessStatusCode() then begin
            ResponseJson.ReadFrom(ResponseTxt);
            if ResponseJson.Get('choices', T) then begin
                Choices := T.AsArray();
                if Choices.Get(0, T) then begin
                    Choice := T.AsObject();
                    if Choice.Get('message', T) then
                        if T.AsObject().Get('content', T) then begin
                            Result := T.AsValue().AsText();
                            _LastResult := Result;
                        end;
                end;
            end;
        end;
    end;
end;
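`BuildPayload` isn't shown above, but given the fields set in `Setup` and the two message lists, it plausibly looks something like this sketch (a reconstruction, not the original code):

```al
// Hypothetical sketch of BuildPayload: assemble the chat-completions JSON body
// from the collected system and user messages plus the sampling parameters.
local procedure BuildPayload(): Text
var
    Payload: JsonObject;
    Messages: JsonArray;
    Msg: JsonObject;
    S: Text;
    Result: Text;
begin
    foreach S in _System do begin
        Clear(Msg);
        Msg.Add('role', 'system');
        Msg.Add('content', S);
        Messages.Add(Msg);
    end;
    foreach S in _User do begin
        Clear(Msg);
        Msg.Add('role', 'user');
        Msg.Add('content', S);
        Messages.Add(Msg);
    end;
    Payload.Add('messages', Messages);
    Payload.Add('temperature', _temperature);
    Payload.Add('top_p', _top_p);
    Payload.Add('max_tokens', _maxtokens);
    Payload.WriteTo(Result);
    exit(Result);
end;
```

Because Azure OpenAI, OpenAI, and LM Studio all accept the same chat-completions message format, the provider differences collapse into the endpoint URL and the authentication header seen in `TryCall`.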

And here’s a fun example of how it’s wired up — a page extension on the Customer List that uses the AI codeunit with a local LM Studio instance and a playful system prompt:

pageextension 50100 CustomerListExt extends "Customer List"
{
    trigger OnOpenPage();
    var
        ai: Codeunit AI;
    begin
        ai.Setup(Enum::"AI Provider"::LMStudio, 'http://10.1.40.131:1234/v1/chat/completions', '');
        ai.AddSystem('You are a very rude personal assistant, whenever you get a chance, try to answer the question, but with an insult, preferably in French');
        ai.AddUser('Are you busy?');
        message(ai.GetText());
    end;
}

Practical Applications

Erik emphasizes that while this is a proof-of-concept, the architecture opens up real possibilities:

  • Smart speakers — The same backend could power an Amazon Echo or Google Home skill that talks to Business Central
  • Chatbots — Build a chat interface where customers can ask about their accounts, check inventory, or place orders
  • Hands-free operation — Warehouse workers or other users who can’t easily use a keyboard could interact with the system by voice
  • Business logic enforcement — As demonstrated, the system respects Business Central’s validation rules (like the credit limit check)

Conclusion

This project showcases how multiple technologies — offline speech recognition, Azure Speech Services, LUIS for natural language understanding, and AL code in Business Central — can be woven together into a compelling voice interface for an ERP system. The LUIS service is the unsung hero here: while image recognition and other ML capabilities tend to get the spotlight, language understanding is incredibly powerful for business applications. The fact that you can train it with examples, upload entity lists from your actual Business Central data, and have it parse complex multi-item orders with high accuracy makes it a genuinely useful tool. Whether this evolves into a chatbot, a voice assistant integration, or simply remains a fascinating experiment, it demonstrates the expanding possibilities at the intersection of AI and business applications.