From the Wizard’s Lair, an Agentic Future!

There’s nothing like holiday idle time to experiment with technology, and this holiday I experimented with agents in Business Central: my own agents, that is. Check out the video:

https://youtu.be/GlbtdWWapWY

In this holiday-season video, Erik explores what an agentic future might look like for Business Central. Rather than just adding AI as a thin layer on top of a 30-year-old application, he envisions a fundamentally different interaction model — one where users simply tell an agent what they need, and the agent figures out how to make it happen. He demonstrates two compelling use cases: asking an agent to find and run reports using natural language, and asking an agent to generate and deploy AL code to add new fields and validations to Business Central — all from within the client itself.

The Agentic Past: A History Lesson

Erik begins with a fascinating historical observation: ERP systems used to be agentic. In the 1970s and the early mainframe era, if a user needed a report — say the top customer report — they wouldn’t navigate a menu system themselves. Instead, they’d make a request to a human operator. That operator would schedule the job on the mainframe, and when the job was done, someone would deliver the printout.

From the user’s perspective, this was a completely agentic experience. The operator served as the intelligent intermediary — the “agent” — who understood the user’s intent, translated it into something the computer could execute, and delivered the results. The operator knew that when you asked for “the biggest customers report,” you probably meant a specific report in the ERP system, regardless of its exact technical name.

This is precisely the kind of translation — from human intent to computer action — that AI is now positioned to perform.

Business Central Today: AI as a Thin Layer

Looking at Business Central as it stands today, Erik notes that the application is essentially the same experience it has been for six years (and in many ways, for 30 years). Yes, there’s a Copilot chat interface on the side. Yes, there are agents emerging at the top. Yes, AI has infused some suggested values here and there. But holistically, it’s still the same Business Central — the on-premises version is identical, just without the AI features.

Erik sketches the current state as roughly 95% traditional Business Central with about 5% AI and agents layered on top. He doesn’t envy Microsoft’s position — maintaining backwards compatibility with a decades-old application while trying to modernize is genuinely hard.

The Vision: A Truly Agentic Role Center

Setting aside the constraints of the current product, Erik presents his vision for what Business Central could look like in a truly agentic future. He’s built a prototype role center that has only two elements:

  1. An agent — a natural language interface where you describe what you need
  2. A work section — where results and outputs appear as the agent completes tasks

The idea is simple but profound: whatever you need to do, you ask the agent. If that request results in some output — a report, a document, a record — it shows up in your work section. Tasks could run in the foreground or be sent to the background, with results appearing whenever they’re ready.

Demo 1: Finding and Running Reports with Natural Language

In his first demonstration, Erik types “run the top 10 customer report” into the agent. There is no report in Business Central called “top 10 customer report” — the actual report is called “Customer Top 10 List.” But that’s a semantic distinction that the AI can resolve, just like a human operator would have done in the mainframe days.

The agent identifies the correct report, prompts for any needed parameters (filters, dates), and produces the result. Erik demonstrates this again by asking the agent to “run the revenue report” — another report that doesn’t exist by that name. The agent identifies that this is likely a financial report and locates the appropriate one.

How the Report Lookup Works

Under the hood, the agent uses a tool called get reports. When the AI calls this tool, it passes in whatever search text it has extracted from the user’s request. The AL code then performs a fuzzy search against Business Central’s report metadata. A sketch of that logic, reconstructed from the description in the video (the JSON serialization helper is illustrative):

procedure GetReports(SearchText: Text): Text
var
    ReportMetadata: Record "Report Metadata";
    SearchParts: List of [Text];
    SearchPart: Text;
    FilterGroupNo: Integer;
begin
    // Split the search text into individual words by space, dash, and slash.
    // For example, "top 10 customer" becomes three parts: "top", "10", "customer".
    SearchParts := SearchText.Split(' ', '-', '/');

    // Give each word its own filter group, starting in the hundreds range
    // to avoid touching the special built-in filter groups. The '@' prefix
    // makes the match case-insensitive.
    FilterGroupNo := 100;
    foreach SearchPart in SearchParts do begin
        ReportMetadata.FilterGroup(FilterGroupNo);
        ReportMetadata.SetFilter(Caption, '@*' + SearchPart + '*');
        FilterGroupNo += 1;
    end;
    ReportMetadata.FilterGroup(0);

    // Filters in separate filter groups combine with a logical AND, so a
    // report matches only if its caption contains every search term, in
    // any order: *top* and *10* and *customer*.

    // Processing-only reports are excluded, since we want actual
    // printable/viewable reports.
    ReportMetadata.SetRange(ProcessingOnly, false);

    // The matching results are returned as JSON back to the AI.
    exit(ReportListAsJson(ReportMetadata)); // illustrative helper
end;

The key insight is that the tool does not try to find the exact report. Instead, it returns a subset of reports that probably contains the right one. The AI then examines the results, identifies the best match (in this case, “Customer Top 10 List,” report ID 111), and makes a second tool call to get the report’s parameters so it can run it.
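The two-step exchange between the AI and the tools might look something like this (these JSON shapes are purely illustrative; the video doesn’t show the actual wire format of Erik’s framework):

```json
{
  "toolCall": { "tool": "get reports", "searchText": "top 10 customer" },
  "toolResult": [
    { "id": 111, "caption": "Customer Top 10 List" }
  ],
  "followUpCall": { "tool": "get report parameters", "reportId": 111 }
}
```

The AI picks the best match from the first result set, then asks for that report’s parameters before running it.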

Erik applies the same approach to financial reports. If the standard report search doesn’t find anything, the system returns all available financial reports to the AI, letting it determine which one matches the user’s intent.

The filter group technique is worth highlighting. Filters placed in separate filter groups combine with a logical AND, so stacking one search term per group means a report matches only when its name contains every term, regardless of word order. If you’re unfamiliar with filter groups in AL, this is a powerful pattern for building flexible search logic.

Demo 2: Vibe Coding — Adding Fields and Validations

Erik’s second demonstration is even more ambitious. He types into the agent: “add a sales email field to the customer table and card.”

The agent proceeds to:

  1. Write a customer table extension with a new email field (using extended data type)
  2. Write a customer card page extension to surface the field
  3. Compile the AL extension
  4. Deploy it to Business Central

When the first compilation attempt encounters an error, the agent autonomously fixes the issue, recompiles, and successfully deploys. Navigating to the customer card confirms the new “Sales Email” field is present.

Erik then pushes further: “add a validation so you can only type proper emails into the field.” The agent modifies the code to include email validation logic (using a regex pattern), recompiles, and redeploys. Testing the field confirms the validation is working — invalid email formats are rejected.
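Inside the table extension field, the validation could take a shape like this, using the System Application’s Regex codeunit (the exact pattern the agent generated isn’t shown in the video; this one is a common simplified email check):

```al
field(50140; "Sales Email"; Text[80])
{
    Caption = 'Sales Email';
    ExtendedDatatype = EMail;
    DataClassification = CustomerContent;

    trigger OnValidate()
    var
        Regex: Codeunit Regex;
    begin
        // Allow clearing the field; otherwise require a basic email shape.
        if ("Sales Email" <> '') and
           not Regex.IsMatch("Sales Email", '^[^@\s]+@[^@\s]+\.[^@\s]+$')
        then
            Error('%1 is not a valid email address.', "Sales Email");
    end;
}
```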

This entire flow leverages Erik’s existing components, including pieces that appear to be related to his Simple Object Designer app. Notably, the dev deployment here is faster than what the Simple Object Designer typically achieves.

The Source Code Foundation

The provided source code gives a glimpse of the AL project structure underpinning these experiments:

// app.json - Extension manifest
{
  "id": "123ee954-c08f-47c2-a55a-9440d0487199",
  "name": "Agents",
  "publisher": "Default Publisher",
  "version": "1.0.0.0",
  "platform": "1.0.0.0",
  "application": "26.0.0.0",
  "idRanges": [
    {
      "from": 50100,
      "to": 50149
    }
  ],
  "runtime": "15.0",
  "features": [
    "NoImplicitWith"
  ]
}

The project targets Business Central application version 26.0 with runtime 15.0, and reserves object IDs 50100–50149 for the agent-related objects. The starter AL file references several relevant namespaces:

namespace DefaultPublisher.Agents;

using Microsoft.Sales.Customer;
using System.Utilities;
using System.Agents;

The System.Agents namespace is particularly noteworthy — this is the newer agent framework in Business Central that Erik’s experiments build upon, alongside his own AI framework that provides the tool-calling infrastructure.

Agents vs. Structured Automation

Erik draws an important distinction between two flavors of agentic behavior in Business Central:

  • Structured agents — What Microsoft is currently building and what Erik’s own Simple Agent Designer app provides. These are flow-oriented: data comes in, gets processed through defined steps, and results come out. They’re organized, predictable, and well-suited for repetitive business processes.
  • Ad-hoc agents — What Erik demonstrates in this video. These respond to unstructured, on-demand requests: “I need something — agent, go figure it out.” This is closer to the original mainframe operator model, where the user expresses intent and the agent determines how to fulfill it.

Both models have their place, but the ad-hoc model represents a more fundamental reimagining of how users interact with an ERP system.

Conclusion

Erik’s holiday experiments paint a compelling picture of what Business Central could become when AI moves from being a 5% overlay to a fundamental interaction paradigm. The two demos — natural language report discovery and in-client vibe coding — illustrate different dimensions of the same vision: users express intent, and intelligent agents handle the translation to computer action.

The historical parallel to mainframe operators is apt. We’ve spent decades training users to navigate menus, memorize report names, and learn the specific vocabulary of their ERP systems. An agentic future flips that relationship — the system learns to understand the user instead. The key technical building blocks are already available: tool-calling frameworks, report metadata tables, filter groups for fuzzy searching, and dev deployment for rapid code generation. The question now is how far and how fast this vision can be realized within the constraints of a production ERP platform.