In this video, I take another look at the state of AL in the world of AI. I use Claude Code to create an app and discuss the new role of the AL developer. Check out the video:

In this video, Erik demonstrates how AL developers can adapt and thrive in an AI-powered world by using AI coding assistants — specifically Claude Code — to build a fully functional chess game inside Business Central. The key message: you’re still the essential person, but your tools are changing. Instead of writing every line of code by hand, you may spend more time writing instructions, prompts, and plans while the AI handles the brute-force coding.
The Core Message: You Have to Change with the World
If you’ve been in the Dynamics/Business Central ecosystem for any length of time, you’ve already lived through major transitions — from the Classic client to the RoleTailored client, from C/AL to AL, from on-premises to cloud. Each time, the way you built solutions changed fundamentally. AI is the next evolution in that same continuum.
The key insight Erik shares is straightforward: you might write way less AL code in the future, and way more instructions and prompts. But at the end of the day, you’re still building software — you’re just using different tools to do it.
Why Chess? A Well-Known Problem vs. Real Customer Problems
Erik deliberately chose chess as his example, and he’s transparent about why it’s both a great and a completely unrealistic demonstration. Chess is a well-known, well-documented problem. There are likely hundreds of thousands of chess engine implementations across the internet, which means LLMs have enormous amounts of training data and reference material to draw from.
Your customers’ problems are the opposite. If you ask an AI to “build me a reservation system for some company,” you won’t get anything useful because that’s far too little information. The AI doesn’t know your customer’s specific business rules, edge cases, workflows, or integration requirements. It’s still your job to interpret the problem, design the solution, and make sure the AI knows what to do.
The Workflow: From Empty Project to Working Chess Game
Step 1: Initialize the Project
Erik started with an empty AL extension project — just the standard app.json and a Hello World file. He launched Claude Code in a terminal window alongside VS Code and ran the /init command to let Claude analyze the project structure.
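In practice, this step amounts to a couple of terminal commands. The session below is illustrative (project name invented); note that /init is typed at the Claude Code prompt, not in the shell:

```
cd MyChessExtension     # AL project containing app.json and a HelloWorld.al
claude                  # start Claude Code in the project folder
> /init                 # at the Claude prompt: analyze the project, create CLAUDE.md
```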
Claude created a CLAUDE.md file — essentially a project context file that describes what the project is, how to build and deploy it, and the architecture conventions. For an empty project, this was minimal, but it serves as the foundation for all subsequent interactions.
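For an AL project, the resulting CLAUDE.md might look roughly like this — the contents here are illustrative, not the exact file from the video:

```markdown
# CLAUDE.md

## Project
AL extension for Microsoft Dynamics 365 Business Central.

## Build & Deploy
- Build: Ctrl+Shift+B in VS Code (AL Language extension)
- Deploy: F5 publishes to the server configured in launch.json

## Conventions
- Object IDs stay within the range declared in app.json
- One object per .al file, named <ObjectName>.<ObjectType>.al
```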
Step 2: Define the Requirements in CLAUDE.md
Rather than jumping straight into prompting, Erik edited the CLAUDE.md file to establish the project’s purpose and architecture. He defined:
- A control add-in for the UI chessboard, allowing both user and computer moves
- Events triggered from the control add-in when the user wants to move a piece
- Procedures to accept moves and trigger computer responses
- A chess game engine as a codeunit — explicitly stating not to write the engine in JavaScript, keeping the logic in AL
- A target playing strength of roughly Elo 1000
- A host page for the control add-in where users can start games and resign
This is the critical architectural step that mirrors traditional software design — you’re establishing boundaries, responsibilities, and interfaces before any code gets written.
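In AL, those boundaries map naturally onto a controladdin object. A sketch of what that interface could look like follows; all names are illustrative, not the generated code:

```
controladdin ChessBoard
{
    Scripts = 'src/chessboard.js';
    StyleSheets = 'src/chessboard.css';

    // Raised by the JavaScript side when the board is ready
    // or the user wants to move a piece
    event ControlReady();
    event UserMoveRequested(FromSquare: Text; ToSquare: Text);

    // Called from the AL side to set up the board and play moves
    procedure InitializeBoard();
    procedure ApplyMove(FromSquare: Text; ToSquare: Text);
}
```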
Step 3: Let the AI Plan
Erik activated Claude’s plan mode with /plan and asked it to create a development plan. During planning, Claude performed web searches — looking up control add-in documentation, chess engine architecture, and even hitting community members’ blogs and GitHub repositories for AL-specific implementation patterns.
The resulting plan included:
- File structure: a page, a control add-in, and a codeunit
- Control add-in specifications: required procedures, events, and UI approach using Unicode symbols for pieces
- Chess engine architecture: board representation, move encoding, move generation, position evaluation, and move search algorithms
- A public API definition for the engine codeunit
- Host page design with game flow: user clicks, board ready, user moves, engine thinks
- Implementation order and verification steps
At this point, Claude presented the plan and offered options: accept it as-is, or edit it. Erik accepted, but he emphasized that the more realistic workflow would be to review and refine the plan — reordering steps, adding constraints, noting that a report shouldn’t run before a certain condition is met, or specifying additional fields needed on a table.
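A public API definition of the kind the plan describes could be sketched as a codeunit surface like this — hypothetical names and signatures, assuming coordinate notation such as 'e2e4':

```
codeunit 50101 "Chess Engine"
{
    // Reset the internal board state to the starting position.
    procedure NewGame()
    begin
        // ... initialize the board representation
    end;

    // Validate a user move against the generated legal moves, then apply it.
    procedure TryMakeMove(Move: Text): Boolean
    begin
        // ... move generation and validation
        exit(false); // placeholder
    end;

    // Run the search and position evaluation to pick the engine's reply.
    procedure GetBestMove(): Text
    begin
        // ... move search
        exit(''); // placeholder
    end;
}
```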
Step 4: Autonomous Implementation
After accepting the plan, Claude began building. It created the control add-in with HTML/CSS/JavaScript for the board rendering, the AL host page with proper event wiring, and a 1,200-line chess engine codeunit — all in approximately 9 minutes.
One of the advantages Erik highlighted about using Claude Code in a separate terminal (rather than exclusively within VS Code) is the ability to watch the agent work in parallel. Claude spawned multiple sub-tasks simultaneously — creating new files while updating existing ones, implementing the control add-in JavaScript while building the AL engine codeunit.
Step 5: Build and Test
The first compilation succeeded without errors. The chess board rendered correctly in Business Central, pieces were displayed, and the game was playable. Claude had gone out, found community implementations and best practices, and produced working code on the first try.
Iterating: Adding Features and Fixing Bugs
Requesting Changes
After the initial success, Erik committed the working code (he prefers to control Git commits himself rather than letting Claude manage them) and then requested two enhancements:
- Expand the UI with a FactBox containing move history
- Add a one-second delay before the computer makes its move, so the player can see the transition
Claude created a mini-plan for these changes: add a move history table, create a move history page, update the control add-in JavaScript with engine move delay, add move notation to the chess engine codeunit, and update the host page with the FactBox. It executed these tasks in parallel where possible.
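The move history part of that mini-plan could be satisfied by a table as small as this — an illustrative field layout, not the generated objects:

```
table 50102 "Chess Move History"
{
    fields
    {
        field(1; "Move No."; Integer) { }
        field(2; Side; Option)
        {
            OptionMembers = White,Black;
        }
        field(3; Notation; Text[10]) { }
    }

    keys
    {
        key(PK; "Move No.", Side) { Clustered = true; }
    }
}
```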
Finding and Fixing a Bug
After the changes compiled and deployed, Erik discovered a bug: after the computer made its first move, the board stopped responding to clicks. He reported the issue to Claude in plain language:
“After the first computer move, the board is not responding to my clicks on valid pieces to move. Please fix.”
Claude immediately identified the root cause: the waitingForEngine flag was being set to true when requesting the engine move but was never being cleared after the response came back. The old code had handled this correctly, but the new delay-related update had broken the flow. Claude fixed the issue, and the game worked correctly on the next run.
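Stripped to its essentials, this is a classic guard-flag bug. The sketch below is plain JavaScript with assumed names (the actual control add-in code was not shown in the video); the point is that the reply handler must clear the flag that the request path set:

```javascript
// Guard flag that gates user input while the engine is "thinking".
let waitingForEngine = false;

// Called when the user's move is sent to the engine; input is blocked
// until the reply arrives (after the newly added one-second delay).
function requestEngineMove() {
    waitingForEngine = true;
}

// Handler for the engine's reply. In the broken version this updated
// the board but never reset the flag, so every later click was ignored.
function onEngineMoveReceived() {
    waitingForEngine = false; // the fix: re-enable user input
}

// Returns whether a click on the board should be processed.
function handleSquareClick() {
    return !waitingForEngine;
}
```

Without that single reset line, the board appears frozen after the first engine move even though nothing has crashed, which matches the symptom Erik reported.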
The Source Code: AI Utility Codeunit
The project also includes an AI utility codeunit that demonstrates how to call various LLM providers (Azure OpenAI, ChatGPT, LM Studio) directly from AL. This is a standalone piece separate from the chess game, but it illustrates the broader theme of integrating AI capabilities into Business Central extensions:
codeunit 50105 AI
{
    var
        // Global variables inferred from the excerpt below; exact names
        // and types are assumptions, since only part of the codeunit is shown.
        CurrentProvider: Enum "AI Provider";
        _Key: Text;
        _URL: Text;
        _temperature: Decimal;
        _top_p: Decimal;
        _maxtokens: Integer;
        _System: List of [Text];
        _User: List of [Text];
        _JsonObjectMode: Boolean;

    procedure Setup(Provider: Enum "AI Provider"; URL: Text; AccessKey: Text)
    begin
        CurrentProvider := Provider;
        _Key := AccessKey;
        _URL := URL;
        _temperature := 0.7;
        _top_p := 0.95;
        _maxtokens := 4000;
    end;

    procedure AddSystem(Msg: Text)
    begin
        _System.Add(Msg);
    end;

    procedure AddUser(Msg: Text)
    begin
        _User.Add(Msg);
    end;

    procedure GetText(): Text
    var
        Result: Text;
    begin
        _JsonObjectMode := false;
        if TryCall(Result) then
            exit(Result)
        else
            Error(GetLastErrorText());
    end;

    // ... additional methods (TryCall, JSON responses, model selection, etc.)
}
The codeunit supports multiple AI providers through an enum:
enum 50100 "AI Provider"
{
    value(0; AzureOpenAI)
    {
        Caption = 'Azure Open AI';
    }
    value(1; ChatGPTOpenAI)
    {
        Caption = 'ChatGPT Open AI';
    }
    value(10; LMStudio)
    {
        Caption = 'LM Studio';
    }
}
And a page extension demonstrates how to use it — in this case, calling a local LM Studio instance with a custom system prompt:
pageextension 50100 CustomerListExt extends "Customer List"
{
    trigger OnOpenPage()
    var
        ai: Codeunit AI;
    begin
        ai.Setup(Enum::"AI Provider"::LMStudio,
            'http://10.1.40.131:1234/v1/chat/completions', '');
        ai.AddSystem('You are a very rude personal assistant, whenever you get a chance, ' +
            'try to answer the question, but with an insult, preferably in French');
        ai.AddUser('Are you busy?');
        Message(ai.GetText());
    end;
}
The Bigger Picture: Your Role Is Evolving, Not Disappearing
Erik draws a parallel to his own product, the Simple Object Designer, which he created several years ago. Some partners were initially upset because it allowed customers to build things themselves that would have previously been quoted at $30,000 or even six-figure amounts. But the Simple Object Designer doesn’t replace developers — it handles the simple, deterministic work (create a table, add fields, wire up pages) while developers focus on complex business logic and design.
AI coding assistants follow the same pattern at a much larger scale. The key principles Erik emphasizes:
- It’s your code and your responsibility. The commit has your name on it. You need to understand what was generated and be accountable for it.
- You are still the essential person. You understand the customer’s business, you know the right patterns for Business Central, you design the solution architecture.
- One-shot generation isn’t how you’d build for customers. The chess demo was impressive but unrealistic. Real projects require iterative refinement, domain expertise, and careful review.
- Control what matters. Erik keeps control of Git commits rather than letting the AI manage them, so he can review, revert, and understand every change.
- The AI doesn’t know your customer’s business. It can generate chess engines because that knowledge is everywhere. Building something that’s never been built before — that’s where your expertise is irreplaceable.
Practical Tips for Getting Started
- Write detailed plans. The more specific your requirements and architectural decisions, the better the AI output. Erik notes that in another project, he went into “painstaking detail” describing what he wanted, and the AI simply did the brute-force coding without designing anything itself.
- Use the plan-review-refine cycle. Don’t just accept the first plan. Edit it, reorder priorities, add constraints.
- Use CLAUDE.md (or equivalent) as your project context file. Define purpose, architecture, conventions, and constraints upfront.
- Keep the AI focused on well-defined tasks. Break complex problems into clear, bounded pieces of work.
- Explore MCP tools and extensions. Claude Code supports various plugins that can extend its capabilities — Erik hints at a project that exposes MCP tools for Claude Code to use.
Conclusion
The future for AL developers isn’t about being replaced by AI — it’s about evolving your workflow to leverage AI as a powerful tool. Just as you adapted from C/AL to AL, from classic client to web client, and from on-premises to cloud, you now need to embrace AI-assisted development. Your deep knowledge of Business Central, your understanding of customer requirements, and your ability to design robust solutions remain as valuable as ever. The difference is that the tedious work of translating those designs into thousands of lines of code is becoming dramatically faster. Embrace the change, stay in control of the process, and continue to add value where it matters most — understanding the problem and designing the right solution.