Join me as I brainstorm/prototype an idea for a variable-length record system in Business Central and AL. Instead of needing to create hundreds of staging tables, what if a single table could contain everything? Check out the video:

In this video, Erik explores an experimental concept for handling staging tables in Business Central using AL. Instead of creating hundreds of individual tables for data imports, he prototypes a single “document-style” table that stores records as JSON objects — essentially bringing a NoSQL-like approach to AL development. He walks through the prototype code, demonstrates how fields are dynamically accessed, and discusses the trade-offs involved.
The Problem: Too Many Staging Tables
When working on data migration or integration projects, it’s common to need staging tables — intermediate tables where imported data lands before being processed into proper Business Central records. The challenge Erik faces across multiple projects is the sheer volume: data from potentially 100, 200, or even 300 different source tables needs to be imported into Business Central.
Creating a dedicated AL table for each source table is tedious, hard to maintain, and wasteful — especially when the staging tables are temporary by nature. Erik wants an efficient, flexible staging area that doesn’t require individual table definitions for every data source, so he can spend his time on what actually matters: the business logic that transforms staged data into proper BC records.
The Concept: One Table to Rule Them All
The core idea is a single table called “Document” that can hold any record from any source. The table structure is minimal:
- Table Key — identifies which logical table the record belongs to (e.g., “Vendor”, “Customer”)
- Row Key — the unique identifier for the record within that logical table
- Storage — the actual data, stored as a JSON object
This is essentially implementing a NoSQL document-store pattern inside AL. Erik notes that this is conceptually similar to how tables work in Power BI as well — you have records with flexible schemas rather than rigid column definitions.
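A minimal sketch of what such a table definition might look like in AL. Object IDs, field names, and field lengths here are illustrative assumptions, not taken from the video:

table 51100 Document
{
    fields
    {
        // Identifies the logical table this record belongs to (e.g. 'Vendor')
        field(1; "Table Key"; Code[50]) { Caption = 'Table Key'; }
        // Unique identifier for the record within that logical table
        field(2; "Row Key"; Code[100]) { Caption = 'Row Key'; }
        // Inline text storage for small records (3 x 2,048 = 6,144 characters)
        field(3; "Storage 1"; Text[2048]) { Caption = 'Storage 1'; }
        field(4; "Storage 2"; Text[2048]) { Caption = 'Storage 2'; }
        field(5; "Storage 3"; Text[2048]) { Caption = 'Storage 3'; }
        // Fallback for records whose JSON exceeds the inline text fields
        field(6; "Storage Blob"; Blob) { Caption = 'Storage Blob'; }
    }
    keys
    {
        key(PK; "Table Key", "Row Key") { Clustered = true; }
    }
}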
Storage Strategy: Text Fields vs. Blob
For storage, Erik implements a dual approach based on record size. If the JSON representation of a record fits within approximately 6,144 characters (with a caveat about Unicode potentially requiring half that threshold due to byte-width considerations), the data is stored directly in text fields on the record. This keeps the data on the same SQL Server page and avoids the overhead of an extra lookup.
If the JSON exceeds that threshold, it falls back to blob storage. The blob approach works but requires an additional read operation, making it slightly less performant for large records.
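A hedged sketch of what the dual write path could look like as a procedure on the document table. The three-way text-field split and the 6,144-character threshold mirror the description above; the names and the exact split are assumptions:

local procedure StoreJson(JsonText: Text)
var
    OutStr: OutStream;
begin
    Clear("Storage Blob");
    if StrLen(JsonText) <= 6144 then begin
        // Small record: split across three 2,048-character text fields,
        // keeping the data on the same SQL Server page as the record
        "Storage 1" := CopyStr(JsonText, 1, 2048);
        "Storage 2" := CopyStr(JsonText, 2049, 2048);
        "Storage 3" := CopyStr(JsonText, 4097, 2048);
    end else begin
        // Large record: fall back to blob storage (one extra read on access)
        "Storage Blob".CreateOutStream(OutStr, TextEncoding::UTF8);
        OutStr.WriteText(JsonText);
    end;
end;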
Overloading Built-in Functions
One interesting discovery Erik shares is that AL allows you to overload built-in functions. He creates a custom Insert procedure on the document table that accepts a table key, a row key, and a JSON object.
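Such an overload might look like the following sketch (the StoreJson helper and the parameter lengths are assumptions):

procedure Insert(TableKey: Code[50]; RowKey: Code[100]; Object: JsonObject): Boolean
var
    JsonText: Text;
begin
    Init();
    "Table Key" := TableKey;
    "Row Key" := RowKey;
    Object.WriteTo(JsonText);
    StoreJson(JsonText); // hypothetical helper choosing text fields vs. blob
    exit(Insert(true));  // delegate to the built-in Insert with run trigger
end;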
This means the table now has four insert variants available: the standard parameterless insert, insert with run trigger, insert with run trigger and system ID, and the new custom overload that accepts structured data. This makes the API feel natural and AL-like.
Accessing Fields Dynamically
The real magic is in how fields are read back from the stored JSON. Erik implements functions like FieldText and FieldDecimal that take a field name as a parameter and extract the corresponding value from the JSON object.
Internally, a LoadValues function handles lazy loading: it checks whether the JSON object for the current record has already been deserialized. If the loaded row key differs from the current record’s key (because the user navigated to a different record), it reloads from either the text fields or the blob, depending on where the data was stored.
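A sketch of how these accessors might be implemented, assuming two global variables on the table object (LoadedObject: JsonObject; LoadedRowKey: Code[100]) and a hypothetical GetStoredJson helper that reads from either the text fields or the blob:

procedure FieldText(FieldName: Text): Text
var
    Token: JsonToken;
begin
    LoadValues();
    if LoadedObject.Get(FieldName, Token) then
        exit(Token.AsValue().AsText());
    exit('');
end;

local procedure LoadValues()
begin
    // Lazy loading: skip deserialization if this record is already loaded
    if LoadedRowKey = "Row Key" then
        exit;
    LoadedObject.ReadFrom(GetStoredJson());
    LoadedRowKey := "Row Key";
end;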
On a page, this means columns can be defined like this — calling Rec.FieldText('Name') or Rec.FieldDecimal('Outstanding Orders') — and the appropriate value is extracted from the JSON on the fly. Erik demonstrates adding a new “Outstanding Orders” decimal column to the test page in just a few lines of code, and it works immediately.
Consuming the Data
Erik demonstrates how a consumer codeunit would process the staged data. The pattern is straightforward and reads almost like normal AL record processing:
procedure ImportVendors()
var
    Vendor: Record Vendor;
    Doc: Record Document;
begin
    Doc.SetRange("Table Key", 'Vendor');
    if Doc.FindSet() then
        repeat
            Vendor.Init();
            Vendor."No." := Doc.FieldText('Number');
            Vendor.Name := Doc.FieldText('Name');
            Vendor.Insert();
        until Doc.Next() = 0;
end;
While the syntax isn’t identical to standard AL record access, it’s close enough to be immediately readable and understandable to any AL developer.
Importing Data
The import process accepts a JSON file containing an array of records. Erik iterates through the JSON array, extracts each token as a JSON object, and calls the custom insert function. A request page allows the user to specify the table key and which field in the JSON should serve as the row key.
The beauty of this approach is that it doesn’t care about the structure or number of fields in the JSON. Whether a record has 5 fields or 50, the import process is the same single line: pass the table key, the row key value, and the entire JSON object.
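The import loop might look like this sketch, using the custom insert function the article describes (procedure and parameter names are assumptions):

local procedure ImportRecords(TableKey: Code[50]; RowKeyField: Text; JsonText: Text)
var
    Doc: Record Document;
    Records: JsonArray;
    Token: JsonToken;
    KeyToken: JsonToken;
    Obj: JsonObject;
begin
    Records.ReadFrom(JsonText);
    foreach Token in Records do begin
        Obj := Token.AsObject();
        // The user-designated field supplies the row key for each record
        Obj.Get(RowKeyField, KeyToken);
        Doc.Insert(TableKey, CopyStr(KeyToken.AsValue().AsText(), 1, 100), Obj);
    end;
end;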
Sorting and Filtering
One of the main challenges with this approach is filtering. You can’t easily filter on values buried inside a JSON structure at the SQL level. Erik’s solution is to designate specific fields from the JSON as “sorting fields” that get promoted to actual AL table fields.
A helper codeunit remembers which JSON field should be the sorting field for each table key (e.g., “City” for vendors). During insert, the designated field’s value is extracted from the JSON and stored in a real AL field that has a key defined on it. This makes that field available for filtering and sorting through normal BC mechanisms.
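The promotion step during insert might look roughly like this. The "Sorting Field Setup" codeunit and the "Sorting Value" field are assumed names for the helper and the keyed AL field described above:

local procedure PromoteSortingField(Object: JsonObject)
var
    SortingSetup: Codeunit "Sorting Field Setup"; // hypothetical helper
    Token: JsonToken;
begin
    // Look up the designated JSON field for this table key (e.g. 'City')
    if Object.Get(SortingSetup.GetSortingField("Table Key"), Token) then
        "Sorting Value" := CopyStr(Token.AsValue().AsText(), 1, MaxStrLen("Sorting Value"));
end;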
Erik also uses the caption class system to dynamically set the column header for the sorting field, so it displays “City” (or whatever the designated field name is) rather than a generic label.
Handling Schema Variations
A particularly compelling benefit of this approach is handling schema variations across source system versions. If you’re importing data from a product at version 18.5, and that product had a slightly different record layout at version 18.4 or will have changes in 18.6, traditional staging tables would require maintenance every time the schema changes.
With the JSON document approach, schema changes are largely irrelevant to the staging layer. New fields simply appear in the JSON, and missing fields simply aren’t there. The consuming code can check whether a field exists before trying to read it, handling version differences gracefully without any table schema changes.
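In consuming code, that existence check might look like this sketch, where HasField is a hypothetical helper wrapping JsonObject.Contains:

Vendor."No." := Doc.FieldText('Number');
Vendor.Name := Doc.FieldText('Name');
// Field introduced in a newer source version: read it only if present
if Doc.HasField('Payment Terms Code') then
    Vendor."Payment Terms Code" := Doc.FieldText('Payment Terms Code');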
Modifying Records
Erik also sketches out how modification would work, implementing SetFieldText, SetFieldDate, and SetFieldDecimal functions that either replace an existing field value in the JSON object or add a new one:
procedure SetFieldText(FieldName: Text; Value: Text)
begin
    if Object.Contains(FieldName) then
        Object.Replace(FieldName, Value)
    else
        Object.Add(FieldName, Value);
end;
The modify function would then serialize the updated JSON object back to storage, also updating the sorting field if applicable.
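That write-back step might be sketched as follows (ModifyDocument, StoreJson, and PromoteSortingField are assumed names):

procedure ModifyDocument()
var
    JsonText: Text;
begin
    Object.WriteTo(JsonText);    // serialize the updated in-memory JSON object
    StoreJson(JsonText);         // re-store into text fields or blob
    PromoteSortingField(Object); // keep the promoted sorting field in sync
    Modify(true);
end;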
Generic List Pages
Erik briefly touches on the idea of creating more generic list pages that could display any document type. The concept involves defining a comma-separated list of field names for a given table key, and then dynamically populating five or six columns on the page with those fields.
The limitation here is that all dynamically rendered columns would have to be text type (and therefore left-justified), since column types in AL are determined at compile time. There’s no way to dynamically change a column’s type at runtime without resorting to unconventional workarounds.
Bonus: Source Code — Adding a Field to G/L Entry
As a separate example of AL development techniques, the source code provided shows how to add a custom “Accounting Period” field to the G/L Entry table. This is a more traditional AL extension pattern that demonstrates table extensions, page extensions, and event subscribers:
tableextension 51200 "My G/L Entry" extends "G/L Entry"
{
    fields
    {
        field(51200; AccountingPeriod; Integer)
        {
            Caption = 'Accounting Period';
        }
    }
    keys
    {
        key(AP; AccountingPeriod)
        {
        }
    }
}
The accounting period is calculated based on 28-day periods within the year:
codeunit 51200 "Odd Accounting"
{
    Permissions = tabledata "G/L Entry" = RM;

    internal procedure UpdateAccountingPeriods()
    var
        Entry: Record "G/L Entry";
    begin
        if Entry.FindSet() then
            repeat
                Entry.AccountingPeriod := CalcPeriod(Entry."Posting Date");
                Entry.Modify();
            until Entry.Next() = 0;
    end;

    local procedure CalcPeriod(PostingDate: Date): Integer
    var
        DayOfYear: Integer;
        Period: Integer;
    begin
        // Days elapsed since January 1 of the posting year
        DayOfYear := PostingDate - DMY2Date(1, 1, Date2DMY(PostingDate, 3));
        Period := DayOfYear div 28 + 1;
        // Fold the final one or two days of the year into period 13
        if Period = 14 then
            exit(13)
        else
            exit(Period);
    end;

    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Gen. Jnl.-Post Line", 'OnBeforeInsertGlEntry', '', true, true)]
    local procedure MyProcedure(var GLEntry: Record "G/L Entry")
    begin
        GLEntry.AccountingPeriod := CalcPeriod(GLEntry."Posting Date");
    end;
}
An event subscriber on OnBeforeInsertGlEntry automatically calculates the period for new entries, while a batch function can backfill existing entries.
Conclusion
Erik’s NoSQL-inspired document table approach for AL staging data offers a compelling solution to a common problem in Business Central data migration projects. Instead of creating and maintaining hundreds of rigid staging tables, a single flexible table stores records as JSON documents with dynamic field access. The trade-offs are real — you lose native filtering (mitigated by promoted sorting fields), type-safe columns on pages, and some performance on large records — but for the specific use case of temporary staging data that will be consumed and transformed by AL code, the simplicity and flexibility gains are significant. Combined with tools like BCCL for direct import, this pattern could dramatically reduce the overhead of data migration projects.