In this video, I demonstrate how effective Insert() can be if you use Bulk Insert. Check it out:

In this video, Erik demonstrates the significant performance difference between regular inserts and bulk inserts in AL for Business Central. He walks through a practical example showing how a simple coding pattern—wrapping an Insert() call in an if statement—can inadvertently disable bulk insert buffering and nearly triple your execution time. Understanding when and how bulk inserts work is key to writing performant AL code.
What SQL Server Loves (and Hates)
SQL Server thrives when it can optimize operations performed together within the same transaction or statement. What SQL Server hates is receiving one tiny task, waiting for it to complete, then receiving another tiny task, and waiting again. AL and Business Central actually support this perspective when it comes to inserts—you can insert data into the database in two different ways: one at a time, or via bulk inserts. The challenge is that it’s not always clear from looking at the code which approach is actually being used.
The Test Setup
To demonstrate the difference, Erik created a very simple table with three fields: an integer primary key and two Text[1000] data fields.
```al
table 50100 Bulk
{
    Caption = 'Bulk';
    DataClassification = ToBeClassified;

    fields
    {
        field(1; PK; Integer)
        {
            Caption = 'PK';
            DataClassification = ToBeClassified;
        }
        field(2; Data; Text[1000])
        {
            Caption = 'Data';
            DataClassification = ToBeClassified;
        }
        field(3; Data2; Text[1000])
        {
            Caption = 'Data2';
            DataClassification = ToBeClassified;
        }
    }

    keys
    {
        key(PK; PK)
        {
            Clustered = true;
        }
    }
}
```
He also created a list page with an action that inserts 10,000 records and measures the elapsed time:
```al
action(Insert)
{
    Caption = 'Insert';
    ApplicationArea = All;

    trigger OnAction()
    var
        B: Record Bulk;
        i: Integer;
        Start: DateTime;
    begin
        B.DeleteAll();
        Start := CurrentDateTime();
        for i := 1 to 10000 do begin
            B.Init();
            B.PK := i;
            B.Data := '...'; // large text data
            B.Data2 := '...'; // large text data
            B.Insert();
        end;
        Commit();
        Message('Total time for %1 = %2', i, CurrentDateTime() - Start);
    end;
}
```
The code starts by deleting all records to ensure a clean slate, records the start time, loops through 10,000 iterations performing Init(), assigning the primary key and data fields, then calls Insert(). After the loop, a Commit() is called and the elapsed time is displayed. The Commit() is important here—it ensures we’re measuring the actual database write time, not just the time to fill an insert buffer.
The Bulk Insert Result
Running this code with a plain B.Insert() call (no return value check), the result was approximately 1 second and 196 milliseconds to insert 10,000 records. Not bad at all.
Breaking Bulk Insert with a Simple if
Now here’s where things get interesting. Erik made one small change—wrapping the insert in an if statement to check its return value:
```al
if not B.Insert() then
    Error('Big trouble!!!!');
```
This is a pattern you’ll see frequently in AL code. It doesn’t look strange at all—you might even want to do an if not Insert then Modify pattern. But the performance impact is dramatic.
With this single change, the same 10,000 record insert took approximately 3 seconds and 152 milliseconds—nearly three times slower.
Why Does This Happen?
When you call B.Insert() without capturing the return value, Business Central can buffer the insert operations and send them to SQL Server in bulk. This is massively more efficient because SQL Server can optimize the entire batch.
As soon as you use the return value—if not B.Insert() then—you are forcing Business Central to execute each insert individually and wait for a response. The system must send the record to SQL Server, wait for confirmation of success or failure, and only then proceed to the next record. This row-by-row approach is exactly the pattern SQL Server hates.
Bulk Insert Constraints from Microsoft Documentation
Microsoft’s documentation on bulk inserts under SQL Server performance outlines several constraints. These apply equally whether you’re on-premises or in the cloud:
- **Using the return value from `Insert()`**: If you check the boolean return value, bulk insert is disabled (as we just demonstrated).
- **Flushing the buffer**: Buffered records are sent to the database when you:
  - Call `Commit()`
  - Perform a `Modify` or `Delete` on the same table
  - Call `FindFirst`, `FindLast`, `FindSet`, or any `Find('*')` variant on the same table
  - Call `CalcFields` or `CalcSums`
- **AutoIncrement primary keys**: Tables with auto-increment primary keys cannot use bulk inserts.
- **Blob fields**: If the table contains Blob fields, bulk inserts won't work.
A common pitfall Microsoft highlights is calling FindLast inside a loop to get the next entry number. This flushes the insert buffer on every iteration. Instead, you should call FindLast once before the loop and increment the counter manually within the loop.
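The recommended pattern can be sketched like this. The table name, field name, and procedure are illustrative, not from the video:

```al
// Sketch: assigning entry numbers without flushing the insert buffer.
// Calling FindLast() inside the loop would flush the buffer on every
// iteration; instead, read the last entry number once and increment manually.
procedure InsertEntries()
var
    Entry: Record "My Entry"; // illustrative table
    NextEntryNo: Integer;
    i: Integer;
begin
    if Entry.FindLast() then
        NextEntryNo := Entry."Entry No."
    else
        NextEntryNo := 0;

    for i := 1 to 10000 do begin
        NextEntryNo += 1;
        Entry.Init();
        Entry."Entry No." := NextEntryNo;
        Entry.Insert(); // no return value check, so bulk insert stays enabled
    end;
    Commit(); // flushes the buffered inserts to SQL Server
end;
```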
If you’re running on-premises, there’s also an option in customsettings.config on the server to disable bulk inserts entirely, though this is really only useful for debugging and troubleshooting.
Practical Advice
Erik shares a real-world example from his own work on the Simple Object Designer, where he downloads symbols for all objects and builds data structures from JSON. One of the inserts was wrapped in an if statement, and the process was slow. By removing the if around the insert and ensuring the data was cleaned up before the process ran, performance improved dramatically.
Consider this: if you need to insert 10,000 records but some might already exist, it might actually be faster to delete the existing records first and then do a clean bulk insert, rather than checking each insert with an if statement.
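Side by side, the two approaches might look like this (a sketch using the Bulk table from the test setup):

```al
// Slower: per-row existence handling uses the return value,
// which disables bulk insert buffering.
for i := 1 to 10000 do begin
    B.Init();
    B.PK := i;
    if not B.Insert() then
        B.Modify();
end;

// Often faster: clear the table first, then do a clean bulk insert.
B.DeleteAll();
for i := 1 to 10000 do begin
    B.Init();
    B.PK := i;
    B.Insert(); // plain call, no return value used, so inserts can be buffered
end;
Commit();
```

Whether the delete-first variant wins depends on how much existing data there is, so it is worth measuring in your own scenario.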
Design Considerations
- **Don't wrap inserts in `if` statements** unless they genuinely serve a purpose.
- **Think about table design**: Adding a Blob field to a high-traffic table will disable bulk inserts. Consider using `Media` or `MediaSet` fields instead.
- **Avoid touching the same table mid-loop**: Don't call `FindLast`, `Modify`, or `CalcFields` on the table you're inserting into within your insert loop.
- **Write clean code**: Keep your code structured so that if the system determines bulk insert will work, it can take advantage of it automatically.
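For the table design point, the difference is just the field's data type. A hypothetical field declaration (number and name are illustrative):

```al
// Storing an image as Media keeps the table eligible for bulk inserts.
field(10; Picture; Media)
{
    Caption = 'Picture';
}

// By contrast, a Blob field disables bulk inserts on the whole table:
// field(10; Picture; Blob) { Caption = 'Picture'; }
```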
Summary
Bulk inserts in Business Central can be nearly three times faster than individual inserts, and the difference comes down to subtle coding choices. A simple if not Record.Insert() then pattern—something that looks perfectly reasonable—forces row-by-row execution and kills performance. By understanding the constraints (no return value checks, no Blob fields, no auto-increment keys, no same-table reads mid-loop), you can write AL code that lets the system optimize your database operations automatically. Always think about whether your insert pattern supports bulk operations, especially when dealing with large data volumes.