Handle any CSV file with the CSV Buffer

In this video, I show how to use the CSV Buffer to import an “unusual” CSV file.

https://youtu.be/cp91N4xFqlg

In this video, Erik demonstrates how to use the CSV Buffer table in AL to handle CSV files that don’t conform to standard formats — files with varying numbers of fields per line, different delimiters, and other quirks that make traditional XMLport-based approaches difficult. As a practical example, he builds a complete solution that reads a CSV file and creates Sales Invoices with lines from the parsed data.

Why Use the CSV Buffer?

You might wonder: why do we need a programmatic interface for CSV files when we already have XMLports that can read and write CSV? The answer lies in the chaotic nature of real-world CSV files.

A “CSV” file is supposedly a comma-separated values file, but even the “comma” part might not be true. It could be:

  • A TSV file (tab-separated values)
  • A semicolon-separated values file
  • Something else entirely

The only things we can really count on are that there’s some separator and that each line represents something. Beyond that, there are no rules. When you encounter CSV files where each line has a different number of fields — such as order files with a header section and a varying number of line items — an XMLport-based import quickly breaks down. That’s where the CSV Buffer comes in.

The CSV Buffer Table (Table 1234)

The CSV Buffer lives in the base application (though Erik notes it’s a prime candidate for being moved to the system application). The table has a beautifully simple structure with only three fields:

  • Line No. — identifies which line of the CSV file
  • Field No. — identifies which field within that line
  • Value — the actual content (Text[250])

The primary key is Line No., Field No., meaning each cell in the CSV file becomes one record — the same pattern used by the Excel Buffer.
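To make the one-record-per-cell layout concrete, here is a hedged sketch (the procedure name is illustrative, and the buffer is assumed to have been filled by LoadDataFromStream) that walks a loaded buffer in key order:

```al
// Sketch: every cell of the CSV file is one "CSV Buffer" record,
// keyed on (Line No., Field No.), so FindSet walks the file in
// reading order: line by line, field by field.
local procedure CountCells(var TempCSVBuffer: Record "CSV Buffer" temporary) Cells: Integer
begin
    if TempCSVBuffer.FindSet() then
        repeat
            Cells += 1;
        until TempCSVBuffer.Next() = 0;
end;
```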

Always Use Streams, Not Files

When looking at the CSV Buffer source code, you’ll find two sets of functions: file-based functions (scoped for on-prem only) and stream-based functions. Erik strongly recommends always using the stream version, even if you’re on-prem. The reason is encoding: the file-based version forces you into the default encoding (basic Unicode), so if your file is UTF-8 or uses another encoding with international characters, you won’t get correct results. The stream-based function lets you control the encoding yourself.
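As a sketch of the encoding point (the file path is illustrative), on-prem you can open the file yourself with an explicit TextEncoding and hand the resulting stream to the buffer, instead of letting the file-based functions pick the encoding for you:

```al
local procedure ImportUtf8Csv(var TempCSVBuffer: Record "CSV Buffer" temporary)
var
    CsvFile: File;
    InStr: InStream;
begin
    // Open with an explicit encoding rather than the default the
    // file-based CSV Buffer functions would force on you.
    CsvFile.Open('C:\Temp\orders.csv', TextEncoding::UTF8);
    CsvFile.CreateInStream(InStr);
    TempCSVBuffer.LoadDataFromStream(InStr, ',');
    CsvFile.Close();
end;
```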

Always Use It as a Temporary Table

The CSV Buffer should always be treated as a temporary table. If you look at the source code, there are functions that copy lines around, and without temporary usage you risk multi-user clashes. On a page, you set this with SourceTableTemporary = true. Erik notes this is another reason Microsoft should mark it as an always-temporary table when they eventually move it to the system application.
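The same rule applies outside a page. A minimal sketch (the object id and name are illustrative) of using the buffer as a temporary record in a codeunit:

```al
codeunit 50117 "CSV Import Demo"
{
    procedure Import(InStr: InStream)
    var
        // temporary: records live in memory only and never hit the
        // database, so concurrent users cannot clash.
        TempCSVBuffer: Record "CSV Buffer" temporary;
    begin
        TempCSVBuffer.LoadDataFromStream(InStr, ',');
        Message('Lines = %1', TempCSVBuffer.GetNumberOfLines());
    end;
}
```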

The Example CSV File

To demonstrate, Erik created a deliberately tricky CSV file that represents orders. Each line has a customer number, a date, and then a variable number of item/quantity pairs:

"10000","11/22/2021","1900-S",10,"1920-S",20,"1953-W",30
"20000","11/24/2021","1960-S",10,"1996-S",20
"30000","11/25/2021","2000-S",10,"1920-S",20,"1953-W",30,"1996-S",30
"40000","11/26/2021","1972-S",10,"2000-S",20,"1953-W",30

Notice that each line has a different number of fields — the first line has 8 fields, the second has 6, the third has 10, and the fourth has 8. This is exactly the kind of file that causes headaches with XMLports but is handled perfectly by the CSV Buffer.

Building the Solution

The Page Setup

The solution is built as a List page on the CSV Buffer table. The key page properties ensure the table is used as temporary:

page 50116 "CSV Test"
{
    ApplicationArea = All;
    Caption = 'CSV Test';
    PageType = List;
    SourceTable = "CSV Buffer";
    SourceTableTemporary = true;
    UsageCategory = Lists;

    layout
    {
        area(content)
        {
            repeater(General)
            {
                field("Line No."; Rec."Line No.")
                {
                    ApplicationArea = All;
                }
                field("Field No."; Rec."Field No.")
                {
                    ApplicationArea = All;
                }
                field(Value; Rec.Value)
                {
                    ApplicationArea = All;
                }
            }
        }
    }
}

Importing the CSV File

The Import action uses UploadIntoStream to get the file from the user, then calls LoadDataFromStream on the CSV Buffer, passing the stream and the separator character:

action(Import)
{
    Caption = 'Import';
    Image = Import;
    ApplicationArea = All;
    Promoted = true;
    PromotedCategory = Process;
    PromotedIsBig = true;
    PromotedOnly = true;
    trigger OnAction()
    var
        InS: InStream;
        FileName: Text;
    begin
        if UploadIntoStream('CSV File', '', '', FileName, InS) then begin
            Rec.LoadDataFromStream(InS, ',');
            Message('Lines = %1', Rec.GetNumberOfLines());
        end;
    end;
}

After importing, the page displays all parsed data — one record per cell. You can immediately see how many fields each line contains, confirming the CSV Buffer handles lines with different field counts perfectly.

Handling Quoted Values

The CSV Buffer doesn’t have built-in knowledge of text qualifiers (quotes), so the imported values include the surrounding double quotes. Erik handles this with a simple approach using TrimStart and TrimEnd in the OnAfterGetRecord trigger for display purposes:

trigger OnAfterGetRecord()
begin
    Rec.Value := Rec.Value.TrimEnd('"').TrimStart('"');
end;

Erik notes that if Microsoft moves this to the system application, adding a text qualifier parameter would be a worthwhile pull request.

Helper Functions for Data Access

To make the data easily accessible, Erik creates helper functions that retrieve typed values from specific positions in the CSV data:

procedure GetText(LineNo: Integer; FieldNo: Integer): Text
begin
    if Rec.Get(LineNo, FieldNo) then
        exit(Rec.Value.TrimEnd('"').TrimStart('"'));
end;

local procedure GetDate(LineNo: Integer; FieldNo: Integer): Date
var
    D: Date;
begin
    if Rec.Get(LineNo, FieldNo) then begin
        Evaluate(D, Rec.Value.TrimEnd('"').TrimStart('"'));
        exit(D);
    end;
end;

local procedure GetDecimal(LineNo: Integer; FieldNo: Integer): Decimal
var
    D: Decimal;
begin
    if Rec.Get(LineNo, FieldNo) then begin
        Evaluate(D, Rec.Value.TrimEnd('"').TrimStart('"'));
        exit(D);
    end;
end;

Note that in AL, if you don’t specify an exit value, the function returns the default for that type — blank for Text, zero for Integer/Decimal, etc. This provides safe fallback behavior.
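A minimal sketch of that behavior (an illustrative procedure, not from the video):

```al
// If control reaches the end of the procedure without exit(...),
// the caller receives the return type's default value:
// 0 for Integer/Decimal, '' for Text, 0D for Date, false for Boolean.
local procedure TryGetDecimal(Found: Boolean): Decimal
begin
    if Found then
        exit(42);
    // no exit here: the caller gets 0
end;
```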

Creating Sales Invoices from CSV Data

The Main Loop

The Create Orders action loops through each line and calls a procedure to create an invoice:

action(CreateOrders)
{
    Caption = 'Create Orders';
    Image = Import;
    ApplicationArea = All;
    Promoted = true;
    PromotedCategory = Process;
    PromotedIsBig = true;
    PromotedOnly = true;
    trigger OnAction()
    var
        LineNo: Integer;
    begin
        for LineNo := 1 to Rec.GetNumberOfLines() do begin
            CreateInvoice(LineNo);
        end;
    end;
}

Creating the Invoice Header

Each CSV line becomes one Sales Invoice. The header creation follows the pattern of emulating how a user would type — Initialize, Insert (to get a number), Validate fields, then Modify:

local procedure CreateInvoice(var LineNo: Integer)
var
    SH: Record "Sales Header";
begin
    SH.Init();
    SH."Document Type" := SH."Document Type"::Invoice;
    SH.Insert(true);
    SH.Validate("Sell-to Customer No.", GetText(LineNo, 1));
    SH.Validate("Document Date", GetDate(LineNo, 2));
    SH.Modify(true);
    CreateLines(LineNo, SH);
end;

An important gotcha Erik encountered: Init doesn’t clear primary key fields. Without a separate procedure, the same Sales Header variable keeps the “No.” assigned in the previous iteration, so the second Insert fails because that number already exists. By extracting the logic into its own procedure with a local variable, the Sales Header record is clean on every call.
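An alternative to extracting a procedure (a sketch of the same fix, not the approach used in the video) is to reset the record variable explicitly at the top of each iteration:

```al
// Sketch: reuse one Sales Header variable safely inside a loop.
local procedure CreateInvoiceInLoop(var SH: Record "Sales Header"; CustomerNo: Code[20])
begin
    Clear(SH);   // resets every field, including the primary key "No.",
                 // which Init() alone would leave untouched
    SH.Init();
    SH."Document Type" := SH."Document Type"::Invoice;
    SH.Insert(true);   // OnInsert assigns the next number from the series
    SH.Validate("Sell-to Customer No.", CustomerNo);
    SH.Modify(true);
end;
```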

Creating the Invoice Lines

Since the CSV has a variable number of item/quantity pairs starting at field 3, the line creation uses a repeat loop that increments by 2 (for each pair) and checks whether the field exists:

local procedure CreateLines(var LineNo: Integer; SH: Record "Sales Header")
var
    FieldNo: Integer;
    Done: Boolean;
begin
    FieldNo := 3;
    repeat
        if Rec.Get(LineNo, FieldNo) then begin
            CreateLine(LineNo, SH, FieldNo);
        end else
            Done := true;
        FieldNo += 2;
    until Done;
end;

Creating Individual Lines

Each line is created by initializing a Sales Line, setting the key fields, and then validating Type, No., and Quantity. A clever trick is used for line numbering — multiplying the field number by 10,000 ensures unique, well-spaced line numbers:

local procedure CreateLine(var LineNo: Integer; var SH: Record "Sales Header"; var FieldNo: Integer)
var
    SL: Record "Sales Line";
begin
    SL.Init();
    SL."Document Type" := SH."Document Type";
    SL."Document No." := SH."No.";
    SL."Line No." := 10000 * FieldNo;
    SL.Insert(true);
    SL.Validate(Type, SL.Type::Item);
    SL.Validate("No.", GetText(LineNo, FieldNo));
    SL.Validate(Quantity, GetDecimal(LineNo, FieldNo + 1));
    SL.Modify(true);
end;

The first item field (field 3) gets line number 30,000, the next pair (field 5) gets 50,000, and so on — leaving plenty of room for insertions between lines.

The Complete Source Code

Here’s the full page object bringing everything together:

page 50116 "CSV Test"
{
    ApplicationArea = All;
    Caption = 'CSV Test';
    PageType = List;
    SourceTable = "CSV Buffer";
    SourceTableTemporary = true;
    UsageCategory = Lists;

    layout
    {
        area(content)
        {
            repeater(General)
            {
                field("Line No."; Rec."Line No.")
                {
                    ApplicationArea = All;
                }
                field("Field No."; Rec."Field No.")
                {
                    ApplicationArea = All;
                }
                field(Value; Rec.Value)
                {
                    ApplicationArea = All;
                }
            }
        }
    }
    actions
    {
        area(Processing)
        {
            action(Import)
            {
                Caption = 'Import';
                Image = Import;
                ApplicationArea = All;
                Promoted = true;
                PromotedCategory = Process;
                PromotedIsBig = true;
                PromotedOnly = true;
                trigger OnAction()
                var
                    InS: InStream;
                    FileName: Text;
                begin
                    if UploadIntoStream('CSV File', '', '', FileName, InS) then begin
                        Rec.LoadDataFromStream(InS, ',');
                        Message('Lines = %1', Rec.GetNumberOfLines());
                    end;
                end;
            }
            action(CreateOrders)
            {
                Caption = 'Create Orders';
                Image = Import;
                ApplicationArea = All;
                Promoted = true;
                PromotedCategory = Process;
                PromotedIsBig = true;
                PromotedOnly = true;
                trigger OnAction()
                var
                    LineNo: Integer;
                begin
                    for LineNo := 1 to Rec.GetNumberOfLines() do begin
                        CreateInvoice(LineNo);
                    end;
                end;
            }
        }
    }

    procedure GetText(LineNo: Integer; FieldNo: Integer): Text
    begin
        if Rec.Get(LineNo, FieldNo) then
            exit(Rec.Value.TrimEnd('"').TrimStart('"'));
    end;

    local procedure GetDate(LineNo: Integer; FieldNo: Integer): Date
    var
        D: Date;
    begin
        if Rec.Get(LineNo, FieldNo) then begin
            Evaluate(D, Rec.Value.TrimEnd('"').TrimStart('"'));
            exit(D);
        end;
    end;

    local procedure GetDecimal(LineNo: Integer; FieldNo: Integer): Decimal
    var
        D: Decimal;
    begin
        if Rec.Get(LineNo, FieldNo) then begin
            Evaluate(D, Rec.Value.TrimEnd('"').TrimStart('"'));
            exit(D);
        end;
    end;

    local procedure CreateInvoice(var LineNo: Integer)
    var
        SH: Record "Sales Header";
    begin
        SH.Init();
        SH."Document Type" := SH."Document Type"::Invoice;
        SH.Insert(true);
        SH.Validate("Sell-to Customer No.", GetText(LineNo, 1));
        SH.Validate("Document Date", GetDate(LineNo, 2));
        SH.Modify(true);
        CreateLines(LineNo, SH);
    end;

    local procedure CreateLines(var LineNo: Integer; SH: Record "Sales Header")
    var
        FieldNo: Integer;
        Done: Boolean;
    begin
        FieldNo := 3;
        repeat
            if Rec.Get(LineNo, FieldNo) then begin
                CreateLine(LineNo, SH, FieldNo);
            end else
                Done := true;
            FieldNo += 2;
        until Done;
    end;

    local procedure CreateLine(var LineNo: Integer; var SH: Record "Sales Header"; var FieldNo: Integer)
    var
        SL: Record "Sales Line";
    begin
        SL.Init();
        SL."Document Type" := SH."Document Type";
        SL."Document No." := SH."No.";
        SL."Line No." := 10000 * FieldNo;
        SL.Insert(true);
        SL.Validate(Type, SL.Type::Item);
        SL.Validate("No.", GetText(LineNo, FieldNo));
        SL.Validate(Quantity, GetDecimal(LineNo, FieldNo + 1));
        SL.Modify(true);
    end;

    trigger OnAfterGetRecord()
    begin
        Rec.Value := Rec.Value.TrimEnd('"').TrimStart('"');
    end;
}

Key Takeaways

  • The CSV Buffer (Table 1234) is the “cousin” of the Excel Buffer and the sibling of the XML Buffer — all follow the same one-record-per-cell pattern.
  • Always use the stream-based functions (LoadDataFromStream) rather than file-based ones, even on-prem, to maintain control over encoding.
  • Always use the table as temporary (SourceTableTemporary = true) to avoid multi-user issues.
  • The CSV Buffer gracefully handles files with varying numbers of fields per line — something that’s very difficult with XMLports.
  • The Value field is Text[250], so keep that limitation in mind.
  • Quote handling must be done manually since the buffer doesn’t support text qualifier parameters (yet).
  • When creating documents programmatically, remember to emulate user input: Init → Insert (true) → Validate fields → Modify (true).

Summary

The CSV Buffer is a powerful and flexible tool for handling non-standard CSV files in Business Central. While XMLports work great for well-structured files, the CSV Buffer shines when you need to deal with the messy reality of real-world data — variable field counts, unusual separators, and other quirks. Combined with simple AL table operations, it provides a clean, code-first approach to CSV parsing that gives you complete control over the import process.