Dynamics 365 Business Central: introducing the new Table Key module

One of the announced features for the Dynamics 365 Business Central 2024 Wave 1 release is the possibility to enable or disable indexes via AL code. For that purpose (at the moment unfortunately only available for on-premises scenarios), Microsoft is introducing a new AlterKey method in the Database object that permits you to select a non-clustered index and enable or disable it.

The new syntax will be the following (a short usage sketch follows the parameter list):

Database.AlterKey(KeyToDisable, Enable);

where:

  • KeyToDisable is the reference to the non-clustered index.
  • Enable is a Boolean indicating whether the index must be enabled (true) or disabled (false).
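As a concrete example, here is a minimal sketch (mine, not from the official documentation) with a hypothetical helper that disables all secondary indexes of a table. It assumes that the key reference passed to AlterKey is a KeyRef obtained from a RecordRef and that the first key of the table is the primary (clustered) key:

local procedure DisableSecondaryKeys(TableNo: Integer)
    var
        RecRef: RecordRef;
        KeyToDisable: KeyRef;
        i: Integer;
    begin
        RecRef.Open(TableNo);
        // Key 1 is the primary (clustered) key, so start from the second key.
        // Remember that only non-unique, non-clustered indexes can be disabled.
        for i := 2 to RecRef.KeyCount() do begin
            KeyToDisable := RecRef.KeyIndex(i);
            // false = disable; the index is rebuilt at the end of the transaction
            // or when it is explicitly re-enabled.
            Database.AlterKey(KeyToDisable, false);
        end;
    end;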

AlterKey allows for disabling (and re-enabling) indexes in the context of the current transaction, and all non-unique, non-clustered indexes can be disabled with it.

Please remember that at the end of the transaction, or after an explicit re-enabling, the index will be rebuilt. This feature does not allow the permanent disabling (or enabling) of indexes through AL code.

Together with this new method, a new Table Key module in the System Application is introduced.

This new module (previewed by Mads some days ago) introduces a new Table Key codeunit providing functionality for easily disabling and re-enabling table indexes. In more detail, the provided methods (both scoped to OnPrem) are the following, with a small usage sketch after the list:

  • procedure DisableAll(TableNo: Integer): Boolean : Disables all keys on the provided table. The method returns true if the keys were disabled successfully, false otherwise.
  • procedure EnableAll(TableNo: Integer): Boolean : Re-enables all keys that have been disabled on the provided table. The method returns true if the keys were re-enabled successfully, false otherwise.
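Since both methods return a Boolean, it is a good idea to check the result before starting the bulk operation. A minimal sketch of mine (assuming the codeunit is simply named "Table Key", as the module name suggests, and using a hypothetical helper procedure) could look like this:

local procedure DisableKeysOrFail(TableNo: Integer)
    var
        TableKey: Codeunit "Table Key";
    begin
        // DisableAll returns false if the keys could not be disabled,
        // so stop here instead of running the bulk operation anyway.
        if not TableKey.DisableAll(TableNo) then
            Error('The keys on table %1 could not be disabled.', TableNo);
    end;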

Why are these methods a good addition in terms of performance?

Disabling keys before bulk table write operations can significantly improve performance. I suggest reading this post by Mads on the topic.

An example of usage of this feature is the following. Imagine that I need to delete records from a very big Change Log Entry table (many millions of records), and I want to delete all logs related to the Item Ledger Entry table.

P.S. If you’re in an on-premises environment, you can go to the SQL table and do a TRUNCATE operation, but this is out of scope here.

To do that in AL, you need to execute the following code:

local procedure DeleteChangeLogEntry()
    var
        ChangeLogEntry: Record "Change Log Entry";
    begin
        ChangeLogEntry.SetRange("Table No.", Database::"Item Ledger Entry");
        ChangeLogEntry.DeleteAll();
    end;

But in a table with millions of records, this code takes time…

With the new Table Key module, you will be able to do something like the following:

local procedure DeleteChangeLogEntry()
    var
        ChangeLogEntry: Record "Change Log Entry";
        TableKey: Codeunit "Table Key";
    begin
        // Disable all keys on the Change Log Entry table before the bulk delete
        TableKey.DisableAll(Database::"Change Log Entry");
        ChangeLogEntry.SetRange("Table No.", Database::"Item Ledger Entry");
        ChangeLogEntry.DeleteAll();
        // Re-enable (rebuild) the keys once the delete is done
        TableKey.EnableAll(Database::"Change Log Entry");
    end;

Here I disable all the table’s non-clustered indexes just before the record deletion, and then I re-enable them at the end of the process. This greatly improves the performance of the delete operation!

What could be a possible problem with the above code?

If you check the code, I haven’t deleted the entire Change Log Entry table, but only the records that belong to the Item Ledger Entry table. If this filter leaves many records in the Change Log Entry table (records that don’t satisfy the filter), then when you re-enable the indexes with:

TableKey.EnableAll(Database::"Change Log Entry");

this operation can take time, because re-enabling (rebuilding) an index is a compute-intensive operation and its duration grows proportionally with the number of records in the table.

So this code:

local procedure DeleteChangeLogEntry()
    var
        ChangeLogEntry: Record "Change Log Entry";
        TableKey: Codeunit "Table Key";
    begin
        // Disable all keys, delete every record in the table, then rebuild the keys
        TableKey.DisableAll(Database::"Change Log Entry");
        ChangeLogEntry.DeleteAll();
        TableKey.EnableAll(Database::"Change Log Entry");
    end;

where all records in the N-million-row Change Log Entry table are deleted, can be faster than the previous code. Just keep this in mind if you need to use this feature in the future…

I personally love this feature, and I have also wished for ages that a native way to execute a TRUNCATE TABLE operation via AL were available. But remember that, unfortunately, this feature will initially be released only for on-premises scenarios.

1 Comment

  1. It is not as useful as it seems.

    Firstly – how often are bulk operations on really large tables done in any ERP environment? How often would you delete the entire Change Log Entry table, and one that has a few million entries in it?

    Secondly – even with disabled keys, deleting millions of rows in one go would hit another huge performance bottleneck – transaction log expansion.

    Third – even for smaller operations, rebuilding an index afterwards would lock the entire table. The operation would be fast, but the speed would be ‘compensated’ by the time during which the table is not accessible for any write (and if a table gets large quickly, it means there are many writes to it…). I’m not sure whether BC can issue an online index rebuild, and if so it would only work on the Enterprise version; but even then, an online rebuild is not the fastest operation and requires a lot of log space.

    On top of it – If this only works on prem… Why bother with AL when direct SQL access is possible…

    Overall – it is good to have such a tool in the tool belt, but, imho, it only deserves a lukewarm ‘like’ rather than ‘love’
