# The Complete Guide to kbmUnitTest
This is the final part of a 6-part series. Parts 1–5 covered the testing framework and mock data system individually.
## A Complete Test Project
Now let us bring everything together into a realistic test project that exercises production code with mock data, runs under TestInsight during development, produces JUnit XML for CI, and detects memory leaks.
### The DPR
```pascal
program OrderSystemTests;

{$APPTYPE CONSOLE}
{$STRONGLINKTYPES ON}

uses
  FastMM5,                     // memory manager — must be first
  kbmTestFastMM5,              // enriched leak detection (auto-registers)
  kbmTestFramework,
  kbmTestRunner,
  kbmTestInsight,
  kbmTestDataSet,
  kbmUnitTest.Mock,
  kbmUnitTest.Mock.Generators,
  kbmUnitTest.Mock.DataSet,
  // Test data registration
  Test.MockData in 'Test.MockData.pas',
  // Test fixtures
  Test.OrderValidation in 'Test.OrderValidation.pas',
  Test.OrderProcessing in 'Test.OrderProcessing.pas',
  Test.OrderReporting in 'Test.OrderReporting.pas';

begin
  {$IFDEF TESTINSIGHT}
  RunTestInsight;
  {$ELSE}
  RunTestsForCI('test-results.xml');
  {$ENDIF}
end.
```
### The Mock Data Unit
A dedicated unit registers all scenarios. This keeps test data separate from test logic:
```pascal
unit Test.MockData;

interface

implementation

uses
  kbmUnitTest.Mock,
  kbmUnitTest.Mock.Generators;

initialization
  // -------------------------------------------------------
  // Customer scenarios
  // -------------------------------------------------------
  TkbmMockRegistry.Scenario('customer_standard')
    .Field('CustomerID', 1001)
    .Field('Name', 'Alice Johnson')
    .Field('Email', 'alice@example.com')
    .Field('Country', 'DK')
    .Field('VATNumber', 'DK12345678')
    .Field('Active', True)
    .Field('CreditLimit', 10000.00);

  TkbmMockRegistry.Scenario('customer_vip')
    .InheritsFrom('customer_standard')
    .Field('CustomerID', 1002)
    .Field('Name', 'Bob Enterprise A/S')
    .Field('CreditLimit', 100000.00);

  TkbmMockRegistry.Scenario('customer_suspended')
    .InheritsFrom('customer_standard')
    .Field('CustomerID', 1003)
    .Field('Name', 'Charlie Bankrupt Ltd')
    .Field('Active', False)
    .Field('CreditLimit', 0.0);

  // -------------------------------------------------------
  // Order line items
  // -------------------------------------------------------
  TkbmMockRegistry.Scenario('order_lines')
    .AddRow
      .Field('ProductID', 101)
      .Field('ProductName', 'Widget Pro')
      .Field('Quantity', 5)
      .Field('UnitPrice', 29.95)
    .EndRow
    .AddRow
      .Field('ProductID', 102)
      .Field('ProductName', 'Gadget Lite')
      .Field('Quantity', 2)
      .Field('UnitPrice', 49.99)
    .EndRow
    .AddRow
      .Field('ProductID', 103)
      .Field('ProductName', 'Service Pack')
      .Field('Quantity', 1)
      .Field('UnitPrice', 199.00)
    .EndRow;

  // -------------------------------------------------------
  // Generator-backed scenario for fuzz testing
  // -------------------------------------------------------
  TkbmMockRegistry.SetSeed(42);
  TkbmMockRegistry.Scenario('random_order')
    .Field('OrderID', Gen.Sequential(10000))
    .Field('Customer', Gen.OneOf(['Alice', 'Bob', 'Charlie']))
    .Field('Total', Gen.FloatRange(10.0, 5000.0))
    .Field('Currency', Gen.OneOf(['EUR', 'USD', 'GBP', 'DKK']))
    .Field('CreatedAt', Gen.DateRelative(-90));

  TkbmMockRegistry.Seal;
end.
```
## Testing with Mock Data — Practical Patterns

### Pattern 1: Testing Business Logic with AsRecord
Your production code operates on records and objects. Mock data feeds them:
```pascal
unit Test.OrderValidation;

interface

uses
  kbmTestFramework,
  kbmUnitTest.Mock,
  OrderTypes,       // your production TOrderLine record
  OrderValidation;  // your production Validate function

type
  [TestFixture('Order.Validation')]
  [Category('Unit')]
  TTestOrderValidation = class
  public
    [Test('Active customer passes validation')]
    procedure TestActiveCustomerValid;
    [Test('Suspended customer fails validation')]
    procedure TestSuspendedCustomerFails;
    [Test('Order total within credit limit passes')]
    procedure TestCreditLimitOK;
    [Test('Order total exceeding credit limit fails')]
    procedure TestCreditLimitExceeded;
  end;

implementation

procedure TTestOrderValidation.TestActiveCustomerValid;
var
  LCustomer: TCustomerRec;
  LResult: TValidationResult;
begin
  LCustomer := TkbmMockRegistry.Get('customer_standard')
    .AsRecord<TCustomerRec>;
  LResult := TOrderValidator.ValidateCustomer(LCustomer);
  Assert.IsTrue(LResult.IsValid);
  Assert.That(LResult.ErrorMessage).IsEmpty;
end;

procedure TTestOrderValidation.TestSuspendedCustomerFails;
var
  LCustomer: TCustomerRec;
  LResult: TValidationResult;
begin
  LCustomer := TkbmMockRegistry.Get('customer_suspended')
    .AsRecord<TCustomerRec>;
  LResult := TOrderValidator.ValidateCustomer(LCustomer);
  Assert.IsFalse(LResult.IsValid);
  Assert.That(LResult.ErrorMessage).Contains('suspended');
end;

procedure TTestOrderValidation.TestCreditLimitOK;
var
  LCustomer: TCustomerRec;
  LLines: TArray<TOrderLineRec>;
begin
  LCustomer := TkbmMockRegistry.Get('customer_standard')
    .AsRecord<TCustomerRec>;
  LLines := TkbmMockRegistry.Get('order_lines')
    .AsList<TOrderLineRec>;
  // Total = 5*29.95 + 2*49.99 + 1*199.00 = 448.73
  // Credit limit = 10000 — well within bounds
  Assert.IsTrue(TOrderValidator.CheckCreditLimit(LCustomer, LLines));
end;

procedure TTestOrderValidation.TestCreditLimitExceeded;
var
  LCustomer: TCustomerRec;
  LLines: TArray<TOrderLineRec>;
begin
  LCustomer := TkbmMockRegistry.Get('customer_suspended')
    .AsRecord<TCustomerRec>;  // credit limit = 0
  LLines := TkbmMockRegistry.Get('order_lines')
    .AsList<TOrderLineRec>;
  Assert.IsFalse(TOrderValidator.CheckCreditLimit(LCustomer, LLines));
end;

end.
```
### Pattern 2: Testing Database Code with AsDataSet
When your production code expects a TDataSet — whether it works with TClientDataSet, TkbmMemTable, a kbmMW dataset, or any other descendant — you can materialize the scenario as a dataset. AsDataSet returns a TClientDataSet; if your code under test requires a TkbmMemTable or kbmMW dataset instead, you can copy the data across or load the TkbmMemTable directly from the scenario’s fields:
```pascal
unit Test.OrderReporting;

interface

uses
  kbmTestFramework,
  kbmTestDataSet,
  kbmUnitTest.Mock,
  kbmUnitTest.Mock.DataSet,
  Data.DB,
  Datasnap.DBClient;

type
  [TestFixture('Order.Reporting')]
  [Category('Unit')]
  TTestOrderReporting = class
  public
    [Test('Report calculates correct line totals')]
    procedure TestLineTotals;
    [Test('Report produces correct grand total')]
    procedure TestGrandTotal;
  end;

implementation

uses
  OrderReporting; // your production TOrderReport class

procedure TTestOrderReporting.TestLineTotals;
var
  LDS: TClientDataSet;
  LReport: TOrderReport;
begin
  LDS := TkbmMockRegistry.Get('order_lines').AsDataSet;
  try
    LReport := TOrderReport.Create(LDS);
    try
      ThatDataSet(LReport.ResultSet)
        .IsActive
        .HasRowCount(3)
        .HasField('LineTotal')
        .AtRow(0).FieldEquals('LineTotal', 5 * 29.95, 0.01)
        .AtRow(1).FieldEquals('LineTotal', 2 * 49.99, 0.01)
        .AtRow(2).FieldEquals('LineTotal', 1 * 199.00, 0.01);
    finally
      LReport.Free;
    end;
  finally
    LDS.Free;
  end;
end;

procedure TTestOrderReporting.TestGrandTotal;
var
  LDS: TClientDataSet;
  LReport: TOrderReport;
begin
  LDS := TkbmMockRegistry.Get('order_lines').AsDataSet;
  try
    LReport := TOrderReport.Create(LDS);
    try
      Assert.AreEqual(448.73, LReport.GrandTotal, 0.01);
    finally
      LReport.Free;
    end;
  finally
    LDS.Free;
  end;
end;

end.
```
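The copy-across approach mentioned above can be sketched as follows. This is a sketch only: `LoadFromDataSet` and the `mtcpo*` copy options belong to the kbmMemTable library, so verify the exact signature against your installed kbmMemTable version.

```pascal
// Sketch: feed a scenario to production code that requires a TkbmMemTable
// rather than a TClientDataSet. Assumes kbmMemTable's LoadFromDataSet
// with copy options; check the signature in your kbmMemTable version.
var
  LSource: TClientDataSet;
  LTable: TkbmMemTable;
begin
  LSource := TkbmMockRegistry.Get('order_lines').AsDataSet;
  try
    LTable := TkbmMemTable.Create(nil);
    try
      // Copy structure and data from the materialized scenario.
      LTable.LoadFromDataSet(LSource, [mtcpoStructure]);
      // ... hand LTable to the code under test ...
    finally
      LTable.Free;
    end;
  finally
    LSource.Free;
  end;
end;
```

The same shape works for kbmMW datasets: materialize the scenario once, then copy into whatever `TDataSet` descendant the code under test expects.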
### Pattern 3: Fuzz Testing with Generators
Generator-backed scenarios let you run the same test with varied inputs:
```pascal
[Test('Random orders do not crash the processor')]
procedure TestRandomOrdersNoCrash;
var
  I: Integer;
  LScenario: TkbmMockScenario;
begin
  LScenario := TkbmMockRegistry.Get('random_order');
  for I := 1 to 100 do
  begin
    Assert.WillNotRaise(
      procedure
      var
        LOrderID: Integer;
        LTotal: Double;
      begin
        LOrderID := LScenario.GetFieldValue('OrderID').AsInteger;
        LTotal := LScenario.GetFieldValue('Total').AsExtended;
        TOrderProcessor.QuickValidate(LOrderID, LTotal);
      end);
  end;
end;
```
Because we set TkbmMockRegistry.SetSeed(42), this test is fully deterministic — if it fails on the 73rd iteration, it will fail on the 73rd iteration every time.
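When you do want variation between CI runs, a common approach is to randomize the seed but log it, so any failing run can be replayed locally with the same data. A sketch using `SetSeed` from above; `GetEnvironmentVariable`, `Randomize`, and `Random` are standard Delphi RTL, while the `KBM_TEST_SEED` variable name is made up for this example:

```pascal
// Sketch: use a fixed seed when KBM_TEST_SEED is set (reproduction run),
// otherwise pick a random one and log it. KBM_TEST_SEED is a hypothetical
// environment variable name chosen for illustration.
var
  LSeedText: string;
  LSeed: Integer;
begin
  LSeedText := GetEnvironmentVariable('KBM_TEST_SEED');
  if LSeedText <> '' then
    LSeed := StrToInt(LSeedText)
  else
  begin
    Randomize;
    LSeed := Random(MaxInt);
  end;
  WriteLn('Mock data seed: ', LSeed); // log it so a failure can be replayed
  TkbmMockRegistry.SetSeed(LSeed);
end;
```

To reproduce a failure, set `KBM_TEST_SEED` to the logged value and rerun.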
## The Publish/Require Pattern
Some tests produce data that other tests depend on. The mock data system supports this through [TestScenarioPublish] and [TestScenarioRequire] attributes, combined with runtime publishing:
```pascal
[TestFixture]
TTestOrderPipeline = class
public
  [Test]
  [TestScenarioPublish('processed_order')]
  procedure TestProcessOrder;

  [Test]
  [TestScenarioRequire('processed_order')]
  procedure TestShipOrder;
end;

procedure TTestOrderPipeline.TestProcessOrder;
var
  LOrder: TOrderRec;
begin
  // Process the order...
  LOrder.OrderID := 1001;
  LOrder.Status := 'Processed';
  LOrder.Total := 448.73;

  // Publish the result for dependent tests
  TkbmMockRegistry.Publish<TOrderRec>('processed_order', LOrder);

  Assert.AreEqual('Processed', LOrder.Status);
end;

procedure TTestOrderPipeline.TestShipOrder;
var
  LOrder: TOrderRec;
begin
  LOrder := TkbmMockRegistry.Get('processed_order')
    .AsRecord<TOrderRec>;
  Assert.AreEqual('Processed', LOrder.Status);
  Assert.AreEqual(1001, LOrder.OrderID);
end;
```
The framework’s dependency graph ensures that TestProcessOrder runs before TestShipOrder. If the publisher fails, the dependent test is skipped with a clear message.
`PublishObject<T>` works similarly for class instances:
```pascal
TkbmMockRegistry.PublishObject<TOrderReport>('order_report', LReport);
```
## Diagnostics
The kbmUnitTest.Mock.Diagnostics unit provides introspection tools:
### Near-miss Detection
If a test tries to Get a scenario name that does not exist, the diagnostics system can suggest similarly-named scenarios (typo detection):
```pascal
uses
  kbmUnitTest.Mock.Diagnostics;

var
  LMisses: TArray<TkbmMockNearMiss>;
begin
  TkbmMockRegistry.EnableAccessLog(True);
  // ... run tests ...
  LMisses := TkbmMockDiagnostics.FindNearMisses(2);
  // Returns entries like: 'custmer_alice' -> did you mean 'customer_alice'?
end;
```
### Unused Scenario Detection
Find scenarios that were registered but never accessed by any test:
```pascal
var
  LUnused: TArray<string>;
begin
  LUnused := TkbmMockDiagnostics.FindUnusedScenarios;
  // ['obsolete_test_data', 'legacy_scenario_3']
end;
```
### Usage Statistics
See which scenarios are most heavily used:
```pascal
var
  LStats: TArray<TkbmMockUsageInfo>;
begin
  LStats := TkbmMockDiagnostics.GetUsageStats;
  for var S in LStats do
    WriteLn(Format('%s: accessed %d times by %d test methods',
      [S.ScenarioName, S.AccessCount, Length(S.AccessingMethods)]));
end;
```
### Full Report
GenerateReport produces a complete text summary:
```pascal
WriteLn(TkbmMockDiagnostics.GenerateReport);
```
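On CI it can be useful to persist that report as a build artifact instead of printing it. A minimal sketch: `TFile.WriteAllText` is from the standard `System.IOUtils` unit, and the output file name is an arbitrary choice for this example.

```pascal
uses
  System.IOUtils,
  kbmUnitTest.Mock.Diagnostics;

// Sketch: write the diagnostics summary next to the JUnit XML so the
// CI job can archive both. 'mock-diagnostics.txt' is just an example name.
TFile.WriteAllText('mock-diagnostics.txt', TkbmMockDiagnostics.GenerateReport);
```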
## Building a CI/CD Pipeline
A complete CI integration looks like this:
### The DPR (CI mode)
```pascal
begin
  {$IFDEF TESTINSIGHT}
  RunTestInsight;
  {$ELSE}
  RunTestsForCI('test-results.xml', // XML output
                '',                 // all categories
                True,               // verbose
                True,               // detect leaks
                lrmFailure);        // leaks = failure
  {$ENDIF}
end.
```
### GitHub Actions Example
```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build test project
        run: |
          msbuild OrderSystemTests.dproj /p:Config=Release /p:Platform=Win64

      - name: Run tests
        run: Win64\Release\OrderSystemTests.exe

      - name: Publish test results
        if: always()
        uses: dorny/test-reporter@v1
        with:
          name: kbmUnitTest Results
          path: test-results.xml
          reporter: java-junit
```
The exit code tells the CI system whether to fail the build: 0 = all passed, 1 = failures, 2 = fatal error.
## Custom Loggers
The IkbmTestLogger interface is simple enough to implement for any reporting needs:
```pascal
type
  TMySlackLogger = class(TInterfacedObject, IkbmTestLogger)
  public
    procedure OnBeginRun;
    procedure OnEndRun(const AResults: TArray<TkbmTestResult>);
    procedure OnBeginFixture(const AFixtureName: string);
    procedure OnEndFixture(const AFixtureName: string);
    procedure OnTestResult(const AResult: TkbmTestResult);
  end;

procedure TMySlackLogger.OnEndRun(const AResults: TArray<TkbmTestResult>);
var
  LFailed: Integer;
begin
  LFailed := 0;
  for var R in AResults do
    if R.Status in [tsFail, tsError] then
      Inc(LFailed);
  if LFailed > 0 then
    PostToSlack(Format('🔴 %d tests failed!', [LFailed]))
  else
    PostToSlack('✅ All tests passed.');
end;
```
Add it to the runner alongside the built-in loggers:
```pascal
LRunner.AddLogger(TkbmConsoleTestLogger.Create(True));
LRunner.AddLogger(TkbmJUnitXMLLogger.Create('results.xml'));
LRunner.AddLogger(TMySlackLogger.Create);
```
## Best Practices
- **One concern per fixture.** Group related tests in a fixture. Name fixtures after the thing they test, not the test technique.
- **Name scenarios descriptively.** Use `customer_vip`, `order_empty_cart`, `product_out_of_stock` — not `data1`, `data2`. Prefixes help avoid collisions between teams: `billing_customer_standard`, `shipping_address_domestic`.
- **Keep test data close to tests.** A dedicated `Test.MockData.pas` unit (or a few organized by domain) is easier to maintain than scenarios scattered across 30 test units.
- **Use inheritance to reduce duplication.** A “base” scenario with common fields and specialized scenarios that override only what differs.
- **Use categories for speed.** Tag fast unit tests as `'Unit'` and slow integration tests as `'Integration'`. Run only `'Unit'` during development; run everything on CI.
- **Enable leak detection early.** Set `lrmWarning` during development, `lrmFailure` on CI. The retry mechanism handles false positives from lazy initialization.
- **Commit your test data.** JSON fixture files should be in version control alongside your tests. They are as much a part of the project as the source code.
- **Use the wizard for initial setup, then maintain by hand.** The Mock Data Wizard is great for bootstrapping scenarios from type declarations or database queries. Once generated, the code is yours to edit.
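The leak-detection practice above can be expressed directly in the DPR, assuming your CI build defines a conditional. The `CI` define name is an assumption (set it from your build script, e.g. via a dproj configuration or `msbuild` define); the `RunTestsForCI` parameters are the ones shown earlier in this article.

```pascal
// Sketch: warn about leaks during development, fail the build on CI.
// The CI conditional define is an assumption; define it in your CI build
// configuration so local builds keep the softer lrmWarning mode.
{$IFDEF CI}
  RunTestsForCI('test-results.xml', '', True, True, lrmFailure); // leaks fail the run
{$ELSE}
  RunTestsForCI('test-results.xml', '', True, True, lrmWarning); // leaks only warn
{$ENDIF}
```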
## Quick Reference — Unit Map
| Unit | When to use |
|---|---|
| `kbmTestFramework` | Always — attributes, `Assert`, fluent API |
| `kbmTestRunner` | Always — test runner, console/JUnit loggers |
| `kbmTestInsight` | IDE development with TestInsight |
| `kbmTestDataSet` | Asserting on any `TDataSet` — `TClientDataSet`, `TkbmMemTable`, kbmMW datasets, etc. |
| `kbmTestFastMM5` | Enriched leak detection with FastMM5 |
| `kbmTestFastMM4` | Enriched leak detection with FastMM4 |
| `kbmUnitTest.Mock` | Mock data: registry, scenarios, builder |
| `kbmUnitTest.Mock.Generators` | Value generators: `Gen.*` |
| `kbmUnitTest.Mock.DataSet` | Scenario → `TDataSet` bridge (returns `TClientDataSet`; use with `TkbmMemTable` / kbmMW via data copy) |
| `kbmUnitTest.Mock.Diagnostics` | Usage stats, near-miss detection |
| `kbmUnitTest.Mock.Graph` | Publish/Require dependency graph |
## Series Recap
Over six parts we have covered the full kbmUnitTest framework:
- **Getting Started** — First test project, the `Assert` class, `RunTests`.
- **TestInsight** — Real-time IDE feedback with `RunTestInsight`.
- **Advanced Testing** — Fluent API, constraints, parameterized tests, categories, TDataSet assertions, memory leak detection, JUnit XML.
- **Introduction to Mock Data** — Named scenarios, materialization into records and objects, JSON/CSV loading, RTTI attributes.
- **Advanced Mocking** — Inheritance, tabular data, generators, `AsDataSet`, the Mock Data Wizard.
- **Putting It All Together** — Complete project structure, practical patterns, Publish/Require, diagnostics, CI/CD pipelines, best practices.
The framework is designed to grow with your project — start with `Assert.AreEqual` in Part 1 and add fluent chains, mock data, and CI integration as your needs evolve. Every piece is opt-in: use only the units you need, and the rest stays out of your way.
Happy testing!
kbmUnitTest is developed by Kim Bo Madsen at Components4Developers. Visit www.components4developers.com for the latest version and documentation.