{"id":3645,"date":"2026-03-27T12:30:28","date_gmt":"2026-03-27T12:30:28","guid":{"rendered":"https:\/\/www.infragistics.com\/blogs\/?p=3645"},"modified":"2026-03-27T12:32:15","modified_gmt":"2026-03-27T12:32:15","slug":"engineering-fast-data-grids","status":"publish","type":"post","link":"https:\/\/www.infragistics.com\/blogs\/engineering-fast-data-grids","title":{"rendered":"Engineering Fast Data Grids: Lessons from Optimizing Ignite UI for\u00a01M+ Data Records\u00a0"},"content":{"rendered":"\n<p>For developers building finance, banking,&nbsp;ERP,&nbsp;and other data-heavy systems, the data grid is often the primary performance boundary&nbsp;&#8211;&nbsp;the \u201chot loop\u201d where sorting and filtering across large datasets compete for main-thread time. In these cases,&nbsp;small inefficiencies quickly become user-visible and break interaction.&nbsp;<\/p>\n\n\n\n<p>But we found a solution.&nbsp;This post&nbsp;will&nbsp;demonstrate&nbsp;how we&nbsp;optimized&nbsp;sorting and filtering to keep Ignite UI fast at&nbsp;1M+&nbsp;rows across frameworks (Angular, React, Blazor, Web Components).&nbsp;We\u2019ll&nbsp;focus on the concrete&nbsp;data grid&nbsp;sorting&nbsp;and&nbsp;filtering changes that worked&nbsp;and the ones that&nbsp;didn\u2019t.&nbsp;<\/p>\n\n\n\n<p>Let\u2019s&nbsp;see what we did.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"the-reality-before-optimization-where-things-started-to-break\">The Reality Before Optimization: Where Things Started to Break&nbsp;<\/h2>\n\n\n\n<p>Every performance problem starts the same way&nbsp;&#8211;&nbsp;an architecture that was reasonable&nbsp;at&nbsp;one scale becomes a bottleneck at another.&nbsp;Features like&nbsp;Ignite UI&#8217;s sorting, grouping, and filtering&nbsp;were&nbsp;no exception.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Sorting: The Hidden Cost of Value Resolution&nbsp;<\/h3>\n\n\n\n<p>The core sorting pipeline worked recursively,&nbsp;processing each 
sorting expression in sequence. For multi-column sorting, after&nbsp;sorting by&nbsp;the primary expression, it grouped equal-value records and recursively sorted each group by the next expression. Clean, correct, and completely reasonable for small datasets.&nbsp;<\/p>\n\n\n\n<p>The problem was the value resolver.&nbsp;<\/p>\n\n\n\n<p>Because the grid supports multiple column data types&nbsp;&#8211;&nbsp;date&nbsp;portions of Date objects, time&nbsp;portions of Date objects, strings, numbers,&nbsp;hierarchical&nbsp;key-value&nbsp;objects&nbsp;&#8211;&nbsp;every value comparison required resolving the field value at runtime. The value resolver handled path traversal, date and time parsing and normalization, and number parsing&nbsp;&#8211;&nbsp;all on every single comparison. It was called twice per comparison&nbsp;operation&nbsp;&#8211;&nbsp;once for each side:&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">compare(recordA, recordB): \n\n    valA = resolveValue(recordA, field)  \/\/ path traversal + date parsing + type coercion \n\n    valB = resolveValue(recordB, field)  \/\/ same cost, every single comparison \n\n    return compareValues(valA, valB) <\/pre>\n\n\n\n<p>For a standard comparison sort, that&#8217;s&nbsp;<math data-latex=\"O(n log n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mi>l<\/mi><mi>o<\/mi><mi>g<\/mi><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n log n)<\/annotation><\/semantics><\/math> comparisons, with the resolver called twice per comparison. At 100K rows: 3.4&nbsp;million resolver calls per sorted column. At 1M rows: 40 million resolver calls. 
Each one&nbsp;doing&nbsp;runtime path resolution and potential date parsing, with no caching between calls.&nbsp;<\/p>\n\n\n\n<p>But the&nbsp;sort&nbsp;comparer&nbsp;wasn&#8217;t&nbsp;the only place the value resolver was invoked. For multi-column sorting, after sorting by expression&nbsp;<strong>i<\/strong>, the algorithm needed to find groups of equal values before sorting by expression&nbsp;<strong>i+1<\/strong>. This group detection iterated over every record, calling the resolver once per record &#8211; an&nbsp;additional&nbsp;<math data-latex=\"O(n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n)<\/annotation><\/semantics><\/math> pass on top of the sort.&nbsp;<\/p>\n\n\n\n<p>So,&nbsp;for a two-column sort over 1M rows, the value resolver was invoked on the order of&nbsp;<math data-latex=\"O(n log n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mi>l<\/mi><mi>o<\/mi><mi>g<\/mi><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n log n)<\/annotation><\/semantics><\/math> + <math data-latex=\"O(n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n)<\/annotation><\/semantics><\/math> times for the first expression alone &#8211; before the second expression was even touched.&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>At 10K rows: imperceptible.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>At 100K rows: a noticeable lag, but tolerable.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>At 1M rows: the main thread froze for several seconds. 
In rare cases, deep recursive call stacks&nbsp;caused&nbsp;a stack overflow.&nbsp;<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Grouping: Same Root, Compounded Cost&nbsp;<\/h3>\n\n\n\n<p>Grouping extends the same recursive pattern and requires the data to be sorted first. As a result,&nbsp;the resolver cost was paid once during sort, then again during group boundary detection.&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">groupDataRecursive(data, state, level): \n\n    while i &lt; data.length: \n\n        group = groupByExpression(data, i, expressions[level]) \n\n            \/\/ resolver called once for group anchor value \n\n            \/\/ resolver called again for every subsequent record in the group \n\n  \n\n        if level &lt; expressions.length - 1: \n\n            groupDataRecursive(group, state, level + 1)  \/\/ recurse into subgroups \n\n        else: \n\n            result = result.concat(...)    \/\/ array allocation per group boundary <\/pre>\n\n\n\n<p>Two compounding costs here:&nbsp;<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li>The value resolver was invoked repeatedly for values that had already been resolved during sorting,&nbsp;with no shared cache between the two phases.&nbsp;<\/li>\n<\/ol>\n\n\n\n<ol start=\"2\" class=\"wp-block-list\">\n<li>Each group boundary produced new arrays via&nbsp;<strong>concat<\/strong>&nbsp;and&nbsp;<strong>slice<\/strong>, i.e., allocations that added measurable GC pressure at scale across potentially thousands of groups.&nbsp;<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Excel-Style Filtering: Paying the Full Cost Twice&nbsp;<\/h3>\n\n\n\n<p>Quick filtering and advanced filtering were fast. 
Excel-style filtering (ESF) was&nbsp;not,&nbsp;and the reason was architectural.&nbsp;<\/p>\n\n\n\n<p>When the ESF dialog opened, it triggered a full initialization pipeline synchronously on the main thread:&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1009\" height=\"52\" src=\"https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image.png\" alt=\"\" class=\"wp-image-3646\" srcset=\"https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image.png 1009w, https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-300x15.png 300w, https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-768x40.png 768w, https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-480x25.png 480w\" sizes=\"auto, (max-width: 1009px) 100vw, 1009px\" \/><\/figure>\n\n\n\n<p>The dialog&#8217;s opening animation was effectively paused until all four operations&nbsp;were&nbsp;completed. 
With large datasets this was a user-visible&nbsp;freeze:&nbsp;the dialog&nbsp;didn&#8217;t&nbsp;merely&nbsp;appear janky.&nbsp;It simply&nbsp;didn&#8217;t&nbsp;appear at all until the pipeline finished.&nbsp;<\/p>\n\n\n\n<p>The more critical problem: this entire pipeline ran again when the user clicked Apply&nbsp;even though the underlying data&nbsp;hadn&#8217;t&nbsp;changed between open and apply:&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">onApplyClick(): \n    filter data \n\n    re-run full ESF initialization  \/\/ same 4 steps, same cost, same blocking \n\n    close dialog <\/pre>\n\n\n\n<p>This is why ESF was significantly slower than advanced filtering in practice: it was doing the same <math data-latex=\"O(n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n)<\/annotation><\/semantics><\/math> work twice per operation, blocking the main thread both times.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Why &#8220;Just Virtualize More&#8221; Wasn&#8217;t the Answer&nbsp;<\/h3>\n\n\n\n<p>Virtualization ensures that only the visible rows are rendered as DOM nodes, regardless of dataset size.&nbsp;That&#8217;s&nbsp;what makes scrolling through 1M rows&nbsp;feasible. But the data operations that&nbsp;determine&nbsp;what those rows&nbsp;contain&nbsp;&#8211;&nbsp;sorting, filtering, grouping&nbsp;&#8211;&nbsp;run against the full dataset every time. 
Virtualization&nbsp;can&#8217;t&nbsp;help there.&nbsp;Every bottleneck above lived in the data pipeline, before a single row was&nbsp;rendered:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The resolver was called&nbsp;<math data-latex=\"O(n log n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mi>l<\/mi><mi>o<\/mi><mi>g<\/mi><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n log n)<\/annotation><\/semantics><\/math> + <math data-latex=\"O(n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n)<\/annotation><\/semantics><\/math> times per&nbsp;sort&nbsp;expression, regardless of how many rows were visible.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Grouping paid the resolution cost again on top of sorting, plus&nbsp;concat\/slice&nbsp;allocation&nbsp;pressure across group boundaries.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>ESF&#8217;s entire initialization pipeline iterated the full dataset synchronously,&nbsp;on&nbsp;open and again on apply.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>Virtualization is the right tool for making large grids scrollable. It does nothing&nbsp;for making&nbsp;sorting, filtering, and grouping&nbsp;fast. Those&nbsp;required&nbsp;a different&nbsp;type&nbsp;of fix.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"measuring-the-problem\">Measuring the Problem: How We Benchmarked Grid Performance&nbsp;<\/h2>\n\n\n\n<p>Anecdotes like &#8220;it feels slow&#8221;&nbsp;and \u201cit feels fast\u201d&nbsp;are a starting point, not a diagnosis. To&nbsp;optimize with&nbsp;confidence, we needed reproducible numbers&nbsp;instead of&nbsp;impressions.&nbsp;<\/p>\n\n\n\n<p>It&#8217;s&nbsp;tempting to rely on DevTools flame graphs or FPS counters to diagnose grid performance. 
But those measure the full rendering pipeline&nbsp;&#8211;&nbsp;change detection, DOM updates, layout&nbsp;&#8211;&nbsp;which can obscure where the time is&nbsp;actually&nbsp;spent: in the data pipeline.&nbsp;<\/p>\n\n\n\n<p>To pinpoint the algorithm cost specifically, we instrumented the sorting, grouping, and filtering logic directly using a lightweight wrapper around the native&nbsp;<a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/Performance\" target=\"_blank\" rel=\"noreferrer noopener\">Performance API<\/a>:&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">startMeasure('sorting') \n\n        -> run sorting algorithm \n\ngetMeasures('sorting') \/\/ returns the duration <\/pre>\n\n\n\n<p>This gave us sub-millisecond timing on&nbsp;algorithms&nbsp;in isolation&nbsp;without&nbsp;rendering noise&nbsp;or&nbsp;change detection overhead.&nbsp;Just the raw data pipeline cost. Worth&nbsp;noting: all numbers below were recorded in Angular dev mode. 
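<\/p>\n\n\n\n<p>As a concrete illustration, such a wrapper can be a thin layer over&nbsp;<strong>performance.mark<\/strong>&nbsp;and&nbsp;<strong>performance.measure<\/strong>. The sketch below is ours&nbsp;&#8211;&nbsp;the function names mirror the pseudocode above, not the library&#8217;s public API:&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/ Sketch: thin wrapper over the native Performance API \n\nfunction startMeasure(name) { \n    performance.mark(name + ':start') \n} \n\nfunction getMeasures(name) { \n    performance.mark(name + ':end') \n    return performance.measure(name, name + ':start', name + ':end').duration \n} \n\nstartMeasure('sorting') \nrunSortingAlgorithm(data)               \/\/ operation under test \nconst elapsed = getMeasures('sorting')  \/\/ duration in milliseconds <\/pre>\n\n\n\n<p>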
Production builds would be faster, but dev mode overhead is consistent across runs,&nbsp;so the relative differences hold.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Datasets&nbsp;<\/h3>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Rows: \n        10K \/ 100K \/ 1M \nColumns:   \n        string - names, categories (with duplicates) \n        number - IDs, prices, quantities (with duplicates) \n        date - formatted date strings (require parsing) \n        time - HH:mm:ss formatted strings (require parsing) <\/pre>\n\n\n\n<p>The presence of duplicate values in sort and group columns was intentional &#8211; it reflects realistic data distributions and directly&nbsp;impacts&nbsp;grouping cost, since more duplicate values mean more group boundary detections and deeper recursive calls. Date and time columns used formatted string representations. This is important for interpreting the results: every comparison involving these columns&nbsp;requires&nbsp;parsing the string into a comparable value at runtime.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Scenarios and Results&nbsp;<\/h3>\n\n\n\n<p>At 10K and 100K rows, most operations were acceptable. 
At 1 million rows, the picture changed dramatically:&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td>Scenario&nbsp;<\/td><td>Time (1M rows)&nbsp;<\/td><\/tr><tr><td>Single column sort &#8211; string&nbsp;<\/td><td>3.38s&nbsp;<\/td><\/tr><tr><td>Single column sort &#8211; number&nbsp;<\/td><td>1.50s&nbsp;<\/td><\/tr><tr><td>Multi-column sort &#8211; string \u2192 number&nbsp;<\/td><td>3.88s&nbsp;<\/td><\/tr><tr><td>Grouping &#8211; single string column (sort + group)&nbsp;<\/td><td>3.31s&nbsp;<\/td><\/tr><tr><td>Grouping algorithm only (after sort)&nbsp;<\/td><td>0.50s&nbsp;<\/td><\/tr><tr><td>Grouping&nbsp;&#8211;&nbsp;two columns on grid load&nbsp;<\/td><td>3.86s&nbsp;<\/td><\/tr><tr><td>Grouping&nbsp;&#8211;&nbsp;two columns (after sort)&nbsp;<\/td><td>1.01s&nbsp;<\/td><\/tr><tr><td>ESF open&nbsp;&#8211;&nbsp;number column (15K unique values)&nbsp;<\/td><td>1.60s&nbsp;<\/td><\/tr><tr><td>ESF open&nbsp;&#8211;&nbsp;date column (274 unique values)&nbsp;<\/td><td>5.20s&nbsp;<\/td><\/tr><tr><td>ESF open&nbsp;&#8211;&nbsp;time column (86K unique values)&nbsp;<\/td><td>6.60s&nbsp;<\/td><\/tr><tr><td>ESF apply&nbsp;&#8211;&nbsp;number column&nbsp;<\/td><td>1.37s&nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Reading the Numbers&nbsp;<\/h3>\n\n\n\n<p>Several patterns&nbsp;emerge&nbsp;immediately, and each one points directly at a specific architectural problem.&nbsp;<\/p>\n\n\n\n<p><strong>Sorting dominates grouping&nbsp;cost<\/strong>. The grouping algorithm alone took&nbsp;<strong>0.50s<\/strong>. Full sort + group took 3.31s&nbsp;&#8211;&nbsp;a 6.6x&nbsp;difference. The grouping logic itself was never the bottleneck. 
Sorting was, and specifically the value resolver being called&nbsp;<math data-latex=\"O(n log n) \"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mi>l<\/mi><mi>o<\/mi><mi>g<\/mi><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n log n) <\/annotation><\/semantics><\/math>times inside the&nbsp;sort&nbsp;comparator.&nbsp;<\/p>\n\n\n\n<p>String sorting is more than twice as slow as number sorting (3.38s vs 1.50s). Numbers compare with a simple subtraction. Strings go through the value resolver, potential normalization for case-insensitive sorts, and a&nbsp;string&nbsp;comparison. That difference compounds across ~20 million comparisons at 1M rows.&nbsp;<\/p>\n\n\n\n<p>The ESF date anomaly is the most revealing data point. The date column had only 274 unique values &#8211; a tiny list compared to&nbsp;15K in&nbsp;the number column. Yet opening the ESF dialog took 5.20s vs 1.60s for the number column. The culprit&nbsp;wasn&#8217;t&nbsp;iteration&nbsp;count. It was&nbsp;date&nbsp;parsing cost per item. The full dataset was iterated during ESF initialization, and every value went through string-to-date parsing. Fewer unique values&nbsp;didn&#8217;t&nbsp;help because the parsing happened across all records, not just the unique ones. The time column (6.60s with 86K unique values + time string parsing) confirms the same pattern: formatted string columns are expensive regardless of cardinality.&nbsp;<\/p>\n\n\n\n<p>ESF open + ESF apply = the full cost paid twice. For a number column,&nbsp;the cheapest case&nbsp;&#8211;&nbsp;that&#8217;s&nbsp;1.60s + 1.37s = ~3s of blocking per filter operation. For date or time&nbsp;columns&nbsp;the combined cost would be significantly worse.&nbsp;<\/p>\n\n\n\n<p>The numbers confirmed what the architecture review suggested: the value resolver, the recursive grouping passes, and the ESF double-initialization were the bottlenecks. 
Now we&nbsp;had&nbsp;the data to prove it.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"optimization-1-rethinking-the-sorting-pipeline\">Optimization #1: Rethinking the Sorting Pipeline&nbsp;<\/h2>\n\n\n\n<p>With a clear baseline&nbsp;established, the focus shifted to the data pipeline itself. Three changes drove&nbsp;the majority of&nbsp;the improvement: applying the&nbsp;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Schwartzian_transform\" target=\"_blank\" rel=\"noreferrer noopener\">Schwartzian transform<\/a>&nbsp;to&nbsp;sorting, refactoring multi-column sorting from recursive to iterative, and reworking the grouping algorithm to eliminate both recursion and redundant array allocations.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Fix #1: The&nbsp;Schwartzian&nbsp;Transform&nbsp;<\/h3>\n\n\n\n<p>The original&nbsp;sort&nbsp;comparator resolved field values inside the comparison function itself &#8211; meaning for every pair of records compared, the value resolver ran twice.&nbsp;<\/p>\n\n\n\n<p>The&nbsp;Schwartzian&nbsp;transform is a classic optimization for expensive sort keys: resolve each value once upfront, sort on the cached values, then map back to the original records. 
This&nbsp;improves&nbsp;field resolution from&nbsp;&nbsp;<math data-latex=\"O(n log n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mi>l<\/mi><mi>o<\/mi><mi>g<\/mi><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n log n)<\/annotation><\/semantics><\/math> to&nbsp;<math data-latex=\"O(n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n)<\/annotation><\/semantics><\/math>:&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/ Before: resolve inside comparator - O(n log n) resolver calls \n\nsort(data, field): \n\n    data.sort((a, b) => compare(resolveValue(a), resolveValue(b))) \n\n  \n\n\/\/ After: Schwartzian transform - O(n) resolver calls \n\nsort(data, field): \n\n    prepared = data.map(record => [record, resolveValue(record, field)])  \/\/ O(n) - resolve once \n\n    prepared.sort(([, valA], [, valB]) => compareValues(valA, valB))      \/\/ O(n log n) \u2014 compare only \n\n    return prepared.map(([record]) => record)                              \/\/ O(n) - unwrap <\/pre>\n\n\n\n<p>The&nbsp;comparer&nbsp;becomes a pure value comparison with no field resolution, no path traversal, no date parsing. For&nbsp;ignoreCase, the string normalization call moves into the map phase &#8211; resolved once per record, not once per comparison side.&nbsp;<\/p>\n\n\n\n<p>For date and time&nbsp;columns,&nbsp;the impact is especially significant: string-to-date parsing moves from inside the hot&nbsp;comparer&nbsp;loop to a single upfront pass. 
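<\/p>\n\n\n\n<p>As an illustrative sketch (ours, not the library&#8217;s exact code&nbsp;&#8211;&nbsp;<strong>parseDate<\/strong>&nbsp;stands in for the resolver&#8217;s string-to-date logic, and&nbsp;<strong>shipDate<\/strong>&nbsp;is a hypothetical field), a date-keyed sort with the transform looks like:&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/ Illustrative sketch: parse each date once, then sort on cached timestamps \n\nprepared = data.map(record => [record, parseDate(record.shipDate).getTime()]) \n\nprepared.sort((a, b) => a[1] - b[1])      \/\/ pure numeric compare - no parsing \n\nsorted = prepared.map(pair => pair[0])    \/\/ n parse calls total, not n log n <\/pre>\n\n\n\n<p>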
At 1M rows&nbsp;that&#8217;s&nbsp;the difference between ~40 million parse calls and exactly 1 million, which is <math data-latex=\"O(n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n)<\/annotation><\/semantics><\/math> with a constant multiplier&nbsp;of 1,&nbsp;regardless of column type.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Fix #2: Iterative Multi-Column Sorting&nbsp;<\/h3>\n\n\n\n<p>The original multi-column sort was recursive: sort by expression 0, find&nbsp;same-value groups, recursively sort each group by expression 1, and so on. Correct, but with two problems: recursive call stack depth, and the value resolver being called again inside group detection for every record on every pass.&nbsp;<\/p>\n\n\n\n<p>The&nbsp;new approach&nbsp;iterates backwards through expressions, which is&nbsp;a deliberate choice to&nbsp;maintain&nbsp;sort stability, matching the behavior of the original recursive implementation:&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/ Before: recursive \n\nsortDataRecursive(data, expressions, index): \n\n    sort by expressions[index] \n\n    for each equal-value group: \n\n        sortDataRecursive(group, expressions, index + 1)  \/\/ recursive \n\n  \n\n\/\/ After: iterative - reverse pass maintains stability \n\nsortData(data, expressions): \n\n    for i = expressions.length - 1 down to 0: \n\n        data = expressions[i].strategy.sort(data)    \/\/ iterative, no recursion <\/pre>\n\n\n\n<p>Iterating in reverse means the most significant&nbsp;sort&nbsp;key is applied last. It becomes the final tiebreaker, and the overall order&nbsp;remains&nbsp;stable. 
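<\/p>\n\n\n\n<p>A tiny worked example (ours; it relies on&nbsp;<strong>Array.prototype.sort<\/strong>&nbsp;being stable, which is guaranteed since ES2019) shows why the reverse pass reproduces a two-key sort:&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">rows = [ {cat: 'B', n: 2}, {cat: 'A', n: 3}, {cat: 'B', n: 1}, {cat: 'A', n: 1} ] \n\n\/\/ Desired order: cat ascending, then n ascending - apply keys in reverse \n\nrows.sort((a, b) => a.n - b.n)                    \/\/ least significant key first \n\nrows.sort((a, b) => a.cat.localeCompare(b.cat))   \/\/ most significant key last \n\n\/\/ Result: A1, A3, B1, B2 - identical to a single two-key comparator <\/pre>\n\n\n\n<p>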
No recursive call stack, no intermediate group detection passes between expressions,&nbsp;no&nbsp;additional&nbsp;resolver calls. The&nbsp;Schwartzian&nbsp;transform applies independently to each expression pass.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Fix #3: Iterative Grouping with a Stack&nbsp;<\/h3>\n\n\n\n<p>The grouping algorithm had two independent cost sources: the recursive call structure and&nbsp;<strong>concat<\/strong>\/<strong>slice<\/strong>&nbsp;array allocations at every group boundary. Both were addressed together.&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/ Before: recursive with concat\/slice \n\ngroupDataRecursive(data, state, level): \n\n    group = data.slice(start, end)                \/\/ allocation per group \n\n    result = result.concat(groupRow, group)        \/\/ allocation per group \n\n    groupDataRecursive(group, state, level + 1)   \/\/ recursive \n\n  \n\n\/\/ After: iterative with explicit stack + direct push \n\ngroupData(data, state): \n\n    stack = [{ data, level: 0 }] \n\n    while stack.length > 0: \n\n        { data, level } = stack.pop() \n\n        for each group boundary in data: \n\n            result.push(groupRow)                 \/\/ no intermediate allocation \n\n            result.push(...groupRecords)         \/\/ no intermediate allocation \n\n            if level &lt; expressions.length - 1: \n\n                stack.push({ data: groupRecords, level: level + 1 }) <\/pre>\n\n\n\n<p>Array pre-allocation&nbsp;wasn&#8217;t&nbsp;feasible&nbsp;here&nbsp;because&nbsp;the number of groups&nbsp;isn&#8217;t&nbsp;known upfront. 
But switching from&nbsp;<strong>concat<\/strong>\/<strong>slice<\/strong>&nbsp;to direct push&nbsp;eliminated&nbsp;intermediate array allocations at every group boundary. At scale, across potentially thousands of group boundaries, this made a measurable difference in both execution time and GC pressure.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The results&nbsp;<\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"936\" height=\"412\" src=\"https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-2.png\" alt=\"\" class=\"wp-image-3654\" srcset=\"https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-2.png 936w, https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-2-300x132.png 300w, https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-2-768x338.png 768w, https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-2-480x211.png 480w\" sizes=\"auto, (max-width: 936px) 100vw, 936px\" \/><\/figure>\n\n\n\n<p>Raw milliseconds tell one part of the story. 
The more important metric is perceived responsiveness:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A single-column string sort at 1M rows went from&nbsp;<strong>3.38s &#8211;&nbsp;<\/strong>a visible, jarring freeze &#8211; to&nbsp;<strong>0.42s<\/strong>, imperceptible to most users&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Multi-column sort dropped from&nbsp;<strong>3.88s<\/strong>&nbsp;to&nbsp;<strong>0.57s &#8211;&nbsp;<\/strong>users applying sequential sorts no longer experience compounding delays&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Two-column grouping on grid load went from 3.86s to 0.88s&nbsp;&#8211;&nbsp;the grid feels ready almost&nbsp;immediately&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>The gains compound in real usage: a user who sorts, then groups, then re-sorts is no longer waiting several seconds&nbsp;for each of&nbsp;those operations. The pipeline runs fast enough that interaction feels continuous rather than punctuated by freezes.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"optimization-2-excel-style-filtering-at-scale\">Optimization #2: Excel-Style Filtering at Scale&nbsp;<\/h2>\n\n\n\n<p>Sorting and grouping were the most visible bottlenecks, but Excel-style filtering had its own set of problems.&nbsp;Quick filtering and advanced filtering&nbsp;operate&nbsp;on the data directly: a predicate runs against each record and returns a match. Simple, linear, predictable.&nbsp;<\/p>\n\n\n\n<p>Excel-style filtering is different. 
Before the dialog can show anything, it needs to build a complete picture of the data&nbsp;with&nbsp;every unique value in the column, formatted for display, sorted, and cross-referenced against the current filter state.&nbsp;That&#8217;s&nbsp;not&nbsp;just&nbsp;a filtering&nbsp;operation.&nbsp;That&#8217;s&nbsp;a full data pipeline, and it ran synchronously on the main thread every time the dialog&nbsp;opens.&nbsp;<\/p>\n\n\n\n<p>As mentioned above the original Excel-style filtering initialization did four sequential passes over the data:&nbsp;&nbsp;<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li>Filter the dataset if there are applied filters beforehand \u2013 <math data-latex=\"O(n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n)<\/annotation><\/semantics><\/math> pass&nbsp;<\/li>\n<\/ol>\n\n\n\n<ol start=\"2\" class=\"wp-block-list\">\n<li>Sort the filtered values \u2013&nbsp;<math data-latex=\"O(n logn)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mi>l<\/mi><mi>o<\/mi><mi>g<\/mi><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n logn)<\/annotation><\/semantics><\/math>&nbsp;&nbsp;<\/li>\n<\/ol>\n\n\n\n<ol start=\"3\" class=\"wp-block-list\">\n<li>Extract labels + format values \u2013&nbsp;<math data-latex=\"O(n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n)<\/annotation><\/semantics><\/math> pass&nbsp;<\/li>\n<\/ol>\n\n\n\n<ol start=\"4\" class=\"wp-block-list\">\n<li>Deduplicate -&gt; build unique items list \u2013&nbsp;<math data-latex=\"O(n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mo form=\"postfix\" 
stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n)<\/annotation><\/semantics><\/math> pass&nbsp;<\/li>\n<\/ol>\n\n\n\n<p>The Apply re-initialization was the most wasteful part: the underlying data&nbsp;hadn&#8217;t&nbsp;changed between open and apply, but the entire pipeline ran again from scratch regardless.&nbsp;<\/p>\n\n\n\n<p>Beyond the double cost, the pipeline itself had an inefficiency: steps 2, 3, and 4 were all&nbsp;operating&nbsp;on the full filtered dataset. Sorting happened before deduplication,&nbsp;meaning the grid was sorting potentially millions of records when it only needed to sort the unique values. Label extraction and deduplication were also separate passes over the same data, visiting every value twice unnecessarily.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Date and Time Anomaly&nbsp;<\/h3>\n\n\n\n<p>The inefficiency was most visible with date and time columns. From the benchmarks in&nbsp;<a href=\"http:\/\/measuring-the-problem\" target=\"_blank\" rel=\"noreferrer noopener\">Measuring the Problem<\/a>:&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td>Column&nbsp;<\/td><td>Unique values&nbsp;<\/td><td>ESF open time&nbsp;<\/td><\/tr><tr><td>Number&nbsp;<\/td><td>15k&nbsp;<\/td><td>1.60s&nbsp;<\/td><\/tr><tr><td>Date&nbsp;<\/td><td>274&nbsp;<\/td><td>5.20s&nbsp;<\/td><\/tr><tr><td>Time&nbsp;<\/td><td>86k&nbsp;<\/td><td>6.60s&nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>The date column had 274 unique values&nbsp;&#8211;&nbsp;far fewer than the number column&#8217;s 15K&nbsp;&#8211;&nbsp;yet took 3\u00d7 longer to open. The reason: label extraction and value formatting involved date parsing across the entire dataset, not just the unique values. Every record was visited, and every visit triggered string-to-date conversion. 
Fewer unique values&nbsp;didn&#8217;t&nbsp;help because the parsing happened during the full-data pass, not after deduplication.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Fix #1:&nbsp;Eliminate&nbsp;the Double Initialization&nbsp;<\/h3>\n\n\n\n<p>The most impactful change was structural: ESF no longer re-initializes on Apply. The unique values list built on open is reused directly when the user clicks Apply. The second full pipeline run is gone entirely.&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/ Before \n\nonApplyClick(): \n\n    re-run full ESF initialization    \/\/ O(n) - redundant \n\n    close dialog \n\n  \n\n\/\/ After \n\nonApplyClick(): \n\n    apply filter using existing list  \/\/ O(1) - list already built \n\n    close dialog <\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Fix #2: Single-Pass Deduplication with Deferred Sorting&nbsp;<\/h3>\n\n\n\n<p>The second change restructured the pipeline entirely,&nbsp;collapsing label extraction and deduplication into a single pass, then sorting only the deduplicated result:&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/ Before: separate passes \n\nfilteredData \u2192 sort \u2192 extract labels (pass 1) \u2192 deduplicate (pass 2) \n\n  \n\n\/\/ After: deduplicate in single pass \u2192 sort unique list only \n\nfilteredData (n records) \n\n    \u2192 single pass: \n\n        resolve + normalize + deduplicate inline   \/\/ O(n), parse only for new unique values \n\n    \u2192 unique list (m items) \n\n    \u2192 sort unique list                             \/\/ O(m log m) where m &lt;= n 
<\/pre>\n\n\n\n<p>Two compounding improvements here:&nbsp;<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li>Label formatting and date parsing now only run for unique values, not for every record in the dataset. For a date column with 274 unique values in a 1M row dataset,&nbsp;that&#8217;s&nbsp;the difference between 1M parse calls and 274.&nbsp;<\/li>\n<\/ol>\n\n\n\n<ol start=\"2\" class=\"wp-block-list\">\n<li>Sorting now&nbsp;operates&nbsp;over the deduplicated list, not the full filtered dataset. At 274 unique values, sorting is effectively instantaneous. Even for the time column with 86K unique values, sorting 86K items is orders of magnitude cheaper than sorting 1M &#8211; and since each comparison in that sort involves a time string parse, shrinking the&nbsp;sort&nbsp;input compounds the savings further.&nbsp;<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Fix #3: Non-Blocking Dialog Open&nbsp;<\/h3>\n\n\n\n<p>The third change addressed perceived performance directly: the dialog now opens&nbsp;immediately, before the data pipeline runs. A loading indicator is shown while initialization&nbsp;completes. This means the UI is never frozen waiting for a dialog that&nbsp;hasn&#8217;t&nbsp;appeared yet. 
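<\/p>\n\n\n\n<p>In pattern form, the open path looks roughly like this (a sketch with hypothetical names; an explicit task queue stands in for deferral via setTimeout(fn, 0)):<\/p>\n\n\n\n

```javascript
// Sketch of the non-blocking open pattern (hypothetical names, not the
// actual Ignite UI API). A task queue stands in for setTimeout(fn, 0).
const taskQueue = [];
const defer = (fn) => taskQueue.push(fn);

function openFilterDialog(dialog, runPipeline) {
  dialog.visible = true;   // the dialog shell appears immediately
  dialog.loading = true;   // spinner shown while the unique list is built
  defer(() => {            // heavy O(n) init runs after the dialog paints
    dialog.items = runPipeline();
    dialog.loading = false;
  });
}

const dialog = { visible: false, loading: false, items: null };
openFilterDialog(dialog, () => ['Finance', 'HR', 'Sales']);
// Here the dialog is already visible, but the unique list is not built yet.
taskQueue.forEach((fn) => fn()); // the "event loop" runs the deferred init
```

\n\n\n\n<p>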
Even if initialization takes time, the user sees immediate feedback&nbsp;&#8211;&nbsp;the dialog is&nbsp;open&nbsp;and&nbsp;something is happening.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Fix #4: Debounced Quick Filtering&nbsp;<\/h3>\n\n\n\n<p>A smaller but meaningful improvement on the quick filtering side: previously, the filtering pipe&nbsp;triggered on&nbsp;every keystroke,&nbsp;meaning a user typing &#8220;Finance&#8221; would trigger 7 filter operations in rapid succession, each one iterating the full dataset.&nbsp;<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/\/ Before: filter on every keystroke \n\ninput: \"F\"       \u2192 filter            \/\/ O(n) \n\ninput: \"Fi\"      \u2192 filter            \/\/ O(n) \n\ninput: \"Fin\"     \u2192 filter            \/\/ O(n) \n\n... 
\n\n  \n\n\/\/ After: debounced \n\ninput: \"F\", \"Fi\", \"Fin\", \"Fina\", \"Finan\", \"Financ\", \"Finance\" \n\n\u2192 pause detected \u2192 filter once       \/\/ O(n) - only when user stops typing <\/pre>\n\n\n\n<p>For large datasets, this alone reduces the number of main-thread filter operations for a typical search from 5\u201310 down to 1\u20132.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Results&nbsp;<\/h3>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"688\" src=\"https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-1-1024x688.png\" alt=\"\" class=\"wp-image-3647\" srcset=\"https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-1-1024x688.png 1024w, https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-1-300x201.png 300w, https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-1-768x516.png 768w, https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-1-480x322.png 480w, https:\/\/www.infragistics.com\/blogs\/wp-content\/uploads\/2026\/03\/image-1.png 1029w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>The ESF&nbsp;apply&nbsp;number is particularly significant: at 90ms,&nbsp;it&#8217;s&nbsp;now in the same performance range as quick filtering and advanced filtering. The three filtering modes are now cost-comparable for the first time.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What This Means in Practice&nbsp;<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The ESF dialog appears&nbsp;immediately&nbsp;on click. No more waiting for a dialog that&nbsp;doesn&#8217;t&nbsp;show up.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The overall time for data to load inside the ESF dialog is faster across all column types. 
Users spend less time staring at a loading indicator even when the dataset is large.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Applying a filter no longer repeats the full initialization cost.&nbsp;It&#8217;s&nbsp;effectively free compared to before.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Quick filtering no longer hammers the main thread on fast typing. Debouncing ensures the pipeline runs only when the user has finished or paused.&nbsp;<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"why-these-changes-work-across-frameworks\">Why These Changes Work Across Frameworks&nbsp;<\/h2>\n\n\n\n<p>The performance improvements covered&nbsp;above&nbsp;were made in the Angular codebase. But they&nbsp;don&#8217;t&nbsp;stay there.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">One Core, Multiple Frameworks&nbsp;<\/h3>\n\n\n\n<p>Ignite UI&#8217;s grid is built in Angular &#8211; usable directly as a native Angular&nbsp;component&nbsp;with full access to&nbsp;Angular&#8217;s&nbsp;template syntax, DI system, and change detection. It is also packaged as a Web Component using Angular Elements, making it available outside Angular entirely. React and Blazor consume that Web Component through thin framework-specific wrappers that bridge the custom element&#8217;s API into React props and Blazor&nbsp;parameters&nbsp;respectively.&nbsp;<\/p>\n\n\n\n<p>The data pipeline&nbsp;&#8211;&nbsp;sorting, grouping, filtering&nbsp;&#8211;&nbsp;lives entirely in the Angular base. Angular Elements&nbsp;packages it&nbsp;into the Web Component as-is. React and Blazor never touch it. 
Every algorithmic improvement made in the Angular codebase&nbsp;propagates through&nbsp;the full chain automatically.&nbsp;It&#8217;s&nbsp;worth being precise about what &#8220;wrapper&#8221; means here.&nbsp;It&#8217;s&nbsp;a thin integration layer, not a reimplementation.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Why the Algorithm Improvements Are Framework-Agnostic&nbsp;<\/h3>\n\n\n\n<p>The&nbsp;Schwartzian&nbsp;transform, the iterative grouping stack, and the single-pass ESF deduplication are pure data operations. They take an array in and return a transformed array out. They have no knowledge of&nbsp;Angular&#8217;s&nbsp;change detection,&nbsp;React&#8217;s&nbsp;reconciler, or Blazor&#8217;s render tree &#8211; and&nbsp;that&#8217;s&nbsp;precisely why they propagate so cleanly across all four platforms.&nbsp;<\/p>\n\n\n\n<p>The improvements are JavaScript engine gains:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Fewer resolver calls per&nbsp;sort&nbsp;operation.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Fewer intermediate array allocations per group boundary.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Less GC pressure across the full pipeline.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Shorter main thread blocking time&nbsp;on&nbsp;every data operation.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>None of these are framework concepts.&nbsp;A faster sort improves performance regardless of whether the result is&nbsp;rendered&nbsp;by Angular, React, Web Components, or Blazor&nbsp;because the optimization occurs in the data layer before the UI framework&nbsp;renders&nbsp;it.&nbsp;<\/p>\n\n\n\n<p>For developers evaluating which grid to use: the performance story is the same across frameworks because the engine is the same across frameworks. 
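<\/p>\n\n\n\n<p>That framework independence is easy to see in the Schwartzian transform itself. A minimal JavaScript sketch of the decorate&#8211;sort&#8211;undecorate pattern (illustrative only, not the library&#8217;s actual implementation):<\/p>\n\n\n\n

```javascript
// Decorate-sort-undecorate (Schwartzian transform): resolve each record's
// sort key once, O(n), instead of inside every comparison, O(n log n).
function schwartzianSort(records, resolveValue, compare) {
  return records
    .map((rec) => [rec, resolveValue(rec)]) // decorate: one resolver call per record
    .sort((a, b) => compare(a[1], b[1]))    // compare pre-resolved values only
    .map((pair) => pair[0]);                // undecorate: back to plain records
}

// Pure array-in, array-out: no framework involved anywhere.
const rows = [{ hired: '2021-06-01' }, { hired: '2019-02-15' }, { hired: '2020-11-30' }];
const sorted = schwartzianSort(
  rows,
  (r) => Date.parse(r.hired), // resolver runs once per record, not per comparison
  (a, b) => a - b
);
console.log(sorted.map((r) => r.hired)); // ['2019-02-15', '2020-11-30', '2021-06-01']
```

\n\n\n\n<p>The resolver runs exactly n times instead of roughly 2 n log n, and nothing in the function touches a framework API.<\/p>\n\n\n\n<p>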
The numbers in this post&nbsp;aren&#8217;t&nbsp;Angular numbers.&nbsp;They&#8217;re&nbsp;data pipeline numbers, and the data pipeline is shared.&nbsp;<br>&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"what-this-means-for-enterprise-teams\">What This Means for Enterprise Teams&nbsp;<\/h2>\n\n\n\n<p>Engineering performance wins are easy to measure in milliseconds. Their business impact is harder to quantify but far more significant,&nbsp;especially at enterprise scale, where data grids&nbsp;aren&#8217;t&nbsp;decorative&nbsp;UI elements but the primary interface through which analysts, traders, and operations teams do their work.&nbsp;<\/p>\n\n\n\n<p>Performance issues in data grids generate a specific and frustrating category of support&nbsp;tickets: ones that are hard to reproduce, hard to diagnose, and hard to close. &#8220;The grid freezes when I sort&#8221; is not a bug with a stack trace.&nbsp;It&#8217;s&nbsp;a symptom of a pipeline that blocks the main thread for several seconds under real-world data volumes.&nbsp;<\/p>\n\n\n\n<p>Ignite UI supports remote data binding&nbsp;with&nbsp;sorting and filtering&nbsp;that&nbsp;can be delegated to a server rather than executed client-side. For teams that adopted remote operations primarily because client-side performance was inadequate, these optimizations change the calculus. Client-side sorting at 1M rows now completes in under half a second. For many enterprise datasets that previously pushed teams toward server-side delegation, the client-side pipeline is now fast enough to reconsider that decision.&nbsp;<\/p>\n\n\n\n<p>In enterprise environments&nbsp;&#8211;&nbsp;particularly financial services&nbsp;&#8211;&nbsp;perceived responsiveness directly influences platform adoption. 
Moving a sort from 3.38s to 0.42s&nbsp;isn&#8217;t&nbsp;just an 8\u00d7 improvement in isolation.&nbsp;It&#8217;s&nbsp;the difference between an interaction that interrupts a workflow and one that&nbsp;doesn&#8217;t&nbsp;register as a delay at all. That distinction matters when an end user is deciding whether the tool is worth using.&nbsp;<br>&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"lessons-learned-what-wed-do-again-and-differently\">Lessons Learned: What We&#8217;d Do Again (and&nbsp;Differently)&nbsp;<\/h2>\n\n\n\n<p>The before and after numbers in this post are clean. The process that produced them&nbsp;wasn&#8217;t.&nbsp;Here&#8217;s what that process actually looked like.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Nothing Was Guaranteed Upfront&nbsp;<\/h3>\n\n\n\n<p>Going into this work, there&nbsp;was no certainty that any of these optimizations would produce meaningful results. The&nbsp;Schwartzian&nbsp;transform is a well-known technique. However,&nbsp;&#8220;well-known&#8221;&nbsp;doesn&#8217;t&nbsp;mean &#8220;guaranteed to help in this context.&#8221; The iterative grouping stack looked promising on paper, but recursive-to-iterative refactors have a history of introducing subtle edge cases that only appear under specific data shapes.&nbsp;<\/p>\n\n\n\n<p>The approach was deliberately incremental: tackle one problem at a time, measure, then decide whether to continue. The sorting pipeline came first. When the numbers came back&nbsp;&#8211;&nbsp;3.38s to 0.42s on a string sort&nbsp;&#8211;&nbsp;it&nbsp;validated&nbsp;the direction and justified continuing into grouping and filtering. If the first optimization had shown marginal gains, the strategy would have changed.&nbsp;<\/p>\n\n\n\n<p>This matters because performance work is often planned as if the outcomes are known in advance. They&nbsp;aren&#8217;t. 
The right posture is hypothesis, measurement, decision,&nbsp;repeat.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Memory Trade-off&nbsp;<\/h3>\n\n\n\n<p>The&nbsp;Schwartzian&nbsp;transform&nbsp;isn&#8217;t&nbsp;free. It&nbsp;allocates&nbsp;an intermediate array of [record, value] pairs upfront &#8211; one entry per record. At 1M rows,&nbsp;that&#8217;s&nbsp;a non-trivial memory overhead before the sort even begins.&nbsp;<\/p>\n\n\n\n<p>This was a conscious trade-off: accept higher peak memory usage in exchange for&nbsp;eliminating&nbsp;<math data-latex=\"O(n log n)\"><semantics><mrow><mi>O<\/mi><mo form=\"prefix\" stretchy=\"false\">(<\/mo><mi>n<\/mi><mi>l<\/mi><mi>o<\/mi><mi>g<\/mi><mi>n<\/mi><mo form=\"postfix\" stretchy=\"false\">)<\/mo><\/mrow><annotation encoding=\"application\/x-tex\">O(n log n)<\/annotation><\/semantics><\/math> resolver calls. For the use cases this library targets&nbsp;&#8211;&nbsp;enterprise grids running in modern browsers on capable hardware&nbsp;&#8211;&nbsp;the speed gains are significant, and the memory cost is acceptable.&nbsp;<\/p>\n\n\n\n<p>But&nbsp;it&#8217;s&nbsp;worth naming explicitly: if memory-constrained environments ever become a primary target, the&nbsp;Schwartzian&nbsp;transform would need to be revisited. Speed and memory pull in opposite directions here, and the current implementation chose speed.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Benchmarks Must Reflect Real Usage&nbsp;<\/h3>\n\n\n\n<p>The benchmark suite for this work used synthetic datasets at 1M rows&nbsp;(generated records with controlled column types and value distributions).&nbsp;That&#8217;s&nbsp;the right starting point for isolating algorithmic performance, but it has a ceiling.&nbsp;<\/p>\n\n\n\n<p>The two issues that&nbsp;actually prompted&nbsp;this work came from a real customer: ESF dialog open time and ESF apply time were reported as blocking problems in production. 
When those tickets arrived, the synthetic benchmarks confirmed the problem&nbsp;&#8211;&nbsp;but the problem had existed long before the ticket. It took a real-world usage pattern to surface it.&nbsp;<\/p>\n\n\n\n<p>The lesson is straightforward: synthetic benchmarks are good at measuring scenarios you already know to test. Customer data finds the ones you&nbsp;didn&#8217;t&nbsp;think to include. Both are necessary, and the benchmark suite should evolve to incorporate real-world usage patterns as they surface,&nbsp;not just&nbsp;synthetic&nbsp;worst cases.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Performance Work Is Never Done&nbsp;<\/h3>\n\n\n\n<p>The improvements in this post are real and significant.&nbsp;They&#8217;re&nbsp;also&nbsp;a snapshot. The data pipeline is faster today than it was six months ago. Six months from now, known areas such as&nbsp;date parsing and virtualization&nbsp;will look like the sorting pipeline looked before this work. They will be&nbsp;functional,&nbsp;but with room for improvement that&nbsp;hasn&#8217;t&nbsp;been addressed yet.&nbsp;<\/p>\n\n\n\n<p>That&#8217;s&nbsp;not a failure of the current work.&nbsp;It&#8217;s&nbsp;the nature of performance engineering. The baseline&nbsp;moves,&nbsp;customer data volumes grow, and the definition of &#8220;fast enough&#8221; shifts with it. The value of this round of optimizations&nbsp;isn&#8217;t&nbsp;just the milliseconds saved.&nbsp;It&#8217;s&nbsp;the process&nbsp;established&nbsp;for finding and closing the next gap.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"whats-next-for-ignite-ui-grid-performance\">What&#8217;s Next for Ignite UI Grid Performance&nbsp;<\/h2>\n\n\n\n<p>The optimizations in this post represent one focused round of performance work,&nbsp;not a closing statement on the topic. 
Several areas are already in motion, and more are being actively explored.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What&#8217;s Already Improved&nbsp;<\/h3>\n\n\n\n<p>Virtualization performance has seen improvements alongside the sorting, grouping, and filtering work covered in this post. Row and column virtualization is the foundation that makes large dataset&nbsp;rendering&nbsp;feasible. All the improvements&nbsp;there&nbsp;compound with the data pipeline gains, meaning the grid is faster both at processing data and at&nbsp;rendering&nbsp;it.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What&#8217;s Still Being Worked On&nbsp;<\/h3>\n\n\n\n<p>Date parsing&nbsp;remains&nbsp;an area with known room for improvement. The sorting and ESF results for date and time columns are dramatically better than before, but&nbsp;they&#8217;re&nbsp;still slower than number columns in ways that trace back to how date strings are parsed. More targeted work on the parsing layer is the logical next step.&nbsp;<\/p>\n\n\n\n<p>Bundle size is an ongoing focus. A faster grid that ships more JavaScript than necessary works against itself,&nbsp;particularly for teams where&nbsp;initial&nbsp;load time is as important as runtime performance. Reducing the footprint of the grid without sacrificing capability is a continuous balancing act.&nbsp;<\/p>\n\n\n\n<p>Grid API refinement continues in parallel.&nbsp;It\u2019s&nbsp;not directly a performance concern, but it is connected to one: a cleaner API reduces the surface area where performance-sensitive code paths get invoked in unintended ways.&nbsp;<\/p>\n\n\n\n<p>Runtime performance more broadly&nbsp;&#8211;&nbsp;rendering cost, change detection pressure, and interaction responsiveness under high-frequency updates&nbsp;&#8211;&nbsp;remains&nbsp;an open area&nbsp;of exploration. 
No specific claims, but&nbsp;it&#8217;s&nbsp;on the radar.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Share Your Feedback on Performance&nbsp;<\/h3>\n\n\n\n<p>Each performance improvement raises the baseline and expectations. What was once&nbsp;slow&nbsp;becomes fast, and new bottlenecks eventually appear.&nbsp;<\/p>\n\n\n\n<p>That\u2019s&nbsp;why we value&nbsp;useful feedback&nbsp;from&nbsp;real-world usage. If&nbsp;you&#8217;re&nbsp;using Ignite UI grids in production and hit performance issues, open an issue on&nbsp;<a href=\"https:\/\/github.com\/IgniteUI\/igniteui-angular\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub<\/a>. Real scenarios and reproducible cases help us&nbsp;identify&nbsp;the next opportunities for improvement.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"closing-performance-as-a-promise-not-a-bullet-point\">Closing: Performance as a Promise,&nbsp;Not&nbsp;a Bullet Point&nbsp;<\/h2>\n\n\n\n<p>Every grid library lists performance as a feature. &#8220;Handles millions of rows&#8221; appears in comparison tables alongside other features&nbsp;as&nbsp;a checkbox, not a commitment.&nbsp;<\/p>\n\n\n\n<p>There is a difference between a grid that technically handles large datasets and one that handles them without making users wait. That difference&nbsp;doesn&#8217;t&nbsp;show up in a feature&nbsp;list. It shows up when a user clicks a column header or opens a filter dialog and either gets an immediate response or watches the UI freeze.&nbsp;<\/p>\n\n\n\n<p>The work in this post was driven by that distinction. Not by a marketing requirement &#8211; by a real customer hitting real performance walls, and by the recognition that &#8220;it works&#8221; and &#8220;it&#8217;s fast&#8221; are not the same claim. 
The&nbsp;Schwartzian&nbsp;transform, the iterative grouping stack, the single-pass ESF pipeline &#8211; none of it was obvious upfront, none of it was guaranteed to work, and all of it required measurement to justify.&nbsp;<\/p>\n\n\n\n<p>Performance&nbsp;isn&#8217;t&nbsp;a feature you ship and move on from.&nbsp;It&#8217;s&nbsp;a continuous obligation to the developers and end users who depend on these components to do real work, at real scale, without the UI getting in the way.&nbsp;<\/p>\n\n\n\n<p>We intend to keep meeting it.&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Grid performance\u00a0isn\u2019t\u00a0just about speed\u00a0here.\u00a0It&#8217;s\u00a0about consistency under\u00a0heavy data\u00a0load. When a grid freezes during data operations, it\u00a0feels\u00a0slow\u00a0and\u00a0unreliable. In real-time decision-making workflows, that unreliability becomes a liability.\u00a0<\/p>\n","protected":false},"author":175,"featured_media":3673,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[17],"tags":[]}