Ignite Grids – Performance Tips and Tricks

Alexander Todorov / Friday, March 21, 2014

In this blog post, I would like to briefly outline some basic guidelines that can help you dramatically increase page load speed and decrease rendering time for the Ignite Grid components. We already have a performance guide as part of our documentation, but I would like to expand on it, as well as talk about some new features that weren’t originally present when the performance guide was developed. This is where you can find the original guide:

http://help.infragistics.com/Help/Doc/jQuery/2013.2/CLR4.0/HTML/igGrid_Performance_Guide.html

I will try not to repeat things that are already covered in the guide.

Optimizing the Initial Page Load

Let’s start with the basics. When you load a web page, there are several main things to be concerned with:

  • What is the size of my requests – i.e., am I downloading 10 KB or 10 MB?
  • How many requests am I making?
  • How many of those are data / scripts / CSS / images, etc.?
  • Am I using server-side optimizations, such as gzip, in order to compress data?

This is all related to optimizing your initial page load. If you don’t do that, it will make little difference whether your 1000 records in the Ignite Grid load in 100 or in 1000 milliseconds, because the overall page load experience will not be good. So how can we improve that before we focus on the more specific aspects of performance? Well, there are several straightforward things you can do:

  • Use the combined and minified versions of the Ignite scripts and CSS
  • Avoid using the $.ig.loader component
  • Only include the scripts that you will actually need. For instance, do not reference infragistics.dv.js if you don’t use any of the Data Visualization components
  • Use sprites for your own images. For example, here is a site which generates a sprite image and the corresponding CSS for you from a list of icons/images:

http://instantsprite.com/

  • Think about ways to avoid loading resources that aren’t immediately needed, and load them on demand instead. For instance, let’s say you have a tabbed home UI, and different tabs require different scripts. You can use RequireJS to load modules on demand when users change tabs (see the sketch after this list). This has trade-offs, of course, because you don’t want your tab switching to be dramatically slowed down. If your resources are large enough to cause a delay of more than 100–200 ms, it is probably better to put a nice loading indicator on your home page and load those resources upfront.
  • Enable gzip compression on the web server. This optimization alone can easily make your payload 40–50% smaller. If you’re using Apache, this can be done in the following way:

http://httpd.apache.org/docs/2.2/mod/mod_deflate.html

If you are using IIS, this can be done in the following way:

http://blogs.msdn.com/b/vivekkum/archive/2009/02/18/http-compression-in-iis-6-and-iis-7-using-service-account.aspx

http://www.iis.net/configreference/system.webserver/httpcompression

There are also a number of ways to enable dynamic compression in IIS (as opposed to compression for static content); this is outlined in the second URL above.

For a Node.js app, enabling gzip is also pretty straightforward:

http://nodejs.org/docs/v0.6.0/api/zlib.html#examples
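Here is a minimal sketch along the lines of the official zlib example, using only core modules (the file name is just a placeholder; in an Express app, the compression middleware achieves the same):

var http = require("http");
var zlib = require("zlib");
var fs = require("fs");

http.createServer(function (req, res) {
    var raw = fs.createReadStream("index.html");             // placeholder static file
    var acceptEncoding = req.headers["accept-encoding"] || "";

    if (acceptEncoding.indexOf("gzip") !== -1) {
        // the client supports gzip, so compress the response stream
        res.writeHead(200, { "Content-Encoding": "gzip" });
        raw.pipe(zlib.createGzip()).pipe(res);
    } else {
        res.writeHead(200, {});
        raw.pipe(res);
    }
}).listen(8080);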

  • Load your (minified and combined) JavaScript/CSS resources from a Content Delivery Network (CDN). Some popular recommended CDNs include:

    • Amazon CloudFront
    • MaxCDN
    • Google Cloud Storage
    • Windows Azure CDN 

  This has the advantage that end users will retrieve the resources from the closest/fastest location, and it offloads your primary servers.
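Here is what the on-demand loading mentioned above might look like with RequireJS (the module name and the tab-change hook are hypothetical):

var CHARTS_TAB = 1;

// Call this from whatever tab-change event your tab component exposes.
function onTabChanged(tabIndex) {
    if (tabIndex !== CHARTS_TAB) { return; }
    // require() downloads the module (and its dependencies) on the first
    // call only; afterwards the cached module is returned immediately.
    require(["app/chartsTab"], function (chartsTab) {
        chartsTab.render("#chartContainer");
    });
}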

Optimizing Data Transfer

The next thing you may look at is what kind of data your app transfers once the page is already loaded. This is not so much about compression and data format as about the structure and contents of the data. For instance, you should not send data that you won’t need on the client. Let’s say your grid has 4 columns, and none of them are hidden, but you bind to data whose objects have extra properties – this can dramatically increase your JSON payload, and transforming/filtering the data before sending it over the wire can help a lot. An easy rule of thumb, to begin with, is not to use autoGenerateColumns in a production environment. This not only guards you from sending lots of unnecessary object properties, but also saves time because the grid doesn’t have to analyze its data source in order to infer column types, keys, etc. Also, if you are using Entity Framework (or a similar ORM) on the server, do not bind directly to the auto-generated types; instead, create your own simple types which only include the properties that will be sent and used/rendered in the grid. For instance, let’s say you have an Order entity. You can simply create a class with the following properties:

public class Order
{
    public int productID { get; set; }
    public int quantity { get; set; }
    // possibly other properties
}

Then bind an IQueryable<Order> to the grid’s DataSource (if you use the ASP.NET MVC wrappers).
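On the client side, the equivalent is to turn off column auto-generation and declare only the columns you actually render. Here is a minimal sketch (the option names follow the igGrid API; the orders array is hypothetical):

$("#grid").igGrid({
    dataSource: orders,          // hypothetical array of Order objects
    primaryKey: "productID",
    autoGenerateColumns: false,  // don't analyze the data source to infer columns
    columns: [
        { headerText: "Product ID", key: "productID", dataType: "number" },
        { headerText: "Quantity", key: "quantity", dataType: "number" }
    ]
});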

The same guidelines apply to the hierarchical grid, but there you have one other aspect – layout generation – which can be even costlier than column generation for a flat grid. So in a production setup, it is not advisable to enable automatic generation of hierarchical grid layouts, because this may not only increase the rendering/initialization time, but also cause your app to transmit a huge extra payload that you will never render or access from your client-side logic.
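A sketch of defining the layouts explicitly instead (option names per the igHierarchicalGrid API; the data shape – customers with a child Orders collection – is hypothetical):

$("#grid").igHierarchicalGrid({
    dataSource: customers,        // hypothetical array of customer objects
    primaryKey: "customerID",
    autoGenerateColumns: false,
    autoGenerateLayouts: false,   // skip layout discovery entirely
    columns: [
        { headerText: "Customer ID", key: "customerID", dataType: "number" },
        { headerText: "Name", key: "name", dataType: "string" }
    ],
    columnLayouts: [{
        key: "Orders",            // the child collection property on each customer
        primaryKey: "orderID",
        foreignKey: "customerID",
        autoGenerateColumns: false,
        columns: [
            { headerText: "Order ID", key: "orderID", dataType: "number" },
            { headerText: "Quantity", key: "quantity", dataType: "number" }
        ]
    }]
});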

Optimizing the Grid Rendering

Last but not least, I would like to mention some ways to speed up the grid’s own rendering. Let’s talk about width and height first. When you set a height or a width for the grid, it has to do extra calculations in order to ensure everything is perfectly aligned – columns, headers, etc. If fixed headers are enabled, this puts additional overhead on the rendering, because the markup is split into two tables – one for the fixed headers area and one for the data records. Those tables are also wrapped in additional divs so that scrolling is synchronized and works out of the box. There are tons of small things we take into account in order to make all of this functionality work well, and, inevitably, this can affect performance. So if you don’t need a width/height – because you are putting the grid in a container which is scrollable anyway – don’t set them. Also, if you can go without fixed headers, that’s another way to speed up the rendering. Mixing column width units – like pixel-based and percentage-based columns – can also have a negative effect on performance because of the extra checks and reflows that need to happen in the DOM. The same applies when you set a width on only some of the columns.
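For example, instead of setting a height on the grid, you can let a surrounding container scroll (a sketch; the container and data are hypothetical, and the CSS would live in your stylesheet):

// CSS elsewhere: #gridContainer { height: 500px; overflow-y: auto; }
// The grid itself gets no width/height, so the fixed-headers table
// split and the extra alignment calculations are avoided.
$("#grid").igGrid({
    dataSource: orders,           // hypothetical data array
    autoGenerateColumns: false,
    columns: [
        // consistent, pixel-based widths on all columns
        { headerText: "Product ID", key: "productID", dataType: "number", width: "150px" },
        { headerText: "Quantity", key: "quantity", dataType: "number", width: "150px" }
    ]
});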

One of the best ways to speed up record rendering is to use virtualization. There are two types of virtualization – fixed and continuous. I recommend using continuous virtualization, because you won’t need to deal with issues related to different row heights, for instance. Also, continuous virtualization works great in scenarios like group-by and the hierarchical grid. Keep in mind that when you have virtualization enabled, you still have all of your data on the client – if it’s millions of rows, data transfer/bandwidth may become a bottleneck. In that case, you should combine server-side paging with client-side virtualization in order to handle millions of records (see the sketch below). Sometimes simply loading too many JavaScript objects (data records) in the browser, even without rendering them in the DOM, can dramatically increase memory consumption.
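Here is a minimal sketch of that combination (option names per the igGrid API; the endpoint is hypothetical):

$("#grid").igGrid({
    dataSource: "/api/orders",        // hypothetical remote endpoint
    primaryKey: "orderID",
    autoGenerateColumns: false,
    columns: [
        { headerText: "Order ID", key: "orderID", dataType: "number" },
        { headerText: "Quantity", key: "quantity", dataType: "number" }
    ],
    height: "500px",                  // virtualization needs a fixed viewport
    virtualization: true,
    virtualizationMode: "continuous", // no fixed-row-height constraints
    features: [
        // remote paging: the server returns only the requested page,
        // while virtualization keeps the DOM small on the client
        { name: "Paging", type: "remote", pageSize: 100 }
    ]
});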

When you use filtering, prefer the advanced filtering mode if you have a lot of columns. Any scenario where you render many columns may slow down your grid when specific features are enabled – filtering, updating, multi-column headers, etc. This is because those features render extra DOM in every column header, and when this DOM turns out to be a full-blown widget – like an igEditor used to filter a cell – it can significantly affect the rendering time.
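Switching the filtering UI to the advanced mode looks like this (feature options per the igGridFiltering API; other grid options omitted):

$("#grid").igGrid({
    // ... dataSource, columns, etc. ...
    features: [
        // "advanced" shows a single filter dialog on demand, instead of
        // rendering a filter editor widget in every column header
        { name: "Filtering", mode: "advanced" }
    ]
});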

Setting autoFormat to true for a column is also something you should be careful with, because the grid will then format date and number values according to predefined formatting rules so that the data renders nicely – and that extra formatting work is performed for every cell during rendering.
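If you only need formatting in a couple of places, a cheaper alternative is to specify the format explicitly just for those columns (assuming the igGrid column format option):

$("#grid").igGrid({
    // ... other options ...
    columns: [
        // format only where it is actually needed, instead of letting
        // autoFormat process every date/number column
        { headerText: "Order Date", key: "orderDate", dataType: "date", format: "dd/MM/yyyy" },
        { headerText: "Quantity", key: "quantity", dataType: "number" }
    ]
});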

I’ve already talked about autoGenerateColumns and autoGenerateLayouts, but one extra thing to keep in mind is this: if autoGenerateColumns is true (the default), you don’t have a defaultColumnWidth set, and you have some manually defined columns, then when your grid renders you may not even see the extra auto-generated columns. It will look as if the grid correctly renders only the columns you’ve defined, with correct widths – but since autoGenerateColumns is true, the extra columns are still generated and appended to the columns collection, and they are rendered with a runtime width of 0.
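To avoid this pitfall, either turn auto-generation off, or give the auto-generated columns a sensible default width (assuming the igGrid defaultColumnWidth option):

$("#grid").igGrid({
    autoGenerateColumns: true,
    defaultColumnWidth: "100px",   // auto-generated columns get a visible width
    columns: [
        { headerText: "Product ID", key: "productID", dataType: "number", width: "150px" }
    ]
    // ...
});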

Primary keys also play an important role in the grid’s performance, and whenever you can, you should set the primaryKey option to some unique field from your data model (you can additionally hide this column), as the sketches above do. If you don’t set it, the grid will need to auto-generate a key, which will slow things down a bit. This gets a lot worse in a scenario where you have persistent selection of records: if there is no stable primary key to refer to, the grid will try to calculate checksums over the record values in order to infer the mappings.

Last but not least, keep in mind that when you use the ASP.NET MVC wrapper, even if you have paging enabled, you may still end up selecting all the records for your query from the database and transferring them to the server-side logic of your app. To avoid that, bind using LINQ (an IQueryable); in that case the grid will automatically set the paging and filtering parameters on the LINQ query, making those requests really fast. You won’t keep more data around than you need to.