I have a DataTable that comes out of a DataSet, like:
Dim dt As DataTable = dsDB.Tables(0)
UltraDataSource.... = dt
How can I do this?
There's no quick method to do this; you would have to copy all of the data by looping through every row.
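If you do go that route, it looks roughly like this. This is just a sketch; "ultraDataSource1" is a placeholder name and the column/row method names are from memory, so check them against the docs:

Dim dt As DataTable = dsDB.Tables(0)

' Mirror the DataTable's schema on the UltraDataSource.
For Each dc As DataColumn In dt.Columns
    ultraDataSource1.Band.Columns.Add(dc.ColumnName, dc.DataType)
Next

' Pre-allocate the rows, then copy every cell by hand.
ultraDataSource1.Rows.SetCount(dt.Rows.Count)
For i As Integer = 0 To dt.Rows.Count - 1
    For j As Integer = 0 To dt.Columns.Count - 1
        ultraDataSource1.Rows(i).SetCellValue(j, dt.Rows(i)(j))
    Next
Next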
But why would you want to do this?
Well, I was looking at the samples and saw the 'one million records' example using the UltraDataSource. Although we generally do not load that much data, in one of the applications I have written with Infragistics controls, a manager may want that much 'breadth' to analyze against or quickly filter through. I am able to get up to around 700,000 using DataTables and standard binding, and 'imagined' the WinDataSource as an athletic, low-body-fat version of the MS options. Upon looking a little closer now, am I to understand that it is only recommended for 'read only' grid presentation? Is the trick to its lower memory consumption just the pre-allocation of the record count? My application allows editing but not record adding or deleting; couldn't it work for such an 'editable' grid?
Thanks,
Mitch
Hi Mitch,
Mitchster2 said: Upon looking a little closer now, am I to understand that it is only recommended for 'read only' grid presentation?
No, that is not correct. Where did you get that impression? I think maybe the million rows sample is read-only, but there are other samples that demonstrate how to allow updating, deleting, and adding. You just have to handle a few more events for those cases.
I think you have to look under the DataSource samples, not the WinGrid samples.
...\WinForms\2012.2\Samples.EN\DataSource\CS\Virtual Mode Sample - Extended
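If I am remembering the event names correctly (verify against that sample), the editable case looks roughly like this; SaveCellToBackEnd is a made-up placeholder for whatever persists your data:

' Rough sketch of the update path in on-demand mode. The event and
' argument names are from memory, so verify them against the sample.
Private Sub ultraDataSource1_CellDataUpdated(ByVal sender As Object,
        ByVal e As Infragistics.Win.UltraWinDataSource.CellDataUpdatedEventArgs) _
        Handles ultraDataSource1.CellDataUpdated

    ' e.Data should hold the value the user just entered; push it to
    ' your real back end keyed by row index and column key.
    SaveCellToBackEnd(e.Row.Index, e.Column.Key, e.Data)
End Sub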
In answer to your question, I got the impression from several of your responses to other forum users; neither here nor there, really.
Well, I got it all set up yesterday; I created the fields manually to imitate the DataTable I had in memory.
Again, going by your replies that came up when searching on UltraDataSource, there was no easy way to do this.
But I got it and it worked great; instantly 'standing in' perfectly for my bound DataTable. Cool.
So I extended the range up toward the big 1M I needed, and...
System.OutOfMemory while looping through my table to manually fill the UltraDataSource.
The actual problem is that the DataTable was already pretty well 'pushing the machine's memory', and then creating a similar object in a loop like that ... fail. I could go back to my DAL and change it to fill the UltraDataSource directly, or try lazy looping through LINQ instead of using the giant DataTable. I researched that a bit and found nightmare stories about how LINQ to SQL does not handle the memory management correctly there ... but I think I could make that work.
So before I do something like that, can you explain to me what the advantages of using the UltraDataSource are?
Thanks
There are several advantages. But it sounds like the one you are most interested in is the virtual mode. The point of this mode is that the UltraDataSource fires events to let you know when it needs data, instead of keeping the entire set of data in memory all at once.
If you are looping through your DataSet and populating the UltraDataSource with the entire set of data at one time, then you have missed the point. :)
What you should do is take a look at the sample and look at the events on the UltraDataSource. The code in those events like CellDataRequested goes out and gets the data for that cell or row, as needed.
The only really tricky part of this is where you get your real data from in the first place. Ideally, you don't want to load the entire set of data into your DataTable or DataSet, either. If you are going to do that, then you might as well bind the whole data set to the grid, because it's in memory anyway.
What you really want to do is set up some way to retrieve rows from your data source as needed. The sample doesn't really show this; it just generates fake data on the fly. But you will probably want to set up a better way to do this, either by retrieving one row at a time from your back end, or maybe using some caching mechanism to get the data in chunks and store it.
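Roughly, the chunked approach could look like this. This is only a sketch: FetchChunk is a made-up helper, and the event argument names are from memory, so verify them against the sample:

Private ReadOnly chunkCache As New Dictionary(Of Integer, DataTable)
Private Const ChunkSize As Integer = 1000

Private Sub ultraDataSource1_CellDataRequested(ByVal sender As Object,
        ByVal e As Infragistics.Win.UltraWinDataSource.CellDataRequestedEventArgs) _
        Handles ultraDataSource1.CellDataRequested

    ' Figure out which chunk of rows holds the requested cell.
    Dim chunkIndex As Integer = e.Row.Index \ ChunkSize

    Dim chunk As DataTable = Nothing
    If Not chunkCache.TryGetValue(chunkIndex, chunk) Then
        ' FetchChunk is a made-up placeholder: it should return
        ' ChunkSize rows from the back end, starting at the given offset.
        chunk = FetchChunk(chunkIndex * ChunkSize, ChunkSize)
        chunkCache(chunkIndex) = chunk
    End If

    ' Hand back just the one cell and let the UltraDataSource cache it.
    e.Data = chunk.Rows(e.Row.Index Mod ChunkSize)(e.Column.Key)
    e.CacheData = True
End Sub

In a real app you would also want to evict old chunks from the cache, or memory will just creep back up.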
Well, using virtual mode might have been a solution if it had been baked in at the bottom, but it wasn't.
The final solution involved doing quite a bit of testing to see what the memory usage 'shot up to', making a best-guess estimate of how many records and how much memory we can expect our users to have, doing some real field testing once we had it set up, and then sliding the record count down a bit when we had one or two users who still got the dreaded OOM error.
So before we load the data we run out and check the count of records that will be returned. If it is more than 600,000 then we politely ask the user to try to select fewer records.
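The throttle itself is nothing fancy; roughly this, with the connection string, table name, and filter clause as placeholders:

Dim recordCount As Integer
Using cn As New System.Data.SqlClient.SqlConnection(connString)
    Using cmd As New System.Data.SqlClient.SqlCommand(
            "SELECT COUNT(*) FROM WidgetHistory WHERE " & filterClause, cn)
        cn.Open()
        ' COUNT(*) is cheap compared to pulling the whole result set.
        recordCount = CInt(cmd.ExecuteScalar())
    End Using
End Using

If recordCount > 600000 Then
    MessageBox.Show("That selection would return " & recordCount.ToString("N0") &
                    " records. Please narrow the filters and try again.")
    Return
End If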
Combining this with a new 'pre-filtering' page that allows the user to limit the list to just widgets and flurps made by Ralph's or Maggie's fabrication shops let the users get the really large date ranges and all the 'departments' they wanted to see at once.
If the program were to evolve a little more, we would try to blur the switching between the preload and grid pages so the user wouldn't feel like it was such a 'hump' to go back. Oh, and by the way, it isn't :) ... we are using some fairly advanced methods to get the data quickly, and that ~gig of memory gets loaded in about 10 to 14 seconds.
Another thing to consider, Ramsey, is that if you are using the Export to Excel control, you can count on your memory usage doubling when it fires off.
I suspect that Infragistics could improve their architecture on this control to use the structures they already have in memory. After all, the output is just XML for .xlsx files.
We never really had inconsistency errors, just the big ugly System.OutOfMemory, which would bring the app down.
Another code horror story, but it seemed to have a happy ending. Agile development is a lot like playing Spore, or, as Mark Twain said about one of his books, 'It just growed.'
:) Good luck
I have the same problem here. We have an app that was developed using datatables and now we have a requirement to handle 10x the number of rows. We are able to load the datatable in an acceptable amount of time but when we try to display it in the grid we get an Out of Memory exception.
I don't suppose that you have a better example of loading rows from a datatable, do you? The example that generates a cell value doesn't help much in terms of fetching a chunk of rows.
I am hoping that we could use the UltraDataSource to feed the grid and keep (with minor modification) the code we have that operates against the datatable. I am worried about tracking RowErrors, Changed rows, sorting, filtering, copy/paste.
Ramsey
Oh and thanks for the insightful control info. I do have that version and will come back to that... Excellent.
Yes, correct on all counts. :) I will try them side by side and test. I will then just write a reasonable throttle on the counts. Nice talking to you, Mike. I will post back soon to this thread with any 'useful' performance data I collect.
Mitchster2 said: What are some of those advantages?
Well, the UltraDataSource can be populated with data at design time, not just run time.
Also, it tends to be a bit more efficient when accessing child data, because the DataSet has to evaluate a relationship every time you ask for child records and the UltraDataSource doesn't, since the child rows are defined on the parent.
Then there's the on-demand mode, of course.
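On the child data point, the idea is that the child rows hang directly off the parent band, something like this sketch (the band and column names are made up, and the ChildBands.Add signature is from memory, so double-check it against your version):

' Define a child band directly on the parent band, so child rows
' don't need a relation lookup the way a DataSet does.
Dim ordersBand As Infragistics.Win.UltraWinDataSource.UltraDataBand =
    ultraDataSource1.Band.ChildBands.Add("Orders")
ordersBand.Columns.Add("OrderID", GetType(Integer))
ordersBand.Columns.Add("Total", GetType(Decimal))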
Mitchster2 said: Chunks won't do; I have a couple of required summed columns. I am thinking of giving it one more try and using a DataReader to directly populate the UltraDataSource (that should cut mem usage in half or thereabouts). Duh :)
If you need to sum up an entire column of data, loading on-demand won't help you anyway. The grid will have to load all of the data in order to sum it.
Unless you do the summary calculation yourself. You can do that, assuming you have the latest version of the grid. We just added the ability to do external summaries in v12.2. But this would require you to load all of the data - at least for the column(s) you need to sum.
Mitchster2 said: Is there any reason I should think that the UltraDataSource might do better on memory consumption?
If you are loading all of the data, no. Memory usage will probably be about the same, I would think. Obviously, if you are loading your DataSet with all of the data and also copying the same data into the UltraDataSource, then you will have two copies of the same data in memory and so memory usage will go up, not down.
Using UltraDataSource to save memory will only be effective if you avoid loading the underlying data into memory twice.
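If you do try the DataReader route you mentioned, the idea is to stream each row straight into the UltraDataSource so there is only ever one copy in memory. A rough sketch, with the connection string and query as placeholders and the Rows.Add/SetCellValue calls from memory:

' Stream the reader straight into the UltraDataSource; no DataTable,
' so the data is only ever held once.
Using cn As New System.Data.SqlClient.SqlConnection(connString)
    Using cmd As New System.Data.SqlClient.SqlCommand(selectSql, cn)
        cn.Open()
        Using reader As System.Data.SqlClient.SqlDataReader = cmd.ExecuteReader()
            While reader.Read()
                Dim row As Infragistics.Win.UltraWinDataSource.UltraDataRow =
                    ultraDataSource1.Rows.Add()
                For j As Integer = 0 To reader.FieldCount - 1
                    row.SetCellValue(j, reader.GetValue(j))
                Next
            End While
        End Using
    End Using
End Using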