I have a DataTable out of a DataSet, like:
Dim dt As DataTable = dsDB.Tables(0)
UltraDataSource.... = dt
How can I do this?
Yes, correct on all counts. :) I will try them side by side and test. I will then just write a reasonable throttle on the counts. Nice talking to you, Mike. I will post back soon to this thread with any 'useful' performance data I collect.
Mitchster2 said: What are some of those advantages?
Well, the UltraDataSource can populate data at design-time, not just run-time.
Also, it tends to be a bit more efficient when accessing child data, because the DataSet has to evaluate a relationship every time you ask for child records, and the UltraDataSource doesn't, since the child rows are defined on the parent.
Then there's the on-demand mode, of course.
Mitchster2 said: Chunks won't do; I have a couple of required summed columns. I am thinking of giving it one more try and using a DataReader to directly populate the UltraDataSource (that should cut memory usage in half or thereabouts). Duh :)
If you need to sum up an entire column of data, the loading on-demand won't help you, anyway. The grid will have to load all of the data in order to sum it.
Unless you do the summary calculation yourself. You can do that, assuming you have the latest version of the grid. We just added the ability to do external summaries in v12.2. But this would require you to load all of the data - at least for the column(s) you need to sum.
Mitchster2 said: Is there any reason I should think that the UltraDataSource might do better on memory consumption?
If you are loading all of the data, no. Memory usage will probably be about the same, I would think. Obviously, if you are loading your DataSet with all of the data and also copying the same data into the UltraDataSource, then you will have two copies of the same data in memory and so memory usage will go up, not down.
Using UltraDataSource to save memory will only be effective if you avoid loading the underlying data into memory twice.
What are some of those advantages?
Chunks won't do; I have a couple of required summed columns. I am thinking of giving it one more try and using a DataReader to directly populate the UltraDataSource (that should cut memory usage in half or thereabouts). Duh :)
I am well aware of the 'best practices' I am ignoring :) I have worked with 'fat data' before, like this app, but a recent requirements change is asking for a 10x boost in record counts.
What I am seeing is that the DataReader paradigm works great up to about half a million records. I have gotten 'all militant' on my memory cleanup, so that number is consistent between screens. I can sort, filter, edit... great. After 500,000, I can get System.OutOfMemoryException errors.
Is there any reason I should think that the UltraDataSource might do better on memory consumption? I dig IG controls. This app uses an awesome 'vertical grid' in a DataRepeater, which I created, that is the UI; just need ten times my original data :)
Thanks, Mike
Hi Mitch,
There are several advantages. But it sounds like the one you are most interested in is the virtual mode. The point of this mode is that the UltraDataSource fires events to let you know when it needs data, instead of keeping the entire set of data in memory all at once.
If you are looping through your DataSet and populating the UltraDataSource with the entire set of data at one time, then you have missed the point. :)
What you should do is take a look at the sample and look at the events on the UltraDataSource. The code in those events like CellDataRequested goes out and gets the data for that cell or row, as needed.
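A handler for that event can be sketched roughly like this. This assumes the `CellDataRequested` event and its args (`Row`, `Column`, `Data`, `CacheData`) from the Infragistics `Infragistics.Win.UltraWinDataSource` namespace; check the exact signatures against your version. `GetValueFromBackEnd` is a hypothetical helper standing in for your real data access.

```vb
' Hedged sketch of the virtual-mode CellDataRequested handler.
Private Sub UltraDataSource1_CellDataRequested( _
        ByVal sender As Object, _
        ByVal e As CellDataRequestedEventArgs) _
        Handles UltraDataSource1.CellDataRequested

    ' GetValueFromBackEnd is a hypothetical helper that fetches a single
    ' value for this row/column from your real data store.
    e.Data = GetValueFromBackEnd(e.Row.Index, e.Column.Key)

    ' Ask the UltraDataSource to cache the value so it does not fire
    ' this event again for the same cell.
    e.CacheData = True
End Sub
```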
The only really tricky part of this is where you get your real data from in the first place. Ideally, you don't want to load the entire set of data into your DataTable or DataSet, either. If you are going to do that, then you might as well bind the whole data set to the grid, because it's in memory anyway.
What you really want to do is set up some way to retrieve rows from your data source as needed. The sample doesn't really show this. It just generates fake data on the fly. But you will probably want to set up a better way to do this, either by retrieving one row at a time from your back end, or maybe use some caching mechanism to get the data in chunks and store it.
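One possible shape for that caching mechanism: fetch a chunk of rows around the requested index with a paged query and keep it in a dictionary. This is only a sketch; the `OFFSET ... FETCH` paging syntax needs SQL Server 2012 or later, and `MyTable`/`Id` are placeholder names.

```vb
' Chunked row cache, as suggested above. Placeholder table/column names.
Private Const ChunkSize As Integer = 1000
Private ReadOnly _cache As New Dictionary(Of Integer, Object())

Private Function GetRow(ByVal rowIndex As Integer, _
                        ByVal conn As SqlConnection) As Object()
    If Not _cache.ContainsKey(rowIndex) Then
        ' Fetch the whole chunk containing this row in one round trip.
        Dim chunkStart As Integer = (rowIndex \ ChunkSize) * ChunkSize
        Dim sql As String = _
            "SELECT * FROM MyTable ORDER BY Id " & _
            "OFFSET @skip ROWS FETCH NEXT @take ROWS ONLY"
        Using cmd As New SqlCommand(sql, conn)
            cmd.Parameters.AddWithValue("@skip", chunkStart)
            cmd.Parameters.AddWithValue("@take", ChunkSize)
            Using reader As SqlDataReader = cmd.ExecuteReader()
                Dim i As Integer = chunkStart
                While reader.Read()
                    Dim values(reader.FieldCount - 1) As Object
                    reader.GetValues(values)
                    _cache(i) = values
                    i += 1
                End While
            End Using
        End Using
    End If
    Return _cache(rowIndex)
End Function
```

You would call `GetRow` from the UltraDataSource's data-request events, and evict old chunks from `_cache` once it grows past some threshold.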
In answer to your question: I got the impression from several of your responses to other forum users; nothing definitive, really.
Well, I got it all set up yesterday; I created the fields manually to imitate the DataTable I had in memory.
Again, from your replies pulled up when searching on UltraDataSource, there was not an easy way to do this.
But I got it, and it worked great; instantly 'standing in' perfectly for my bound DataTable. Cool.
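The manual setup described here can be sketched roughly as below. This assumes the Infragistics `UltraDataSource` schema API (`Band.Columns.Add`, `Rows.SetCount`, `SetCellValue`); verify the member names against your version before relying on them. Note this is exactly the load-everything-twice pattern that runs out of memory later in the thread.

```vb
' Rough sketch: manually mirror a DataTable's schema and rows into an
' UltraDataSource. Member names are assumptions from the Infragistics
' WinForms API; check them against your installed version.
Private Sub FillFromDataTable(ByVal uds As UltraDataSource, _
                              ByVal dt As DataTable)
    ' Recreate the schema, one column at a time.
    For Each col As DataColumn In dt.Columns
        uds.Band.Columns.Add(col.ColumnName, col.DataType)
    Next

    ' Allocate the rows, then copy cell by cell.
    uds.Rows.SetCount(dt.Rows.Count)
    For i As Integer = 0 To dt.Rows.Count - 1
        For j As Integer = 0 To dt.Columns.Count - 1
            uds.Rows(i).SetCellValue(uds.Band.Columns(j), dt.Rows(i)(j))
        Next
    Next
End Sub
```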
So I extended the range up toward the big one million I needed, and...
System.OutOfMemoryException while looping through my table to manually fill the UltraDataSource.
The actual problem is that the DataTable was already pretty well 'pushing the machine's memory', and then creating a similar object in a loop like that... fail. I could go back to my DAL and change it to fill the UltraDataSource directly, or try lazy looping through LINQ instead of using the giant DataTable. I researched that a bit and found nightmare stories about how LINQ to SQL does not handle the memory management correctly there... but I think I could make that work.
So before I do something like that, can you explain to me what the advantages of using the UltraDataSource are?
Thanks
Mitch