I have implemented several XamDataCharts throughout my app. I was recently given a requirement to run the app on a tablet, and the XamDataChart makes this nearly impossible. I have two issues that have plagued me since I wrote the app in v11.1 and then 12.2.
I have recently upgraded to 14.1 and have not seen any performance boost in the XamDataChart.
1) I have a chart using a LineSeries and a CategoryXAxis. The source is an ObservableCollection of (DateTime, float). I attempt to simulate an EEG trace by updating that collection by index (so no FIFO). The collection holds 4000 points, or 8 seconds' worth of data. I update it every 50ms, so I am writing ~25 points of data plus 250 blank points to simulate a moving gap, about 20 times a second. I have even used a custom implementation of ObservableCollection that turns off notifications when I start the update and then sends a single Reset after all 275 points are written, instead of notifying as each point comes in (sketched below). The CPU usage was no different. The point is that this graph is the biggest CPU hog, and on the tablet it eventually causes the UI to become unresponsive. I am hoping someone can tell me if there is a more efficient way to create a simulated EEG graph using any of the Infragistics controls?
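For reference, here is roughly what that batching collection looks like (a simplified sketch; the class and member names are just illustrative):

using System.Collections.ObjectModel;
using System.Collections.Specialized;
using System.ComponentModel;

// Batching collection sketched from the description above; names are illustrative.
public class BulkObservableCollection<T> : ObservableCollection<T>
{
    private bool _suppressNotifications;

    // Called before a batch of index writes; notifications are swallowed until EndUpdate.
    public void BeginUpdate()
    {
        _suppressNotifications = true;
    }

    // Called after the batch; raises a single Reset so the chart re-reads the source once.
    public void EndUpdate()
    {
        _suppressNotifications = false;
        OnPropertyChanged(new PropertyChangedEventArgs("Item[]"));
        OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
    }

    protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
    {
        if (!_suppressNotifications)
            base.OnCollectionChanged(e);
    }

    protected override void OnPropertyChanged(PropertyChangedEventArgs e)
    {
        if (!_suppressNotifications)
            base.OnPropertyChanged(e);
    }
}

The 50ms timer callback wraps the 275-point write in BeginUpdate/EndUpdate, so the chart sees one Reset per tick instead of 275 individual notifications.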
2) I have another chart that can have N LineSeries, but in this case they all use the CategoryDateTimeXAxis. Each series receives points no faster than one per second; however, this graph shows all of the history. What I find is that after several hours of running, each point added takes longer and longer. I believe this is because the CategoryDateTimeXAxis sorts the data on every insert, even though I know the data is already sorted. Is there anything in the new version I can do to optimize that axis type, or a new way to use it, to make this more efficient? I have had to limit each series to 28800 points (8 hours, using the append-and-trim sketched below), as beyond that the entire UI gets noticeably choppy each second. I posted this question as a support ticket about 18 months ago, and the suggestion was to use a ScatterLineSeries instead, but that had a similar jittery effect: slightly reduced, but present all the time, even when the series only had a few points in it (plus the implementation was much uglier).
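The append logic per series is essentially this (an illustrative sketch; SamplePoint is a stand-in for my real data item):

using System;
using System.Collections.ObjectModel;

// Stand-in for my real data item.
public class SamplePoint
{
    public DateTime Time { get; set; }   // bound via DateTimeMemberPath
    public double Value { get; set; }    // bound via ValueMemberPath
}

public class SeriesBuffer
{
    private const int MaxPoints = 28800; // 8 hours at one point per second
    private readonly ObservableCollection<SamplePoint> _points = new ObservableCollection<SamplePoint>();

    public ObservableCollection<SamplePoint> Points
    {
        get { return _points; }
    }

    public void Append(DateTime time, double value)
    {
        _points.Add(new SamplePoint { Time = time, Value = value }); // the axis re-sorts on this insert
        if (_points.Count > MaxPoints)
            _points.RemoveAt(0); // drop the oldest point to keep the window bounded
    }
}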
Any new suggestions to optimize either of these scenarios?
Hi Mike,
Most of the performance cost for #1 probably comes from the measure and arrange calls for the geometry needed to render a 4000-point LineSeries. Even if you only update a small subset of the data points, that update triggers the entire series to be re-rendered. I'm looking into this specific requirement to see if there is a better way to do it.
For #2, is a CategoryDateTimeXAxis necessary here? If your data points have a fixed interval between them, then a CategoryXAxis might be a better choice. It doesn't pre-sort the data before rendering like the CategoryDateTimeXAxis does, so it should perform better. Something along the lines of the sketch below. I'll take a look at your previous support tickets to see if I can find the one dealing with this.
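Roughly like this in code-behind (a rough sketch using the standard Infragistics.Controls.Charts types; the data item with Time/Value properties is a placeholder like the one in your snippet):

using System.Collections.ObjectModel;
using Infragistics.Controls.Charts;

private XamDataChart BuildHistoryChart(ObservableCollection<SamplePoint> data)
{
    var chart = new XamDataChart();

    // Instead of a CategoryDateTimeXAxis, which sorts the source on every insert:
    // var xAxis = new CategoryDateTimeXAxis { ItemsSource = data, DateTimeMemberPath = "Time" };

    // ...a plain CategoryXAxis treats the items as evenly spaced and skips the sort:
    var xAxis = new CategoryXAxis { ItemsSource = data, Label = "{Time}" };
    var yAxis = new NumericYAxis();
    chart.Axes.Add(xAxis);
    chart.Axes.Add(yAxis);

    var series = new LineSeries
    {
        ItemsSource = data,
        ValueMemberPath = "Value",
        XAxis = xAxis,
        YAxis = yAxis
    };
    chart.Series.Add(series);

    return chart;
}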
I will get back to you shortly with more information.
Thanks Rob.
I kept using the CategoryDateTimeXAxis because I cannot guarantee that all the points will be equally spaced. It just made my life a little easier to have it take care of the spacing for me. I just wish I could tell it not to sort, as an optional optimization.
I look forward to anything I can do to optimize the EEG chart as well. I have a C++ guy I work with who is giving me a lot of flak about .NET graphing performance... :)
I put together a sample using our developer's data, and it turns out this looks more like an EEG than an EKG. Looking at your update code, though, I believe we're doing it similarly, except for the while loop and the wave-data buffer part. In my case I'm not pulling data points from a queue to overwrite the data source; I'm instantiating new data points entirely. It shouldn't matter, though; yours may be more memory-efficient that way.
One thing I notice in our developer's sample, though, is that the chart looks like a normal EEG without needing 4000 data points. I don't know what your EEG looks like with 4000 data points, but when I compare my sample with pictures I see on the internet, it looks pretty close. I attached the sample so you can take a look. If I bump the count up to 4000 data points, I do notice a jump in CPU usage. Task Manager isn't really the best tool for this, but on my laptop it stayed around 13% with 4000 data points. I then tried it on a Surface Pro tablet (an i5 at 1.7 GHz, so a bit more powerful than yours) and Task Manager showed between 30 and 40% CPU usage. From these tests I can definitely see how it could be worse on your tablet.
My conclusion from all of this is that I don't think there is anything wrong with the way you are updating the data. It comes down to the chart itself trying to render 4000 constantly changing data points on a tablet. For the tablet, you may need to experiment with the Resolution property on the LineSeries to cut down on the amount of geometry that is rendered, or see if you can get away with lowering the data point count so the chart has less to render.
Rob,
I was under the impression that the XamDataChart would generate a "best fit" line graph from a given data set, so I fed the chart all the data I receive, which is roughly 500 points per second. Regardless of how many points I add, I still have to update the chart about every 50ms to attain a smooth scrolling effect, so whether I update 275 points (25 of data + 250 for a "moving gap") or 55 points (5 of data and 50 for the moving gap, after some data processing), I didn't think it would make much difference. I will try this and see if it helps with the CPU.
Hello Mike,
I believe the "best fit" part comes into play when you have set the Resolution property to a value that allows the chart to start figuring out what the "best fit" would be. The default behavior, if it is not set, is to draw a 1-to-1 representation of the data you give it; it won't cut any corners when rendering the series lines from point to point. Increasing the Resolution value causes the chart to render the fewest points it can while still achieving a similar look.
My point from before, though, was that on a tablet there is a noticeable increase in CPU usage even in my sample, and that you may need to lower the resolution or the data point count on the chart to cut down not only on the number of points you are updating but also on the amount of geometry the chart renders (see the sketch below). I would do this for just the tablet version, though. On a desktop/laptop you should see much more acceptable performance; on my laptop, the sample never got above roughly 13%.
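Here is the kind of change I mean (a sketch only; the Resolution value of 3.0 is illustrative and would need tuning per device until the trace still looks right):

using Infragistics.Controls.Charts;

private LineSeries CreateEegSeries(System.Collections.IEnumerable eegData)
{
    // A higher Resolution lets the chart render fewer points for a similar-looking
    // line, which reduces the geometry rebuilt on every 50ms update.
    return new LineSeries
    {
        ItemsSource = eegData,      // the 4000-point buffer
        ValueMemberPath = "Value",  // placeholder member name
        Resolution = 3.0
    };
}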
I believe I understand. Thanks.
On my desktop I do see acceptable performance. I have been working with the Resolution property and decreasing the number of points I plot, but I do not see significant gains in performance versus the loss of accuracy in the graph. I did find that if I force software rendering mode I get a nice performance boost on the tablet, so this may make the app acceptable.
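In case it helps anyone else, the software rendering switch is just the standard WPF process-wide setting; I set it at startup before anything renders:

using System.Windows;
using System.Windows.Interop;

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        // Forces the entire process to render in software; it must be set
        // before any window is shown to take effect everywhere.
        RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;
        base.OnStartup(e);
    }
}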
Thanks for the help.
Mike
Interesting. I'm curious what kind of graphics processor the tablet has. I haven't really seen cases where software rendering is actually better than hardware rendering, but if the graphics processor is just that weak, then I can see it giving you a boost. I hadn't even considered that. Nice find!
We have a tablet with an Intel Atom N2600 processor, so it is very low power. I believe it has a GMA 3600 graphics chip.
Thanks for clarifying. The information in this thread should prove useful to the community.
Let me know if you have any further questions.