02/03/2016
Hello,
I have gone through the implementation of aggregates as per the OPC UA specification, but I have some questions related to this implementation:
1. The implementation uses the Slicing and Region concept, which holds all the data for a slice in memory and then works with that data to calculate the aggregate, so it knows all of the required data points in advance. But consider a real field scenario where data arrives one sample at a time and there is a huge number of samples within even the smallest interval. In that case the current implementation will run into memory issues.
So is there an alternate way to process one data point at a time and still calculate the result as per the OPC UA specification?
Regards,
Saurabh
02/24/2014
Saurabh,
Yes, all of the algorithms listed in the specification can be calculated without storing all of the raw data. They do require storing some intermediate data or, depending on the calculation, the previous value. It takes a little effort to set up each calculation to run without storing all of the values, since the intermediate storage items vary based on what is being computed. For example, a Min or Max only requires that the current value be compared to the latest Max or Min and, if it is larger/smaller, the stored value is updated. The TimeAverage requires that the previous value (and timestamp) be stored, and the sum for the average can be accumulated as each new value is received. When the end of the interval is detected, the average can be calculated from the sum and the time interval.
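As an illustration, here is a minimal sketch (in Python; the class names and the simple value/timestamp input format are my own, not taken from the specification or any stack) of computing Min and TimeAverage one point at a time, keeping only the previous point in memory:

# Minimal sketch: incremental Min and TimeAverage, one point at a time.
# The (value, timestamp-in-seconds) input format and all names are
# illustrative assumptions, not from any OPC UA stack.

class StreamingMin:
    def __init__(self):
        self.current = None            # smallest value seen so far

    def add(self, value):
        if self.current is None or value < self.current:
            self.current = value

    def result(self):
        return self.current


class StreamingTimeAverage:
    """Time-weighted average; only the previous point is retained."""

    def __init__(self):
        self.prev_value = None         # last value received
        self.prev_time = None          # its timestamp (seconds)
        self.weighted_sum = 0.0        # accumulated value*time area
        self.total_time = 0.0          # accumulated duration

    def add(self, value, timestamp):
        if self.prev_value is not None:
            dt = timestamp - self.prev_time
            # Trapezoid area, i.e. linear interpolation between points.
            self.weighted_sum += 0.5 * (self.prev_value + value) * dt
            self.total_time += dt
        self.prev_value = value
        self.prev_time = timestamp

    def finish_interval(self, end_time):
        """Close the interval at end_time and return the average."""
        if self.prev_value is not None and end_time > self.prev_time:
            # Simplification: extend the last value to the interval end;
            # the spec's bounding-value rules are more involved.
            dt = end_time - self.prev_time
            self.weighted_sum += self.prev_value * dt
            self.total_time += dt
        return self.weighted_sum / self.total_time if self.total_time else None

The same add/finish shape works for aggregates like Count, Total, or Range; only the stored intermediates change.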
It is also important to note that a separate calculation is required to compute the Quality/Status associated with the aggregate, but at least there are fewer variations of that calculation.
If there are any specific Aggregates that you don't know how to calculate in a one-point-at-a-time manner, just ask about them.
Paul
Paul Hunkar - DSInteroperability
02/03/2016
Paul,
Thank you for your reply. My question is how the current implementation of the aggregate algorithms can support calculation in a one-point-at-a-time manner. As far as I can see, the implementation requires the entire data for an interval to compute the final aggregate. So what part of the implementation needs to change so that we can achieve the calculation one point at a time?
Regards,
Saurabh
02/24/2014
Saurabh,
Are you having a problem with a specific aggregate and how it would be implemented in a one-point-at-a-time manner? If so, post which aggregate it is, and I can help you with the algorithm you would need to implement.
If you are looking for someone to provide sample code for how to calculate aggregates in general, then I would suggest posting your request to GitHub.
Paul
Paul Hunkar - DSInteroperability
02/03/2016
Paul,
It's not a specific aggregate I am looking for; rather, it is the way the aggregates are calculated that I am interested in.
Let's consider any aggregate that uses interpolation, where one needs to identify the start and end bounds, and assume a data stream of 100000 data points submitted one point at a time. Now suppose the processing interval is some x seconds and contains 50000 data points. According to the current implementation, these data points may be stored in memory until the late bound for the interval occurs. But I cannot hold that many data points in memory. So in this case, how can we change the current implementation to support one-point-at-a-time execution?
Saurabh
02/24/2014
Saurabh,
So if you want to calculate an interpolated value at some interval, that is very easy. As values come in, save only the last value; check each new value to see whether an interval boundary has been crossed (or landed on), and if so, take the previously saved value and compute the value at the interval boundary. The reason I was asking about a specific aggregate is that the algorithm for calculating it on a point-by-point basis varies. In a previous post I described how to calculate a Min and Max, which is a simple compare as the values are reported. There is no shortcut: you have to look at each aggregate you are going to support and work out the calculation for it. Some require a summation and a count; some are compare operations with a saved value, or an increment of a counter or timer. If you can't figure out the point-by-point math for an aggregate, I can help you. Once you have each of them figured out, you will find that some of them overlap and you can reuse parts. A sketch of the interpolation case follows below.
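Here is a minimal sketch of that interpolation step (Python; the function names, the interval alignment, and the (time, value) stream format are illustrative assumptions): only the previous point is retained, and a boundary value is produced whenever a new point crosses or lands on a boundary.

# Sketch: emit a linearly interpolated value at each interval boundary,
# keeping only the previous point in memory. Interval length and the
# (time, value) stream format are illustrative assumptions.

def interpolate(t, p0, p1):
    """Linear interpolation at time t between points p0 and p1."""
    (t0, v0), (t1, v1) = p0, p1
    if t1 == t0:
        return v0
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def boundary_values(points, interval):
    """Yield (boundary_time, value) pairs, processing one point at a time."""
    prev = None
    next_boundary = None
    for t, v in points:
        if prev is None:
            # Align the first boundary at or after the first point.
            next_boundary = ((t // interval) + 1) * interval
        else:
            # A single new point may cross several boundaries at once.
            while next_boundary <= t:
                yield next_boundary, interpolate(next_boundary, prev, (t, v))
                next_boundary += interval
        prev = (t, v)

# Example: boundaries every 10 s; only `prev` is ever stored.
stream = [(1, 10.0), (4, 16.0), (12, 32.0), (31, 70.0)]
for boundary, value in boundary_values(stream, 10):
    print(boundary, value)   # prints (10, 28.0), (20, 48.0), (30, 68.0)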
If you are asking about bounding, then again it depends on the type of bounding and the aggregate. The key point is that if you know you are going to calculate aggregates, you need to keep the last value around (depending on the aggregate it might be two values: if the last value has a bad status, you also need to keep the last good-status value for some aggregates).
To calculate status information, you also need to track the number of good and bad values and the total time duration of good and bad data, but again this varies from aggregate to aggregate. Make a list of the aggregates that need to be supported, and based on that list you can work out the items you need to track.
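As a rough illustration of that bookkeeping (Python again; the good/bad model and the percentage helper are simplified assumptions, and the real StatusCode rules in OPC UA Part 13 are more detailed):

# Sketch of per-interval status bookkeeping: counts and durations of
# good/bad data, updated one point at a time. The simple good/bad model
# here is an illustration; see OPC UA Part 13 for the actual rules.

class IntervalStatus:
    def __init__(self):
        self.good_count = 0
        self.bad_count = 0
        self.good_time = 0.0   # seconds covered by good data
        self.bad_time = 0.0    # seconds covered by bad data
        self.prev_time = None
        self.prev_good = None

    def add(self, timestamp, is_good):
        if self.prev_time is not None:
            dt = timestamp - self.prev_time
            # Attribute the elapsed time to the previous point's quality.
            if self.prev_good:
                self.good_time += dt
            else:
                self.bad_time += dt
        if is_good:
            self.good_count += 1
        else:
            self.bad_count += 1
        self.prev_time, self.prev_good = timestamp, is_good

    def percent_good(self):
        total = self.good_time + self.bad_time
        return 100.0 * self.good_time / total if total else 0.0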
The key point to remember is that aggregates are available for subscriptions and for history. In history you have a big block of data to process, which is what the examples provide; in a subscription you get the values one at a time. The history block code might not work for one point at a time (depending on the implementation and algorithms); it would be different code (ask on GitHub if you need someone to write it). The subscription code is typically just filter code: values come in, and something is emitted into the subscription, but only once a boundary is crossed, depending on the algorithm.
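To make that filter shape concrete, here is a toy loop (Python; entirely illustrative, with Min as the aggregate) that consumes values one at a time and emits a result each time a processing-interval boundary is crossed:

# Toy "filter" loop for the subscription case: values flow in one at a
# time and an aggregate (Min, here) is emitted whenever an interval
# boundary is crossed. All names and the stream format are illustrative.

def min_filter(points, interval):
    current_min = None
    next_boundary = None
    for t, v in points:
        if next_boundary is None:
            next_boundary = ((t // interval) + 1) * interval
        while t >= next_boundary:
            # Emit the finished interval; None means it held no points.
            yield next_boundary, current_min
            current_min = None
            next_boundary += interval
        if current_min is None or v < current_min:
            current_min = v

for boundary, minimum in min_filter([(1, 5.0), (3, 2.0), (12, 9.0), (23, 1.0)], 10):
    print(boundary, minimum)   # prints (10, 2.0), (20, 9.0)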
Paul
Paul Hunkar - DSInteroperability