08/24/2023
We have an OPC UA Server implementation that acts as a front-end to a number of non-OPC UA data sources.
In this regard we have run into an implementation/state issue with the timestamp on DataValue.
Under normal operation we set the SourceTimestamp and StatusCode of the DataValue based on the Quality and Timestamp from the Data source.
However, if connectivity to the data source is lost, we update SourceTimestamp with the time of disconnect and StatusCode with Bad.
When connectivity is reestablished, the data source's timestamp and Quality are reapplied to SourceTimestamp and StatusCode.
The effect of this is that SourceTimestamp can jump backwards in time.
This behavior seems wrong, as it can produce an invalid time series that is not in chronological order.
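To make the behavior concrete, here is a minimal sketch in plain Python (with a hypothetical, simplified DataValue type rather than a real OPC UA stack object) of the update logic described above. The backwards jump happens when on_reconnect reapplies a source timestamp that is older than the disconnect stamp.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical, simplified DataValue; the real OPC UA type has more fields.
@dataclass
class DataValue:
    value: object
    status_code: str            # e.g. "Good" or "Bad_NotConnected"
    source_timestamp: datetime

class NodeState:
    """Sketch of the behavior described above: stamp 'now' on disconnect,
    reapply the source's own (possibly older) timestamp on reconnect."""

    def __init__(self):
        self.current = None

    def on_source_update(self, value, source_ts: datetime):
        # Normal operation: Quality and Timestamp come from the data source.
        self.current = DataValue(value, "Good", source_ts)

    def on_disconnect(self):
        # Connectivity lost: stamp the time of disconnect, mark Bad.
        last_value = self.current.value if self.current else None
        self.current = DataValue(last_value, "Bad_NotConnected",
                                 datetime.now(timezone.utc))

    def on_reconnect(self, value, source_ts: datetime):
        # Source timestamp is reapplied; if the source delivers a sample
        # older than the disconnect stamp, SourceTimestamp moves backwards.
        self.current = DataValue(value, "Good", source_ts)
```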
According to https://reference.opcfoundatio.....docs/7.7.3, the text says:
"In the case of a bad or uncertain status sourceTimestamp is used to reflect the time that the source recognized the non-good status or the time the Server last tried to recover from the bad or uncertain status."
Does the above mean that we must update the StatusCode to Good and the SourceTimestamp to the time of reconnect? The text does not say this explicitly, but it could be a way to keep the timestamps in chronological order.
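One reading of that idea, sketched below as a hypothetical reconciliation step (not something the quoted text explicitly requires), is to never publish a SourceTimestamp earlier than the one already reported, e.g. the Bad disconnect stamp:

```python
from datetime import datetime, timezone

def reconcile_timestamp(source_ts: datetime, last_reported_ts: datetime) -> datetime:
    """Hypothetical rule: never report a SourceTimestamp earlier than the
    last one already published, so the series stays monotonic even when the
    source delivers a sample older than the Bad disconnect stamp."""
    return max(source_ts, last_reported_ts)

# Example: disconnect stamped at 12:00:05 UTC, source comes back with a
# sample timestamped 12:00:01 UTC -> report 12:00:05 to stay chronological.
disconnect_ts = datetime(2023, 8, 24, 12, 0, 5, tzinfo=timezone.utc)
stale_source_ts = datetime(2023, 8, 24, 12, 0, 1, tzinfo=timezone.utc)
print(reconcile_timestamp(stale_source_ts, disconnect_ts))  # 12:00:05+00:00
```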
05/30/2017
The spec does not specify which clock to use when setting the SourceTimestamp while the status is Bad.
You could use an estimate of the data source's clock by caching a delta from the Server clock.
But this would not eliminate the risk of a discontinuity, because the data source's clock could change when it comes back online.
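A minimal sketch of that delta-caching idea, assuming the offset is sampled from incoming source timestamps while connected:

```python
from datetime import datetime, timedelta, timezone

class SourceClockEstimator:
    """Cache the offset between the data source clock and the Server clock
    while connected, and use it to estimate a source-side timestamp while
    the source is unreachable (status Bad)."""

    def __init__(self):
        self._delta = timedelta(0)  # source clock minus server clock

    def on_source_sample(self, source_ts: datetime):
        # Called while connected: record the observed clock offset.
        self._delta = source_ts - datetime.now(timezone.utc)

    def estimate_source_time(self) -> datetime:
        # Called while Bad: server time shifted by the cached delta.
        # The source may still resynchronize its clock before reconnecting,
        # so a discontinuity remains possible, as noted above.
        return datetime.now(timezone.utc) + self._delta
```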