02/22/2023
The client wants to ensure that Methods called on the server that have physical consequences take priority over all other requests. For example, if a server has 10 active client sessions, 9 of which send read/write requests while one sends a move command for a robotic arm, the move command should be executed first and the read/write operations completed afterwards.
This makes sense for safety reasons: I don't want to wait on 9 read operations before the server gets to the conveyor_belt_stop() command, which was issued to prevent a person from getting injured.
Request management is handled internally by the Server SDK, so there is no way to implement custom request-handling policies.
I studied the code of the .NET SDK, and I saw that it adds requests to a Queue&lt;T&gt; and uses worker threads to satisfy them. Since Queue&lt;T&gt; is a FIFO queue, it is impossible to rearrange the requests to favor the ones I want.
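To make concrete what I mean by "rearranging requests": here is a minimal sketch of the kind of priority dispatch I was hoping for (Python as a stand-in for the SDK's C# Queue&lt;T&gt;; the requests and priorities are made up for illustration):

```python
import heapq

# Hypothetical requests: (priority, sequence, name). A lower priority value
# means more urgent; the sequence number keeps FIFO order among requests of
# equal priority.
requests = [
    (1, 0, "read tag A"),
    (1, 1, "write tag B"),
    (0, 2, "conveyor_belt_stop()"),  # safety command, should jump the queue
]

heap = []
for r in requests:
    heapq.heappush(heap, r)

dispatch_order = [heapq.heappop(heap)[2] for _ in range(len(heap))]
print(dispatch_order)
# the safety command comes out first, then the reads/writes in arrival order
```

With a plain FIFO queue the safety command would be served last; a priority queue like this is what the SDK's request queue does not offer.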
For now I plan to use the sample client to benchmark the latency of this queue mechanism and trust the OPC UA developers that this is the right approach, but I would like to have a better answer for my client.
Does OPC UA provide a way to define priorities for request handling from either the Client or Server side?
Do you have any idea how to handle this scenario?
The alternative would be to rewrite the Server SDK from the core libraries, which would require a monumental effort, and I see no one doing that; after all, there is a reason SDKs have been made, am I right?
Thanks
05/30/2017
1) All requests received are processed asynchronously. A later request is never blocked because an earlier request is still processing, unless there are so many requests that the thread pool is exhausted (i.e. the queue is emptied very quickly in normal operation so prioritization of the request queue is not necessary).
2) Once method processing starts the application code is called. The application code could have logic to suspend or block low priority actions while the high priority request is handled. No special SDK support is needed.
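As a rough illustration of that "suspend or block low priority actions" idea, here is a sketch of one possible shape for it (Python rather than C#, and all names are hypothetical; in practice this logic would live in your server application code):

```python
import threading

# A shared "gate" that low-priority handlers check; a high-priority handler
# closes it on entry and reopens it when done.
low_priority_gate = threading.Event()
low_priority_gate.set()  # open by default

def handle_low_priority(action):
    low_priority_gate.wait()    # pause while a high-priority call is running
    return action()

def handle_high_priority(action):
    low_priority_gate.clear()   # suspend new low-priority work
    try:
        return action()
    finally:
        low_priority_gate.set() # resume low-priority work

print(handle_high_priority(lambda: "arm stopped"))  # prints "arm stopped"
```

The point is that the prioritization lives entirely in application code invoked by the SDK, not in the SDK's request queue.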
02/22/2023
the queue is emptied very quickly in normal operation so prioritization of the request queue is not necessary
I was suspecting this, thanks for confirming.
Once method processing starts the application code is called. The application code could have logic to suspend or block low priority actions while the high priority request is handled. No special SDK support is needed.
Could you elaborate on this aspect? It's not clear to me where the application code is called and where I can write my logic.
I assume you are talking about SDK methods that can be overridden by my application, but again, I don't know which ones you are referring to.
05/30/2017
In this example the OnStart Method is called after the inputs are validated:
https://github.com/OPCFoundati.....Manager.cs
You would have many similar methods defined on your NodeManager implementation.
The NodeManager can track currently active calls and be able to suspend them if necessary.
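A minimal sketch of such tracking (a Python stand-in with hypothetical names; in the SDK this logic would live in your NodeManager implementation):

```python
import threading

class ActiveCallRegistry:
    """Tracks in-flight method calls so they can be suspended cooperatively."""

    def __init__(self):
        self._lock = threading.Lock()
        self._active = {}  # call id -> Event used as a "may run" flag

    def begin(self, call_id):
        with self._lock:
            flag = threading.Event()
            flag.set()                 # set = allowed to run
            self._active[call_id] = flag
            return flag

    def end(self, call_id):
        with self._lock:
            self._active.pop(call_id, None)

    def suspend_all(self):
        with self._lock:
            for flag in self._active.values():
                flag.clear()           # long-running calls pause on this

    def resume_all(self):
        with self._lock:
            for flag in self._active.values():
                flag.set()
```

A long-running method body would periodically call `flag.wait()` between steps, so a high-priority call can suspend it via `suspend_all()` and resume it afterwards.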
02/22/2023
Thanks to that example I came up with another idea.
What if I save the callback parameters in an object, put that object in a list for later processing (based on custom scheduling rules), and return control from the original callback only once the actual method has been executed?
Pseudocode:
private ServiceResult CommonCallback(ISystemContext context, MethodState method,
    IList&lt;object&gt; inputArguments, IList&lt;object&gt; outputArguments)
{
    // This callback is common to all methods that I want to schedule myself.
    var mc = new MethodCall(new SemaphoreSlim(0), context, method,
        inputArguments, outputArguments); // temporary parameter holder
    pendingCalls.Add(mc);                 // later handled by my scheduling code
    mc.Semaphore.Wait();                  // released by the thread that executes the call
    return mc.Result;                     // result filled in by that thread
}
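For completeness, the scheduler side of this idea could look like the following (a Python sketch of the pattern only, not the actual SDK API; all names are mine): the callback enqueues a holder object and blocks on its semaphore, while a worker pops calls in priority order, runs them, and releases the caller.

```python
import threading
import queue

class MethodCall:
    def __init__(self, priority, run):
        self.priority = priority
        self.run = run                      # the actual method body
        self.done = threading.Semaphore(0)  # released when the call finishes
        self.result = None

pending = queue.PriorityQueue()
counter = 0  # tie-breaker so equal-priority calls stay FIFO

def common_callback(priority, run):
    global counter
    mc = MethodCall(priority, run)
    pending.put((priority, counter, mc))
    counter += 1
    mc.done.acquire()                       # blocks until the worker runs the call
    return mc.result

def worker():
    while True:
        _, _, mc = pending.get()            # lowest priority value first
        mc.result = mc.run()
        mc.done.release()                   # unblock the waiting callback

threading.Thread(target=worker, daemon=True).start()
print(common_callback(0, lambda: "stop executed"))  # prints "stop executed"
```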
This looks good to me, but I have some concerns about the blocking wait.
What I mean is that while the callback is blocked, the whole session is also blocked, because a client can't call two methods simultaneously. That is not too much of a problem: if the client asked for something that takes 10 seconds, it will have to wait 10 seconds.
What about the server? Does OPC UA handle each Method call on a different thread? I think it does, and if so this idea should work fine, but can you confirm it?
Thanks
05/30/2017
The way the SDK is designed you need to complete the method in the OnCall override. This blocking call does not prevent any other request from being processed but it does block processing of additional Method calls passed in a single Call request. i.e. another Method called with a second Call request would not block.
The SDK could be enhanced to make OnCall an async operation (the SDK design pre-dates async operators in .NET) but that is a feature request that needs to be submitted to GitHub.
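The blocking behavior described above can be illustrated with a toy example (Python threads standing in for the SDK's per-request dispatch; everything here is illustrative, not SDK code): Methods inside one Call request run sequentially, while separate Call requests are dispatched independently, so one blocking method only delays methods in its own Call.

```python
import threading
import time

def process_call_request(methods, results, tag):
    for m in methods:  # sequential within a single Call request
        results.append((tag, m()))

results = []
slow_call = threading.Thread(
    target=process_call_request,
    args=([lambda: time.sleep(0.2) or "slow", lambda: "after-slow"],
          results, "call-1"))
fast_call = threading.Thread(
    target=process_call_request,
    args=([lambda: "fast"], results, "call-2"))

slow_call.start()
fast_call.start()
slow_call.join()
fast_call.join()
print(results)  # the fast Call finishes first despite starting second
```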
05/30/2017
Blocking all the Methods inside a Call shouldn't be a problem; after all, I guess the only reason a Call can specify a list of Methods is to reduce TCP/IP stack overhead, correct?
There are use cases, such as acknowledging a list of events, where passing multiple Methods in a single Call request is an important optimization. Most uses case will have one Method per Call request.