Tuning performance in RI 1.2.1
In RI 1.2.1, after the removal of the UAL and DRI, a performance regression was introduced in tuning. It is caused by a GLib call made in Tuner.C (ri_platform/src). This has resulted in the failure of one of my tuning performance test cases, which requires tuning to complete within 1500 ms.
It is observed that whenever a tuning request is made, it takes around 1000 ms for the tuner to change state from NO_SYNC to SYNC.
Root Cause Identified:
In the normal flow, on receiving a tuning request, we forward it to the tuner module. When the tune succeeds, we check whether signal lock has occurred by polling the tuner for its status. Once the locked state is reported, we send the SYNC event.
The performance hit occurs in this polling mechanism. The polling is done with a GLib API, g_timeout_add_seconds(), which takes a timeout in seconds, a callback function, and a user-data pointer. This API arranges for the function to be called at regular intervals until the function returns FALSE. The issue is that the precision of the INITIAL TRIGGER of this callback is only in seconds, i.e. the first invocation need not happen immediately. Here, the callback is first invoked only after around 1000 ms, which makes the performance tests fail.
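For illustration, the polling pattern looks roughly like the following sketch. The type and helper names here (Tuner, tuner_is_locked, send_sync_event, poll_tuner_status) are hypothetical stand-ins, not the actual Tuner.C identifiers; only the g_timeout_add_seconds() usage reflects the issue described above.

    #include <glib.h>

    /* Hypothetical stand-ins for the real Tuner.C types and helpers. */
    typedef struct { gboolean locked; } Tuner;
    static gboolean tuner_is_locked(Tuner *t) { return t->locked; }
    static void send_sync_event(Tuner *t) { (void) t; /* notify listeners */ }

    /* Polling callback: returning TRUE keeps the timeout source active,
     * returning FALSE removes it. */
    static gboolean poll_tuner_status(gpointer data)
    {
        Tuner *tuner = (Tuner *) data;
        if (tuner_is_locked(tuner))
        {
            send_sync_event(tuner);   /* tuner is locked: report SYNC */
            return FALSE;             /* stop polling */
        }
        return TRUE;                  /* not locked yet: poll again */
    }

    static void start_lock_polling(Tuner *tuner)
    {
        /* Poll once per second. Per the GLib docs, the FIRST invocation
         * may be delayed by up to about a second, which is the regression
         * seen here. */
        g_timeout_add_seconds(1, poll_tuner_status, tuner);
    }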
The GLib API documentation for g_timeout_add_seconds() states this explicitly:
"Note that the first call of the timer may not be precise for timeouts of one second. If you need finer precision and have such a timeout, you may want to use g_timeout_add() instead. " (http://developer.gnome.org/glib/2.30/glib-The-Main-Event-Loop.html#g-timeout-add-seconds)
I have now replaced the call with g_timeout_add(), which takes the timeout interval in milliseconds, and passed 1000 ms. It works fine: the tuner changes from NO_SYNC to SYNC immediately, and the performance test cases pass consistently.
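Under the same assumptions as the sketch above, the fix amounts to a one-line change:

    static void start_lock_polling(Tuner *tuner)
    {
        /* g_timeout_add() takes the interval in milliseconds and fires
         * with millisecond precision, so the first poll is not delayed
         * the way g_timeout_add_seconds(1, ...) can be. */
        g_timeout_add(1000, poll_tuner_status, tuner);
    }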
So why is g_timeout_add_seconds() being used instead of g_timeout_add()? To avoid overburdening the processor? Or because a delay before signal lock is expected? If I use g_timeout_add() with a 1000 ms interval, it should not add much load, right? Also, right now, signal lock is happening almost instantly. Since tuning performance is very critical, can't this API be used instead?