
Channel Change Performance in RI

Joined: 2006-07-29

Tuning to a clear digital channel on our platform takes around 4-5 seconds with the RI.
1) Are there known issues, or performance optimization efforts already made in the RI, related to channel change?
2) What aspects could create such a delay in channel change with the RI, given that proprietary OCAP stacks tune in 2.5 seconds?

We are using the 1.2.2 Rel-A version.

Any pointers/directions would help.

Joined: 2004-06-19

I echo Craig's comment that we would highly value your experiences running the RI on your platform.

Joined: 2008-12-18

[Apologies for the delayed response. I was out last week.]

Re #1:

There were performance-focused efforts related to channel changing early on in the RI/ODL project. For instance, the PSI subsystem retrieves and caches PSI information for any tuned transport stream. So if you tune to a service on a transport stream that was previously tuned, the RI will utilize the PSI acquired during that tune instead of waiting to acquire the PAT/PMT. The PSI acquisition subsystem also has the ability to utilize multiple section filters to acquire PSI. This speeds acquisition of PSI if/when multiple tuners are tuned simultaneously (and the PSI isn't already cached). If you have 1.2.2 Rel-A, you should already have these enhancements.
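As a rough illustration of the caching behavior described above, consider the following minimal sketch. Note this is not the RI's actual code: `PsiCache`, `TsPsi`, and keying by tuning frequency are all invented for this example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of per-transport-stream PSI caching.
// Keyed by tuning frequency; a real implementation would also track
// PAT/PMT version numbers so a revision change invalidates the entry.
class PsiCache {
    // Placeholder for cached PAT/PMT data for one transport stream.
    record TsPsi(int patVersion, Map<Integer, Integer> pmtPidByProgram) {}

    private final Map<Integer, TsPsi> byFrequency = new ConcurrentHashMap<>();

    // On tune: reuse cached PSI if present, otherwise acquire and cache it.
    TsPsi psiForTune(int frequencyHz, java.util.function.Supplier<TsPsi> acquire) {
        return byFrequency.computeIfAbsent(frequencyHz, f -> acquire.get());
    }
}
```

On a second tune to the same transport stream, `acquire` is never invoked, which is why only the first tune of a transport stream pays the PAT/PMT acquisition cost.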

There have been a few enhancements made here and there since 1.2.2 Rel-A related to tune performance.

  • The PAT/PMT timeout logic was improved in 1.2.2 Rel-D. This resulted in significant performance improvements for channel scanning and TSID discovery times.
  • In 1.2.2 Rel-E, the NetworkInterface code underwent a refactoring that improved the handling of a variety of cases - especially those related to rapid calls to tune.
  • As of 1.2.2 Rel-H, the dispatching of tune-related and CA-related events was given a dedicated delivery queue/thread instead of using the "system" queue/thread pool. This means that internal and external components are notified of tune-related events more promptly, especially when the system is under load.
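Conceptually, the Rel-H change moves tune/CA event delivery from a shared pool onto its own ordered, single-threaded queue, roughly like this (an illustrative sketch only; `TuneEventDispatcher`, the listener interface, and the event strings are invented, not RI classes):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: tune/CA events get their own single-threaded queue so their
// delivery is not delayed behind unrelated work on a shared "system" pool.
class TuneEventDispatcher {
    interface TuneListener { void tuneEvent(String event); }

    private final List<TuneListener> listeners = new CopyOnWriteArrayList<>();
    // Dedicated thread: tune events are delivered in submission order,
    // promptly, even when the shared system pool is saturated.
    private final ExecutorService tuneQueue = Executors.newSingleThreadExecutor();

    void addListener(TuneListener l) { listeners.add(l); }

    void dispatch(String event) {
        tuneQueue.submit(() -> listeners.forEach(l -> l.tuneEvent(event)));
    }

    void shutdownAndWait() {
        tuneQueue.shutdown();
        try {
            tuneQueue.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The single-threaded executor also preserves event ordering, which a shared pool with multiple workers does not guarantee.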

Re #2:

There are a number of time-critical activities that can affect service selection performance (listed in chronological order):

  1. The tune time itself - how long it takes to achieve/signal MPE_TUNE_SYNC. Note the Rel-H stack change above that speeds the processing of this event.
  2. PAT/PMT acquisition. This only affects service selection time if/when the PSI is not cached for the selected program.
  3. The time it takes to perform CA signaling to the CableCard. CCIF requires CA-PMT exchange even when no CA descriptors are present. So this ends up being a factor with all tunes.
  4. The time it takes for live decode to be initiated. The stack and application enter the picture here with the invocation of the MediaAccessHandler and other operations.
  5. The time it takes for the first frame to be recognized and displayed by the platform. This is just the time it takes the platform to process mpeos_mediaTune() plus normal I-frame acquisition latency.
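One way to find out which of these stages dominates a slow tune is to timestamp each stage boundary and log the deltas. Below is a generic measurement sketch; the class and the stage-name strings are invented for illustration (only MPE_TUNE_SYNC and mpeos_mediaTune() come from the list above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: record a timestamp at each stage boundary of a channel change,
// then report per-stage durations to see which stage dominates the total.
class TuneTimeline {
    private final Map<String, Long> marks = new LinkedHashMap<>();

    void mark(String stage) { marks.put(stage, System.nanoTime()); }

    // Durations between consecutive marks, in milliseconds.
    Map<String, Long> deltasMs() {
        Map<String, Long> out = new LinkedHashMap<>();
        String prev = null;
        long prevT = 0;
        for (Map.Entry<String, Long> e : marks.entrySet()) {
            if (prev != null) {
                out.put(prev + " -> " + e.getKey(), (e.getValue() - prevT) / 1_000_000);
            }
            prev = e.getKey();
            prevT = e.getValue();
        }
        return out;
    }
}
```

In use, one would call `mark("tuneStart")`, `mark("TUNE_SYNC")`, `mark("psiAcquired")`, `mark("caGranted")`, `mark("firstFrame")` at the corresponding points and log `deltasMs()` once the frame is up.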

Needless to say, logging and thread scheduling also factor into this. And it's important not to gauge tune performance on the first few tunes after RI startup: class loading, SI/PSI acquisition/caching, and asynchronous subsystem initialization (DVR database initialization and home networking) all affect tuning performance at startup.

Note that ECR-1806 is designed to help identify performance issues related to service selection. But even without this EC, it should be possible to narrow in on the performance issue given an INFO- or DEBUG-level RI log. And we'd certainly welcome a dialog on any performance-related issues you have with the RI.

On the emulator, we see tuning times under 2 seconds. But this is running on a modern PC. And while there's been effort put into making the emulator platform look and operate like a real STB, there are clear differences. And we would highly value your experiences running the RI on your platform.

Joined: 2006-07-29

Thanks Craig. That was a very useful piece of info.
The major bottlenecks were the following:
1) PAT/PMT acquisition - ~500 ms
On a single-tuner platform that cannot support parallel PAT/PMT processing on multiple transport streams, we have to rely on PSI caching within the same TS. But in a typical environment one TS carries only 2-3 HD streams or 4-5 SD streams, so when the TS changes we don't see any benefit from PSI caching. Is there a way to cache PSI across transport streams? I understand that when the PMT changes the delay would be longer than a normal channel change, but PMT changes are rare in the field(?), so can this option be tried?
2) First-frame alarm - 700 ms-1 sec
We couldn't do much here; the delay has to be accepted.
3) Creation and starting of the player
Player creation and the player's state changes are asynchronous, which involves a lot of thread switching and adds to the delay.

We've put our investigation on hold for now. I'll come up with a more exhaustive analysis soon.


Joined: 2008-12-17

Hi Shobana,

It would be useful if you can email the logs so we can figure out what may be going on. You can email the logs to prasanna AT ecaspia DOT com.


Joined: 2008-12-18

Hi Shobana,

Re: #1

The PSI (service components) is cached across tunes in a couple of different ways. The native SIDB will cache it indefinitely, while the Java-level PSI cache will time entries out. But the time it takes to reacquire PSI from the native cache is minimal compared to the initial acquisition time. Of course, you have to tune to the transport stream at least once to have anything cached. In the rare case that the PAT/PMT revision changes, an SI change event will be fired.
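The two-level arrangement described above could be sketched like this. This is purely illustrative, assuming a simple TTL on the Java side; the class names, keying, and timeout are invented, not the RI's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the two-level cache: a Java-level cache whose entries expire,
// backed by a native-side cache (SIDB stand-in) that keeps entries forever.
class TwoLevelPsiCache {
    record Entry(String psi, long storedAtNanos) {}

    private final long ttlNanos;
    private final Map<Integer, Entry> javaCache = new ConcurrentHashMap<>();
    private final Map<Integer, String> nativeSidb = new ConcurrentHashMap<>();

    TwoLevelPsiCache(long ttlNanos) { this.ttlNanos = ttlNanos; }

    void store(int programKey, String psi) {
        javaCache.put(programKey, new Entry(psi, System.nanoTime()));
        nativeSidb.put(programKey, psi); // never expires
    }

    // Expired Java-level entries fall back to the native cache, which is
    // still far cheaper than a fresh over-the-air PAT/PMT acquisition.
    String lookup(int programKey) {
        Entry e = javaCache.get(programKey);
        if (e != null && System.nanoTime() - e.storedAtNanos < ttlNanos) {
            return e.psi();
        }
        return nativeSidb.get(programKey);
    }
}
```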

I'll chat with Prasanna and make sure I'm not forgetting any issues we've seen/resolved in later releases that might affect the caching.

Re: #3

Yeah, threading and class loading can bog things down - mostly on that first selection though.

I'm doing a bit of analysis of thread bottlenecks right now. Look for calls to "adjustThreadCount" in your logs. If you're seeing those regularly, then you're probably hitting some threading bottlenecks. Not a lot has been done to address this to date, so I'm hoping there's some low-hanging fruit in terms of optimizations.
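For a quick first check, simply counting how often that message shows up in a log gives a rough signal of pool growth (a trivial sketch; the surrounding log-line text is an invented example, only the "adjustThreadCount" string comes from the post above):

```java
import java.util.List;

// Sketch: count "adjustThreadCount" occurrences in RI log lines as a
// quick indicator of thread-pool growth (and hence threading bottlenecks).
class LogScan {
    static long adjustThreadCountHits(List<String> logLines) {
        return logLines.stream()
                .filter(line -> line.contains("adjustThreadCount"))
                .count();
    }
}
```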