
Question about APIs for changing video modes

3 replies

spencer_schumann
Joined: 2010-09-07

How can an application set the video output resolution? The screen is represented by a HAVi HScreen, which is composed of one or more graphics, video, and background devices. Video is represented by an HVideoDevice, which has a setVideoConfiguration method that applications can use to request a specific video pixel resolution. Graphics are represented by an HGraphicsDevice, whose setGraphicsConfiguration method can likewise set a pixel resolution.
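For concreteness, here is a rough sketch (untested, requires an MHP/OCAP stack providing org.havi.ui) of how an application might request a specific video pixel resolution through the HVideoDevice API described above; error handling is abbreviated, and in practice the device must be reserved before the configuration change will succeed:

```java
import java.awt.Dimension;
import org.havi.ui.HScreen;
import org.havi.ui.HScreenConfigTemplate;
import org.havi.ui.HVideoConfigTemplate;
import org.havi.ui.HVideoConfiguration;
import org.havi.ui.HVideoDevice;

public class VideoModeRequest {
    public static void requestResolution(int width, int height) throws Exception {
        HVideoDevice video = HScreen.getDefaultHScreen().getDefaultHVideoDevice();

        // Ask for a configuration whose pixel resolution matches the request.
        HVideoConfigTemplate template = new HVideoConfigTemplate();
        template.setPreference(HScreenConfigTemplate.PIXEL_RESOLUTION,
                               new Dimension(width, height),
                               HScreenConfigTemplate.REQUIRED);

        HVideoConfiguration config = video.getBestConfiguration(template);
        if (config != null) {
            // The open question: does this also change the physical output mode?
            video.setVideoConfiguration(config);
        }
    }
}
```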

DVB MHP 1.0.3, section 13.2.1.2, says, "pixels in the HGraphicsDevice may not correspond to discrete physical pixels in the actual display device." So the graphics device could be scaled to match the current video mode. I haven't been able to find any similar statements about the video device. Is the intention that setting the video pixel resolution automatically causes the host to select a video output format with the same pixel resolution? For example, when changing the video device's pixel resolution from 640x480 to 1280x720, should the video output mode switch from 480p to 720p?

The Device Settings extension provides its own API for setting video output mode: VideoOutputSettings#setOutputConfiguration sets a specific video output to a specific mode. But if HVideoDevice can also control the mode, what is the purpose of the VideoOutputSettings API?
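For comparison, here is a sketch (untested) of the Device Settings extension path, which targets a physical output port directly. It assumes an OCAP host with DSExt, where VideoOutputPort instances also implement VideoOutputSettings; the selection of a configuration here is illustrative only:

```java
import java.util.Enumeration;
import org.ocap.hardware.Host;
import org.ocap.hardware.VideoOutputPort;
import org.ocap.hardware.device.VideoOutputConfiguration;
import org.ocap.hardware.device.VideoOutputSettings;

public class OutputModeRequest {
    public static void setFirstSupportedMode() throws Exception {
        Enumeration ports = Host.getInstance().getVideoOutputPorts();
        while (ports.hasMoreElements()) {
            VideoOutputPort port = (VideoOutputPort) ports.nextElement();
            if (port instanceof VideoOutputSettings) {
                VideoOutputSettings settings = (VideoOutputSettings) port;
                VideoOutputConfiguration[] configs =
                        settings.getSupportedConfigurations();
                if (configs.length > 0) {
                    // Unlike HVideoDevice.setVideoConfiguration, this sets
                    // the mode of the physical output itself.
                    settings.setOutputConfiguration(configs[0]);
                }
            }
        }
    }
}
```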

sarendt
Joined: 2009-07-21

First, a good example of choosing different coherent configs can be found in the HDeviceTest xlet (RI_Stack\apps\qa\org\cablelabs\xlet\HDeviceTest).

As I understand it, the host is responsible for scaling the video resolution in the coherent config to the appropriate display resolution. For example, in the RI Platform, the TV screen size is set in platform.cfg, independent of the choice of coherent config. The RI Platform then scales the coherent config video resolution into the display resolution.
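In the spirit of the HDeviceTest xlet mentioned above, selecting a coherent configuration across the video and graphics devices might look roughly like this (untested sketch, requires an MHP/OCAP stack; the devices must be reserved before setCoherentScreenConfigurations will succeed):

```java
import java.awt.Dimension;
import org.havi.ui.HGraphicsConfigTemplate;
import org.havi.ui.HScreen;
import org.havi.ui.HScreenConfigTemplate;
import org.havi.ui.HScreenConfiguration;
import org.havi.ui.HVideoConfigTemplate;

public class CoherentConfigExample {
    public static boolean selectCoherent720p() throws Exception {
        HScreen screen = HScreen.getDefaultHScreen();

        // One template per device we want to constrain to 1280x720.
        HVideoConfigTemplate videoT = new HVideoConfigTemplate();
        videoT.setPreference(HScreenConfigTemplate.PIXEL_RESOLUTION,
                             new Dimension(1280, 720),
                             HScreenConfigTemplate.REQUIRED);
        HGraphicsConfigTemplate gfxT = new HGraphicsConfigTemplate();
        gfxT.setPreference(HScreenConfigTemplate.PIXEL_RESOLUTION,
                           new Dimension(1280, 720),
                           HScreenConfigTemplate.REQUIRED);

        // Ask the screen for a coherent set and apply it atomically.
        HScreenConfiguration[] coherent = screen.getCoherentScreenConfigurations(
                new HScreenConfigTemplate[] { videoT, gfxT });
        return coherent != null
                && screen.setCoherentScreenConfigurations(coherent);
    }
}
```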

spencer_schumann
Joined: 2010-09-07

Thanks for the pointer to the test. I will try running it.

If the host scales the video to the appropriate display resolution, then what effect does the coherent config video resolution have? For example, let's say we have a 640x480 SD video stream as the input, 1920x1080 as the coherent config video resolution, and 1280x720 as the display resolution. Does this configuration imply a processing sequence like the following?

Input Stream ===DFC===> 1920x1080 Video ===Output Scaling===> 1280x720 Display

Is the intermediate 1920x1080 format visible in any way? In other words, what is the purpose of having a configurable video device resolution if the video will always end up being scaled to the display resolution? Is the host required to do the intermediate scaling to 1920x1080, or could it be bypassed as in the following diagram?

Input Stream ===DFC===> 1280x720 Display

sarendt
Joined: 2009-07-21

You have the processing sequence correct. In the case of the RI, the scaling from the video output to the TV screen is done so as to preserve aspect ratio. So the intermediate format can be visible in the sense that the "video input -> video output" transformation as well as the "video output -> TV screen" transformation can both leave letterbox or pillarbox "dead" space around the final video display.
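The aspect-preserving scale described above can be illustrated in plain Java (no MHP stack needed). This helper is hypothetical, not part of any HAVi or OCAP API; it fits a source resolution into a target resolution and reports the letterbox or pillarbox bars left over when the aspect ratios differ:

```java
public class AspectScaler {
    /** Returns { scaledWidth, scaledHeight, sideBar, topBottomBar }. */
    public static int[] fit(int srcW, int srcH, int dstW, int dstH) {
        // Uniform scale factor: the largest scale that still fits both axes.
        double scale = Math.min((double) dstW / srcW, (double) dstH / srcH);
        int w = (int) Math.round(srcW * scale);
        int h = (int) Math.round(srcH * scale);
        // Unused space is split evenly into bars on each side.
        return new int[] { w, h, (dstW - w) / 2, (dstH - h) / 2 };
    }

    public static void main(String[] args) {
        // 640x480 SD video fit into a 1280x720 display: 960x720 active
        // video with 160-pixel pillarbox bars left and right.
        int[] r = fit(640, 480, 1280, 720);
        System.out.println(r[0] + "x" + r[1] + ", bars " + r[2] + "/" + r[3]);
    }
}
```

Running the 640x480 -> 1280x720 case shows why the "dead" space appears: the 4:3 source cannot fill the 16:9 target without distortion.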

As for the logic of this set of transformations, I can only speculate that the intent is for the coherent config to have a video resolution matching that of the display device. But this is just speculation on my part -- anyone else have any ideas on this?