virtio: video: decoder: pass frame formats to input formats

On the guest side, an application using the V4L2 frontend can query
frame sizes with the VIDIOC_ENUM_FRAMESIZES ioctl, which takes a pixel
format as an argument. Querying frame sizes for an input format is
perfectly valid, but in that case virtio-video would always return the
0x0 resolution provided by crosvm, which is incorrect.

This patch fixes the issue by supplying the correct frame size ranges
in the input formats returned by the libvda decoder backend.
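
For illustration only, a minimal sketch of the bug. The FormatRange and
FrameFormat definitions below merely mimic the field names used by the
decoder backend and are assumptions, not the real crosvm types:

    // Sketch, not crosvm code: struct shapes only mirror the field names
    // seen in the decoder backend.
    #[allow(dead_code)]
    #[derive(Default, Debug, Clone, Copy)]
    struct FormatRange {
        min: u32,
        max: u32,
        step: u32,
    }

    #[allow(dead_code)]
    #[derive(Default, Debug)]
    struct FrameFormat {
        width: FormatRange,
        height: FormatRange,
        bitrates: Vec<u32>,
    }

    fn main() {
        // `vec![Default::default()]` leaves every range at 0..0, so the
        // only frame size the guest can ever learn about is 0x0.
        let defaulted = FrameFormat::default();
        println!(
            "advertised size range: {}x{} .. {}x{}",
            defaulted.width.min, defaulted.height.min,
            defaulted.width.max, defaulted.height.max,
        );
    }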

BUG=b:160440787
TEST=tast run eve arc.Video*

Change-Id: Ib12b19ca515056aa8fa9470ece34309db2475817
Reviewed-on: https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/3377642
Reviewed-by: Alexandre Courbot <acourbot@chromium.org>
Tested-by: kokoro <noreply+kokoro@google.com>
Commit-Queue: Marcin Wojtas <mwojtas@google.com>
Author: Bartłomiej Grzesik
Date: 2022-01-10 11:08:18 +00:00
Committed-by: Commit Bot
Parent: dd8a12c715
Commit: f64db6a932

@@ -273,7 +273,19 @@ impl DecoderBackend for LibvdaDecoder {
             in_fmts.push(FormatDesc {
                 mask,
                 format,
-                frame_formats: vec![Default::default()],
+                frame_formats: vec![FrameFormat {
+                    width: FormatRange {
+                        min: fmt.min_width,
+                        max: fmt.max_width,
+                        step: 1,
+                    },
+                    height: FormatRange {
+                        min: fmt.min_height,
+                        max: fmt.max_height,
+                        step: 1,
+                    },
+                    bitrates: Vec::new(),
+                }],
             });
             match profiles.entry(format) {
                 Entry::Occupied(mut e) => e.get_mut().push(profile),
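
As a usage sketch (hypothetical helper, not crosvm code; the struct
shapes again only mirror the field names visible in the diff), the
populated stepwise ranges let a guest-requested resolution be checked
against real limits instead of always collapsing to 0x0:

    // Hypothetical helper, not part of crosvm.
    struct FormatRange {
        min: u32,
        max: u32,
        step: u32,
    }

    #[allow(dead_code)]
    struct FrameFormat {
        width: FormatRange,
        height: FormatRange,
        bitrates: Vec<u32>,
    }

    // Returns true if the requested size falls inside the stepwise range.
    fn resolution_in_range(fmt: &FrameFormat, width: u32, height: u32) -> bool {
        let fits = |r: &FormatRange, v: u32| {
            v >= r.min && v <= r.max && (r.step == 0 || (v - r.min) % r.step == 0)
        };
        fits(&fmt.width, width) && fits(&fmt.height, height)
    }

    fn main() {
        // Example values standing in for what libvda might report (assumed).
        let fmt = FrameFormat {
            width: FormatRange { min: 16, max: 3840, step: 1 },
            height: FormatRange { min: 16, max: 2160, step: 1 },
            bitrates: Vec::new(),
        };
        assert!(resolution_in_range(&fmt, 1920, 1080));
        // With the old `vec![Default::default()]` every range was 0..0,
        // so nothing but 0x0 could ever match.
        assert!(!resolution_in_range(&fmt, 4096, 4096));
    }

The diff uses a step of 1, presumably because libvda exposes only
minimum and maximum dimensions, so no finer stepping information is
available to forward to the guest.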