Sorry, I guess the confusion is that I meant the libcamera API and the FFmpeg API when I wrote "using libcamera and ffmpeg APIs". I have a C++ program that uses the libcamera API to deliver video frames to FFmpeg for filter-graph processing, encoding and muxing.
So my question is: what settings (camera, stream, ScalerCrop and/or sensor configuration) in the libcamera API will deliver frames with the same field of view as I currently get by setting the sensor to 1640x1232 and passing those frames through an FFmpeg crop filter of (4, 157)/1632x918 followed by a scale to 1280x720?
Statistics: Posted by wkeeling — Mon Dec 01, 2025 7:32 pm