Limitations of Android's hardware encoder
1. Color format issues
When initializing MediaCodec, you pass a MediaFormat object to configure(). When the codec is used as an encoder, this MediaFormat usually specifies the basics: video width and height, frame rate, bitrate, I-frame interval, and so on. Beyond those there is one more important setting: the color format of the YUV frames the encoder accepts as input. This matters because YUV comes in many different color formats depending on the subsampling ratio and the ordering of the U and V components, and the YUV frames an Android camera delivers in onPreviewFrame are, absent any extra configuration, almost always NV21. Unfortunately, Google designed the MediaCodec API rather ungenerously, hewing too close to Android's HAL layer, with the result that not every device's MediaCodec accepts NV21 as an encoder input format!
So when initializing MediaCodec, we need to query, via codecInfo.getCapabilitiesForType, which YUV formats the device's MediaCodec implementation actually supports as input. Generally speaking, at least on 4.4+ systems, these two formats are supported on most devices:
```
MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar
MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar
```
These correspond to YUV420P and NV21 respectively. If a device only supports YUV420P, the camera's NV21 output must first be converted to YUV420P before it is fed to the encoder; otherwise the resulting video shows up as garbage blocks or with scrambled colors.
This counts as a modest but very common pitfall; practically everyone who uses MediaCodec for video encoding runs into it.
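As an illustration of the conversion step, here is a minimal, unoptimized sketch (our own helper, not part of any Android API; production code would typically use libyuv or a GPU path). NV21 stores the full Y plane followed by interleaved V/U pairs, while YUV420P (I420) stores Y, then all U, then all V:

```java
public final class Nv21Converter {
    // Convert an NV21 frame (Y plane + interleaved V/U pairs) into
    // I420/YUV420P (Y plane, then U plane, then V plane).
    // Width and height are assumed to be even.
    public static byte[] nv21ToI420(byte[] nv21, int width, int height) {
        int ySize = width * height;
        int uvSize = ySize / 4;
        byte[] i420 = new byte[ySize + 2 * uvSize];
        // The Y plane is identical in both layouts.
        System.arraycopy(nv21, 0, i420, 0, ySize);
        // NV21 interleaves V first, then U; split them into separate planes.
        for (int i = 0; i < uvSize; i++) {
            i420[ySize + i] = nv21[ySize + 2 * i + 1];        // U plane
            i420[ySize + uvSize + i] = nv21[ySize + 2 * i];   // V plane
        }
        return i420;
    }
}
```

The conversion touches every chroma byte, so on large frames this per-byte Java loop is a measurable cost; that is why optimized native or GPU conversion paths are preferred in practice.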
2. Very limited encoder feature support
When using MediaCodec to encode an H.264 stream, H.264 offers several quality settings covering compression ratio and bitrate, most notably Profile (baseline, main, high), Profile Level, and Bitrate mode (CBR, CQ, VBR). Configured sensibly, these let you squeeze a higher compression ratio out of the same bitrate and thus improve video quality. Android provides corresponding APIs: the following keys can be set on the MediaFormat:
```
MediaFormat.KEY_PROFILE
MediaFormat.KEY_LEVEL
MediaFormat.KEY_BITRATE_MODE
```
The problem is that on most phones the Profile, Level, and Bitrate mode settings are simply not supported: even if you set them, they never take effect. Set the Profile to high, for example, and the output video still comes out as Baseline.
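For reference, a configuration sketch of these keys (our own fragment, assuming the "video/avc" MIME type; KEY_PROFILE and KEY_BITRATE_MODE require API 21, KEY_LEVEL API 23, and as described here many devices ignore them anyway):

```java
// Configuration sketch (uses android.media.MediaFormat / MediaCodecInfo;
// runs only on Android). On many pre-7.0 devices the profile/level/bitrate
// mode requests below are silently ignored and the encoder emits Baseline.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
format.setInteger(MediaFormat.KEY_PROFILE,
        MediaCodecInfo.CodecProfileLevel.AVCProfileHigh);      // API 21+
format.setInteger(MediaFormat.KEY_LEVEL,
        MediaCodecInfo.CodecProfileLevel.AVCLevel31);          // API 23+
format.setInteger(MediaFormat.KEY_BITRATE_MODE,
        MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_CBR);  // API 21+
```
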
On devices below 7.0 this problem is all but guaranteed. One likely cause is that Android hardcoded the profile setting at the source level:
```
// XXX
if (h264type.eProfile != OMX_VIDEO_AVCProfileBaseline) {
    ALOGW("Use baseline profile instead of %d for AVC recording",
            h264type.eProfile);
    h264type.eProfile = OMX_VIDEO_AVCProfileBaseline;
}
```
Android only removed this hardcode in 7.0:
```
if (h264type.eProfile == OMX_VIDEO_AVCProfileBaseline) {
    ...
}
```
This issue arguably is an indirect reason why MediaCodec's output quality tends to be low: at the same bitrate it is hard to obtain video quality comparable to software encoding, or even to iOS.
3. 16-pixel alignment requirement
As mentioned earlier, the MediaCodec API was designed too close to the HAL layer. On many SoCs the buffer handed to MediaCodec is passed straight through to the SoC without any preprocessing. When encoding H.264, the coding block size is generally 16x16, so if you configure a width or height that is not a multiple of 16, say 960x540, the encoded video on some chips comes out completely garbled!
Clearly this again comes down to vendors' implementations doing no validation or preprocessing of incoming data. In our experience, Huawei and Samsung SoCs hit this problem most often, and some early SoCs from other vendors show it too. The usual fix is simply to round the configured width and height up to multiples of 16.
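The rounding itself is a one-liner; a sketch with a helper name of our choosing:

```java
public final class Align {
    // Round a dimension up to the next multiple of 16 to satisfy the
    // encoder's macroblock alignment (e.g. 540 -> 544, 960 -> 960).
    public static int alignTo16(int dimension) {
        return (dimension + 15) & ~15;
    }
}
```

Note that padding 540 up to 544 means the frame carries four extra rows; these are typically filled with replicated edge pixels, and the display size is conveyed separately via crop metadata.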
4. Software encoders
Besides encoding with MediaCodec, the other popular approach is software encoding with ffmpeg plus x264/openh264: ffmpeg handles preprocessing of the video frames, while x264 or openh264 serves as the actual video encoder.
- x264 is widely considered the fastest production-grade video encoder on the market today, and it supports essentially all H.264 features. With sensible parameters it can deliver both good compression and good encoding speed. Space does not permit a discussion of H.264 parameter tuning here; for those interested, these two articles on tuning x264's encoding parameters are worth a read:
- https://www.nmm-hd.org/d/index.php?title=X264%E4%BD%BF%E7%94%A8%E4%BB%8B%E7%BB%8D&variant=zh-cn
- http://www.cnblogs.com/wainiwann/p/5647521.html
Compared with x264, however, openh264's support for advanced H.264 features is poor:
- Profile support stops at baseline, level 5.2
- Multithreaded encoding is slice-based only; frame-based multithreading is not supported
In terms of encoding speed openh264 is no faster than x264 either; its biggest advantage is probably simply that it is free to use.
Parameters supported by Android encoders
Supported Media Formats
Video encoding recommendations
The table below lists the Android media framework video encoding profiles and parameters recommended for playback using the H.264 Baseline Profile codec. The same recommendations apply to the Main Profile codec, which is only available in Android 6.0 and later.
| | SD (Low quality) | SD (High quality) | HD 720p (N/A on all devices) |
|---|---|---|---|
| Video resolution | 176 x 144 px | 480 x 360 px | 1280 x 720 px |
| Video frame rate | 12 fps | 30 fps | 30 fps |
| Video bitrate | 56 Kbps | 500 Kbps | 2 Mbps |
| Audio codec | AAC-LC | AAC-LC | AAC-LC |
| Audio channels | 1 (mono) | 2 (stereo) | 2 (stereo) |
| Audio bitrate | 24 Kbps | 128 Kbps | 192 Kbps |
The table below lists the Android media framework video encoding profiles and parameters recommended for playback using the VP8 media codec.
| | SD (Low quality) | SD (High quality) | HD 720p (N/A on all devices) | HD 1080p (N/A on all devices) |
|---|---|---|---|---|
| Video resolution | 320 x 180 px | 640 x 360 px | 1280 x 720 px | 1920 x 1080 px |
| Video frame rate | 30 fps | 30 fps | 30 fps | 30 fps |
| Video bitrate | 800 Kbps | 2 Mbps | 4 Mbps | 10 Mbps |
CamcorderProfile
Retrieves the predefined camcorder profile settings for camcorder applications. These settings are read-only.
The compressed output from a recording session with a given CamcorderProfile contains two tracks: one for audio and one for video.
Each profile specifies the following set of parameters:
- The file output format
- Video codec format
- Video bit rate in bits per second
- Video frame rate in frames per second
- Video frame width and height
- Audio codec format
- Audio bit rate in bits per second
- Audio sample rate
- Number of audio channels for recording
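As a sketch of how these read-only settings are retrieved (standard CamcorderProfile fields; actual values vary per device):

```java
// Sketch (uses android.media.CamcorderProfile; runs only on Android).
// Read the device's predefined high-quality recording settings; all fields
// are read-only and device-dependent.
CamcorderProfile profile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
int fileFormat      = profile.fileFormat;      // e.g. MediaRecorder.OutputFormat.MPEG_4
int videoCodec      = profile.videoCodec;      // e.g. MediaRecorder.VideoEncoder.H264
int videoBitRate    = profile.videoBitRate;    // bits per second
int videoFrameRate  = profile.videoFrameRate;  // frames per second
int width           = profile.videoFrameWidth;
int height          = profile.videoFrameHeight;
int audioCodec      = profile.audioCodec;
int audioBitRate    = profile.audioBitRate;    // bits per second
int audioSampleRate = profile.audioSampleRate;
int audioChannels   = profile.audioChannels;
```
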
Common Android encoder questions
MediaCodec KEY_FRAME_RATE seems to be ignored
The short version: it comes down to the frame rate of the input actually fed to the encoder.
I am trying to modify the source for screenrecord in android 4.4 and lower the captured frame rate, but no matter what value I put in:
```
format->setFloat("frame-rate", 5);
```
the result is always the same (a very high frame rate). Is the encoder ignoring this property? How can I control the frame rate?
The frame-rate value is not ignored, but it doesn’t do what you want.
The combination of frame-rate and i-frame-interval determines how often I-frames (also called “sync frames”) appear in the encoded output. The frame rate value might also play a role in meeting the bitrate target on some devices, but I’m not sure about that (see e.g. this post).
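In other words, the declared frame-rate only feeds into the sync-frame spacing arithmetic; a toy sketch of that relationship (our own helper; exact behavior varies across OMX implementations):

```java
public final class SyncFrameSpacing {
    // Frames between sync frames for a declared frame rate (frames/second)
    // and I-frame interval (seconds). This is derived from the declared
    // frame-rate, not from input timestamps, which is why lowering the
    // declared value does not thin out the encoder's output.
    public static int framesBetweenSyncFrames(int frameRate, int iFrameIntervalSec) {
        return frameRate * iFrameIntervalSec;
    }
}
```
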
The MediaCodec encoder does not drop frames. If you want to reduce the frame rate, you have to do so by sending fewer frames to it.
The screenrecord command doesn’t “sample” the screen at a fixed frame rate. Instead, every frame it receives from the surface compositor (SurfaceFlinger) is sent to the encoder, with an appropriate time stamp. If screenrecord receives 60 frames per seconds, you’ll have 60fps output. If it receives 10 frames in quick succession, followed by nothing for 5 seconds, followed by a couple more, you’ll have exactly that in the output file.
You can modify screenrecord to drop frames, but you have to be a bit careful. If you try to reduce the maximum frame rate from 60fps to 30fps by dropping every-other frame, you run the risk that in a “frame0 - frame1 - long_pause - frame2” sequence you’ll drop frame1, and the video will hold on frame0 instead, showing a not-quite-complete animation. So you need to buffer up a frame, and then encode or drop frame N-1 if the difference in presentation times between that and frame N is ~17ms.
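The buffering rule described above can be sketched as a small filter (names and thresholds are ours; timestamps in microseconds). It holds each frame for one step and drops the held frame only when the next frame follows within ~17ms, so the last frame before a pause is always encoded:

```java
import java.util.ArrayList;
import java.util.List;

public final class FrameDropper {
    // ~17ms: one frame period at 60fps. A held frame spaced wider than this
    // from its successor ends an animation burst and must not be dropped.
    private static final long CLOSE_SPACING_US = 17_000;

    private long pendingPtsUs = -1;       // frame N-1, held back one step
    private boolean lastWasEncoded = true;
    public final List<Long> encodedPts = new ArrayList<>();

    // Submit frame N; decides the fate of the held frame N-1.
    public void submit(long ptsUs) {
        if (pendingPtsUs >= 0) {
            boolean closeToNext = ptsUs - pendingPtsUs <= CLOSE_SPACING_US;
            // Drop at most every other frame, and only when the next frame
            // follows immediately behind the held one.
            if (closeToNext && lastWasEncoded) {
                lastWasEncoded = false;            // drop frame N-1
            } else {
                encodedPts.add(pendingPtsUs);      // encode frame N-1
                lastWasEncoded = true;
            }
        }
        pendingPtsUs = ptsUs;
    }

    // Flush the final held frame at end of stream.
    public void flush() {
        if (pendingPtsUs >= 0) {
            encodedPts.add(pendingPtsUs);
            pendingPtsUs = -1;
        }
    }
}
```

Applied to a steady 60fps burst this encodes every other frame; applied to a "frame0 - frame1 - long_pause" sequence it drops frame0 and keeps frame1, so the pause holds on the completed animation rather than a half-finished one.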
The tricky part is that screenrecord, in its default operating mode, directs the frames to the encoder without touching them, so all you see is the encoded output. You can’t arbitrarily drop individual frames of encoded data, so you really want to prevent the encoder from seeing them in the first place. If you use the screenrecord v1.1 sources you can tap into “overlay” mode, used for --bugreport, to have the frames pass through screenrecord on their way to the encoder.
In some respects it might be simpler to write a post-processor that reduces the frame rate. I don’t know how much quality would be lost by decoding and re-encoding the video.