H264 HW encode using VideoToolbox (issue 1636083003 by emircan@chromium.org)


emi...@chromium.org

Feb 2, 2016, 1:08:47 AM2/2/16
to miu+re...@chromium.org, mca...@chromium.org, hb...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
Reviewers: miu, mcasas, hbos_chromium
CL: https://codereview.chromium.org/1636083003/

Message:
PTAL

Description:
H264 HW encode using VideoToolbox

This CL adds VTVideoEncodeAccelerator, which enables H264 encode support using
VideoToolbox on Mac platforms.

Also, it includes a refactor of common VideoToolbox classes under
video_toolbox_helpers.*.

BUG=500605
TEST= Tested AppRTC loopback with Chrome flag "--enable-webrtc-hw-h264-encoding"
on https://apprtc.appspot.com/r/196623743?debug=loopback&vsc=h264


Base URL: https://chromium.googlesource.com/chromium/src.git@master

Affected files (+763, -192 lines):
  M content/common/BUILD.gn
  M content/common/gpu/media/gpu_video_encode_accelerator.h
  M content/common/gpu/media/gpu_video_encode_accelerator.cc
  A content/common/gpu/media/vt_video_encode_accelerator.h
  A content/common/gpu/media/vt_video_encode_accelerator.cc
  A content/common/gpu/media/vt_video_encode_accelerator_unittest.cc
  M content/content_common.gypi
  M content/content_tests.gypi
  M media/base/mac/BUILD.gn
  A media/base/mac/videotoolbox_helpers.h
  A media/base/mac/videotoolbox_helpers.cc
  M media/cast/sender/h264_vt_encoder.h
  M media/cast/sender/h264_vt_encoder.cc
  M media/media.gyp


pos...@chromium.org

Feb 2, 2016, 1:15:46 AM2/2/16
to emi...@chromium.org, miu+re...@chromium.org, mca...@chromium.org, hb...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
On 2016/02/02 06:08:46, emircan wrote:
> PTAL

It would be great if we could try to use video_encode_accelerator_unittest.cc
please. Thanks!

https://codereview.chromium.org/1636083003/

emi...@chromium.org

Feb 2, 2016, 7:38:46 PM2/2/16
to miu+re...@chromium.org, mca...@chromium.org, hb...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
On 2016/02/02 06:15:38, Pawel Osciak wrote:
> On 2016/02/02 06:08:46, emircan wrote:
> > PTAL
> 
> It would be great if we could try to use video_encode_accelerator_unittest.cc
> please. Thanks!

Thanks for the notice, Pawel. I uploaded a new version using vea_unittest. Since
we don't support frame rate & fps changes or the H264 ffmpeg decoder, I decided
to define the enabled tests separately. We can add more there as functionality
comes.

Kindly ping for PTAL.

https://codereview.chromium.org/1636083003/

hb...@chromium.org

Feb 3, 2016, 1:21:10 PM2/3/16
to emi...@chromium.org, miu+re...@chromium.org, mca...@chromium.org, pos...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
This code is all very unfamiliar to me, so it would be difficult for me to do
an adequate review of it. I only had time for a quick look-through.


https://codereview.chromium.org/1636083003/diff/80001/content/common/BUILD.gn
File content/common/BUILD.gn (right):

https://codereview.chromium.org/1636083003/diff/80001/content/common/BUILD.gn#newcode238
content/common/BUILD.gn:238: "gpu/media/vt_video_encode_accelerator.h",
Is there a reason the decoder accelerator is named "_mac" and the
encoder isn't?

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/video_encode_accelerator_unittest.cc
File content/common/gpu/media/video_encode_accelerator_unittest.cc
(right):

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/video_encode_accelerator_unittest.cc#newcode1098
content/common/gpu/media/video_encode_accelerator_unittest.cc:1098: };
It seems unlikely that VTVEA would be used; does this only happen if all the
other encoders fail to initialize? What is VEAClient::CreateEncoder()
used for?

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc
File content/common/gpu/media/vt_video_encode_accelerator.cc (right):

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode103
content/common/gpu/media/vt_video_encode_accelerator.cc:103:
input_visible_size_.GetArea() / kOutputBufferSizeRatio));
About thread-safety, how is this class used across threads? Is
|child_task_runner_| not always the current thread? Is it used on
multiple threads and if not, why PostTask?

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode169
content/common/gpu/media/vt_video_encode_accelerator.cc:169: if
(compression_session_) {
Does it make sense to DCHECK instead?

https://codereview.chromium.org/1636083003/

mca...@chromium.org

Feb 3, 2016, 3:01:32 PM2/3/16
to emi...@chromium.org, miu+re...@chromium.org, hb...@chromium.org, pos...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
A few comments; I could only read so far.


https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode35
content/common/gpu/media/vt_video_encode_accelerator.cc:35: };
Recommend a 
private:
 DISALLOW_IMPLICIT_CONSTRUCTORS(InProgressFrameEncode);

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode47
content/common/gpu/media/vt_video_encode_accelerator.cc:47: };
Same here, use DISALLOW_IMPLICIT_CONSTRUCTORS
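For readers unfamiliar with the macro, the effect of DISALLOW_IMPLICIT_CONSTRUCTORS can be sketched roughly like this; the macro body below is an illustrative stand-in, not Chromium's exact definition:

```cpp
#include <type_traits>

// Illustrative stand-in for Chromium's DISALLOW_IMPLICIT_CONSTRUCTORS:
// it removes the default constructor plus copy constructor/assignment,
// so the type can only be created through constructors it declares.
#define DISALLOW_IMPLICIT_CONSTRUCTORS(TypeName) \
  TypeName() = delete;                           \
  TypeName(const TypeName&) = delete;            \
  TypeName& operator=(const TypeName&) = delete

struct InProgressFrameEncode {
  explicit InProgressFrameEncode(int frame_id) : frame_id(frame_id) {}
  int frame_id;

 private:
  DISALLOW_IMPLICIT_CONSTRUCTORS(InProgressFrameEncode);
};
```

Placing the macro in a private section, as suggested above, makes accidental default construction or copying a compile error.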

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode90
content/common/gpu/media/vt_video_encode_accelerator.cc:90: const bool
session_created = ResetCompressionSession();
No need for temporary?

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode96
content/common/gpu/media/vt_video_encode_accelerator.cc:96:
client_ptr_factory_.reset(new base::WeakPtrFactory<Client>(client));
This |client_ptr_factory_| is only used here,
do you need to hold on to it?

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode128
content/common/gpu/media/vt_video_encode_accelerator.cc:128: //
CVPixelBufferPool for the allocation of incoming video frames.
nit: s/video frames/VideoFrames/?

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode169
content/common/gpu/media/vt_video_encode_accelerator.cc:169: if
(compression_session_) {
nit: 
if (!compression_session_)
  return;
...

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode233
content/common/gpu/media/vt_video_encode_accelerator.cc:233: //
TODO(emircan): Investigate IOS constraints.
?

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.h
File content/common/gpu/media/vt_video_encode_accelerator.h (right):

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode42
content/common/gpu/media/vt_video_encode_accelerator.h:42: typedef
CoreMediaGlue::CMSampleBufferRef CMSampleBufferRef;
using CMSampleBufferRef = CoreMediaGlue::CMSampleBufferRef;
here and in the next 2 lines.

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode79
content/common/gpu/media/vt_video_encode_accelerator.h:79: const
scoped_refptr<base::SingleThreadTaskRunner> child_task_runner_;
nit: s/child_task_runner_/callback_task_runner_/ ?

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode83
content/common/gpu/media/vt_video_encode_accelerator.h:83: //
child_task_runner_.
|child_task_runner_|

https://codereview.chromium.org/1636083003/

emi...@chromium.org

Feb 4, 2016, 12:18:53 AM2/4/16
to miu+re...@chromium.org, mca...@chromium.org, hb...@chromium.org, pos...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
content/common/BUILD.gn:238: "gpu/media/vt_video_encode_accelerator.h",
On 2016/02/03 18:21:08, hbos_chromium wrote:
> Is there a reason the decoder accelerator is named "_mac" and the
encoder isn't?
I realize there is some inconsistency in this, and unfortunately I
couldn't find a straight answer in the style guides.
But looking over this folder, vt_video_decode_accelerator_mac and
dxva_video_decode_accelerator_win have platform suffixes. Although it is
CrOS specific, v4l2_video_decode_accelerator doesn't have a suffix, and
android_video_decode_accelerator uses a prefix. The ideal solution would
be to have platform folders in this directory and remove all platform
suffixes/prefixes from the names. So I decided to go in that direction, as I
am planning to touch more files in this directory soon, but I am open to
suggestions as well.

content/common/gpu/media/video_encode_accelerator_unittest.cc:1098: };
On 2016/02/03 18:21:08, hbos_chromium wrote:
> It seems unlikely that VTVEA would be used, this only happens if all
the other
> encoders fail to initialize? What is VEAClient::CreateEncoder() used
for?

Since CreateV4L2VEA() and CreateVaapiVEA() are CrOS specific, they would
be skipped.

content/common/gpu/media/vt_video_encode_accelerator.cc:35: };
On 2016/02/03 20:01:31, mcasas wrote:
> Recommend a 
> private:
>  DISALLOW_IMPLICIT_CONSTRUCTORS(InProgressFrameEncode);

Done.

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode47
content/common/gpu/media/vt_video_encode_accelerator.cc:47: };
On 2016/02/03 20:01:31, mcasas wrote:
> Same here, use DISALLOW_IMPLICIT_CONSTRUCTORS

Done.

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode90
content/common/gpu/media/vt_video_encode_accelerator.cc:90: const bool
session_created = ResetCompressionSession();
On 2016/02/03 20:01:31, mcasas wrote:
> No need for temporary?

Done. I was trying to be verbose.

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode96
content/common/gpu/media/vt_video_encode_accelerator.cc:96:
client_ptr_factory_.reset(new base::WeakPtrFactory<Client>(client));
On 2016/02/03 20:01:31, mcasas wrote:
> This |client_ptr_factory_| is only used here,
> do you need to hold on to it?

I can invalidate the weakptrs in Destroy() using it as a class member.
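The invalidate-on-Destroy pattern can be sketched with standard smart pointers; std::shared_ptr/std::weak_ptr stand in for base::WeakPtrFactory/base::WeakPtr, and the Encoder/Client classes here are purely illustrative:

```cpp
#include <memory>

struct Client { int events = 0; };

class Encoder {
 public:
  // The no-op deleter makes |client_owner_| non-owning; it only provides
  // a control block so weak pointers can observe lifetime.
  explicit Encoder(Client* client) : client_owner_(client, [](Client*) {}) {}

  std::weak_ptr<Client> GetWeakClient() { return client_owner_; }

  // After Destroy(), weak pointers handed out earlier no longer lock, so
  // late callbacks observe an expired pointer instead of a dangling one.
  void Destroy() { client_owner_.reset(); }

 private:
  std::shared_ptr<Client> client_owner_;
};
```

This mirrors why keeping the factory as a class member is useful: Destroy() can invalidate all outstanding weak pointers in one place.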

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode103
content/common/gpu/media/vt_video_encode_accelerator.cc:103:
input_visible_size_.GetArea() / kOutputBufferSizeRatio));
On 2016/02/03 18:21:08, hbos_chromium wrote:
> About thread-safety, how is this class used across threads? Is
> |child_task_runner_| not always the current thread? Is it used on
multiple
> threads and if not, why PostTask?

Thanks for pointing it out. Actually, I don't need to PostTask here. All
the methods in this class run on |child_task_runner_| except
VTVideoEncodeAccelerator::CompressionCallback(), where I still need to
post a task. I realized I was inconsistent with the thread_checker, so I am
now checking it consistently in every method as well.
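The thread-checker idea mentioned here can be sketched without Chromium's base library; this is a minimal stand-in for base::ThreadChecker, not its actual implementation:

```cpp
#include <thread>

// Minimal sketch of the base::ThreadChecker idea: remember the id of the
// thread that constructed the checker, and verify later calls happen on
// that same thread. Each method would DCHECK(CalledOnValidThread()).
class ThreadChecker {
 public:
  ThreadChecker() : owner_(std::this_thread::get_id()) {}
  bool CalledOnValidThread() const {
    return std::this_thread::get_id() == owner_;
  }

 private:
  std::thread::id owner_;
};
```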

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode128
content/common/gpu/media/vt_video_encode_accelerator.cc:128: //
CVPixelBufferPool for the allocation of incoming video frames.
On 2016/02/03 20:01:31, mcasas wrote:
> nit: s/video frames/VideoFrames/?

Done.

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode169
content/common/gpu/media/vt_video_encode_accelerator.cc:169: if
(compression_session_) {
On 2016/02/03 20:01:31, mcasas wrote:
> nit: 
> if (!compression_session_)
>   return;
> ...

Done.

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode169
content/common/gpu/media/vt_video_encode_accelerator.cc:169: if
(compression_session_) {
On 2016/02/03 18:21:08, hbos_chromium wrote:
> Does it make sense to DCHECK instead?

Acknowledged.

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode233
content/common/gpu/media/vt_video_encode_accelerator.cc:233: //
TODO(emircan): Investigate IOS constraints.
On 2016/02/03 20:01:31, mcasas wrote:
> ?

Removed.

content/common/gpu/media/vt_video_encode_accelerator.h:42: typedef CoreMediaGlue::CMSampleBufferRef CMSampleBufferRef;
On 2016/02/03 20:01:31, mcasas wrote:
> using CMSampleBufferRef = CoreMediaGlue::CMSampleBufferRef;
> here and in the next 2 lines.

Done.

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode79
content/common/gpu/media/vt_video_encode_accelerator.h:79: const
scoped_refptr<base::SingleThreadTaskRunner> child_task_runner_;
On 2016/02/03 20:01:31, mcasas wrote:
> nit: s/child_task_runner_/callback_task_runner_/ ?

"callback" might be confusing since there are callbacks to the VTSession. I
will go with client_task_runner_?

https://codereview.chromium.org/1636083003/diff/80001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode83
content/common/gpu/media/vt_video_encode_accelerator.h:83: //
child_task_runner_.
On 2016/02/03 20:01:31, mcasas wrote:
> |child_task_runner_|

Done.

https://codereview.chromium.org/1636083003/

m...@chromium.org

Feb 4, 2016, 3:35:28 PM2/4/16
to emi...@chromium.org, mca...@chromium.org, hb...@chromium.org, pos...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
not lgtm -- Please re-upload using `git cl upload --similarity=10` so that we
can see the actual differences, and also to preserve change history in the code
repository.  (There's hundreds of LOC here, most of which haven't changed, and I
don't want to have to re-review all that.)  Also, to help the reviewer, please
restore the original ordering of the methods and helper functions (e.g., in
media/cast/sender/h264_vt_encoder.cc the DictionaryWithKeysAndValues() function
came before the others; but when it was moved to
media/base/mac/videotoolbox_helpers.cc, it appears near the bottom).  Thanks! 
:)

https://codereview.chromium.org/1636083003/

emi...@chromium.org

Feb 4, 2016, 3:56:06 PM2/4/16
to miu+re...@chromium.org, mca...@chromium.org, hb...@chromium.org, pos...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
On 2016/02/04 20:35:27, miu wrote:
> not lgtm -- Please re-upload using `git cl upload --similarity=10` so that we
> can see the actual differences, and also to preserve change history in the
code
> repository.  (There's hundreds of LOC here, most of which haven't changed, and
I
> don't want to have to re-review all that.)  Also, to help the reviewer, please
> restore the original ordering of the methods and helper functions (e.g., in
> media/cast/sender/h264_vt_encoder.cc the DictionaryWithKeysAndValues()
function
> came before the others; but when it was moved to
> media/base/mac/videotoolbox_helpers.cc, it appears near the bottom).  Thanks! 
> :)

Thanks for pointing these out. I re-uploaded a patch with these changes done.

https://codereview.chromium.org/1636083003/

m...@chromium.org

Feb 4, 2016, 5:23:22 PM2/4/16
to emi...@chromium.org, mca...@chromium.org, hb...@chromium.org, pos...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc
File content/common/gpu/media/vt_video_encode_accelerator.cc (right):

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode18
content/common/gpu/media/vt_video_encode_accelerator.cc:18: const size_t
kNumInputBuffers = 4;
IIRC, the VT encoder can hold onto several buffers at a time to help
optimize motion prediction.  I have no idea how many buffers it actually
*will* hold onto (1, 2, 8, or something else).  Suggest you consult
documentation for this and/or jfroy@.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode19
content/common/gpu/media/vt_video_encode_accelerator.cc:19: const size_t
kMaxFrameRateNumerator = 30;
Is 30 really the max for all hardware solutions?  I'd imagine the max
framerate to be machine-dependent, and also dependent on the resolution.
 Furthermore, there are likely some hardware encoders that cannot do 30
FPS at 4K resolution.

If anything, you should put a TODO+crbug here to query the platform for
supported profiles rather than hard-code these.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode23
content/common/gpu/media/vt_video_encode_accelerator.cc:23: const size_t
kOutputBufferSizeRatio = 10;
Please document this.  What does 10 mean?

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode122
content/common/gpu/media/vt_video_encode_accelerator.cc:122: (ref_time -
base::TimeTicks()).InMicroseconds(), USEC_PER_SEC);
IIUC, this should be frame->timestamp(), not (ref_time -
base::TimeTicks()).  Though, I see it's that way in the cast code as
well.  Can you fix it there too?  :)

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode201
content/common/gpu/media/vt_video_encode_accelerator.cc:201: return;
You can't return early anywhere in this function. 
Client::BitstreamBufferReady() must be called to release the buffer back
to the client.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode210
content/common/gpu/media/vt_video_encode_accelerator.cc:210: if
(!encoder) {
This should never be null.  DCHECK(encoder) would suffice here.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode215
content/common/gpu/media/vt_video_encode_accelerator.cc:215: DLOG(ERROR)
<< "No more bitstream buffer to encode into.";
You should call Client::NotifyError() and either: 1) shutdown the
encoder; or 2) recover from this error by requiring a key frame be
encoded next.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode228
content/common/gpu/media/vt_video_encode_accelerator.cc:228:
memcpy(buffer_ref->shm->memory(), annexb_buffer.data(),
Please remove this redundant memcpy().  Just provide
buffer_ref->shm->memory() as the 2nd argument in the call to
media::CopySampleBufferToAnnexBBuffer() above.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode246
content/common/gpu/media/vt_video_encode_accelerator.cc:246:
->kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder(),
80 chars

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode296
content/common/gpu/media/vt_video_encode_accelerator.cc:296:
media::SetSessionProperty(
Suggestion: Instead of passing compression_session_ and
videotoolbox_glue_ to every function call, perhaps the
SetSessionProperty() methods could be in a simple class?  Example:

  media::video_toolbox::SessionPropertySetter
setter(compression_session_, videotoolbox_glue_);
 
setter.Set(videotoolbox_glue_->kVTCompressionPropertyKey_ProfileLevel(),
            
videotoolbox_glue_->kVTProfileLevel_H264_Baseline_AutoLevel());
  setter.Set(...);
  setter.Set(...);
  setter.Set(...);
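The suggested wrapper could look roughly like this minimal sketch; the map-backed FakeSession is a hypothetical stand-in for the real VTCompressionSessionRef plus VideoToolboxGlue pair, which a real implementation would hold instead:

```cpp
#include <map>
#include <string>

// Stand-in for a VTCompressionSessionRef; real code would call
// VTSessionSetProperty() through the VideoToolbox glue instead.
using FakeSession = std::map<std::string, std::string>;

// Bundles the session once so repeated property writes don't have to
// repeat the same session/glue arguments on every call.
class SessionPropertySetter {
 public:
  explicit SessionPropertySetter(FakeSession* session) : session_(session) {}

  bool Set(const std::string& key, const std::string& value) {
    (*session_)[key] = value;
    return true;  // real code would return whether VTSessionSetProperty succeeded
  }

 private:
  FakeSession* session_;  // not owned
};
```

Usage then collapses to a sequence of `setter.Set(key, value)` calls, as in the example above.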

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.h
File content/common/gpu/media/vt_video_encode_accelerator.h (left):

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.h#oldcode76
content/common/gpu/media/vt_video_encode_accelerator.h:76: void
UpdateFrameSize(const gfx::Size& size_needed);
It's unfortunate that VEA only works with fixed input frame sizes.  :(

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.h
File content/common/gpu/media/vt_video_encode_accelerator.h (right):

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode70
content/common/gpu/media/vt_video_encode_accelerator.h:70: // Initial
parameters given by media::VideoEncodeAccelerator::Initialize()
This comment is not accurate: |bitrate_| may change repeatedly during a
session.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode71
content/common/gpu/media/vt_video_encode_accelerator.h:71: uint32_t
bitrate_;
Type should be int.  Unsigneds are for index counters (size_t) and bit
twiddling.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode76
content/common/gpu/media/vt_video_encode_accelerator.h:76:
std::deque<scoped_ptr<BitstreamBufferRef>> encoder_output_queue_;
nit: If this is a LIFO, seems like you want a std::stack.

https://codereview.chromium.org/1636083003/diff/120001/media/base/mac/videotoolbox_helpers.h
File media/base/mac/videotoolbox_helpers.h (right):

https://codereview.chromium.org/1636083003/diff/120001/media/base/mac/videotoolbox_helpers.h#newcode15
media/base/mac/videotoolbox_helpers.h:15:
DictionaryWithKeysAndValues(CFTypeRef* keys, CFTypeRef* values, size_t
size);
All of these functions are VT-specific, but have rather
generalized-sounding names.  Could you put everything in this file in,
say, a video_toolbox namespace?

  namespace media {
  namespace video_toolbox {
  ...functions...
  }  // namespace video_toolbox
  }  // namespace media

https://codereview.chromium.org/1636083003/diff/120001/media/base/mac/videotoolbox_helpers.h#newcode15
media/base/mac/videotoolbox_helpers.h:15:
DictionaryWithKeysAndValues(CFTypeRef* keys, CFTypeRef* values, size_t
size);
Also, throughout this file, can you please add a comment to explain what
each function does?

https://codereview.chromium.org/1636083003/diff/120001/media/base/mac/videotoolbox_helpers.h#newcode34
media/base/mac/videotoolbox_helpers.h:34: using VTCompressionSessionRef
= VideoToolboxGlue::VTCompressionSessionRef;
style guide violation: Please remove this.  Better to be more specific
and use the fully-qualified name in header files, anyways.  ;)

https://codereview.chromium.org/1636083003/diff/120001/media/media.gyp
File media/media.gyp (right):

https://codereview.chromium.org/1636083003/diff/120001/media/media.gyp#newcode340
media/media.gyp:340: 'base/mac/videotoolbox_helpers.cc',
These should only be built and linked on MacOS or iOS, right?  The
BUILD.gn file seems to do this, so maybe these should be under the
corresponding GYP target?

https://codereview.chromium.org/1636083003/

emi...@chromium.org

Feb 6, 2016, 11:24:07 PM2/6/16
to miu+re...@chromium.org, mca...@chromium.org, hb...@chromium.org, jf...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
content/common/gpu/media/vt_video_encode_accelerator.cc:18: const size_t kNumInputBuffers = 4;
On 2016/02/04 22:23:20, miu wrote:
> IIRC, the VT encoder can hold onto several buffers at a time to help
optimize
> motion prediction.  I have no idea how many buffers it actually *will*
hold onto
> (1, 2, 8, or something else).  Suggest you consult documentation for
this and/or
> jfroy@.

I discussed with jfroy@ and added him as a reviewer. Unfortunately, in
theory VT can hold onto as many buffers as it decides to. I couldn't
find any information about it, and it doesn't look like something we can
easily query.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode19
content/common/gpu/media/vt_video_encode_accelerator.cc:19: const size_t
kMaxFrameRateNumerator = 30;
On 2016/02/04 22:23:20, miu wrote:
> Is 30 really the max for all hardware solutions?  I'd imagine the max
framerate
> to be machine-dependent, and also dependent on the resolution. 
Furthermore,
> there are likely some hardware encoders that cannot do 30 FPS at 4K
resolution.
> 
> If anything, you should put a TODO+crbug here to query the platform
for
> supported profiles rather than hard-code these.

I added a TODO for querying platform capabilities. Unfortunately, there
isn't an easy way to query these. One way we discussed with jfroy@ is to
create sessions, but that would definitely affect the initialization time. I
will look out for a better solution and leave it as a TODO for now.

Coming to framerate and resolution: I agree that in theory there should
be a tradeoff. VTCompressionSession provides a parameter,
kVTCompressionPropertyKey_ExpectedFrameRate, but I found nothing
regarding resolution. Even if
kVTCompressionPropertyKey_ExpectedFrameRate is supported for 60, I am
doubtful it would do 4K at 60 fps. So, I am just hard-coding the max
capabilities for now that we return in
VTVideoEncodeAccelerator::GetSupportedProfiles().

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode23
content/common/gpu/media/vt_video_encode_accelerator.cc:23: const size_t
kOutputBufferSizeRatio = 10;
On 2016/02/04 22:23:20, miu wrote:
> Please document this.  What does 10 mean?

Done.
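As an illustration of the heuristic being documented, sizing the output buffer to a fixed fraction of the frame area might look like this; the ratio value is the one from the CL, while the function name and exact formula are assumptions for the sketch:

```cpp
#include <cstddef>

// Illustrative version of the kOutputBufferSizeRatio heuristic: size the
// compressed-output buffer to a fixed fraction of the visible frame area,
// on the assumption that encoded H.264 output for one frame is much
// smaller than the raw pixel data.
constexpr size_t kOutputBufferSizeRatio = 10;

size_t SuggestedOutputBufferSize(size_t width, size_t height) {
  return width * height / kOutputBufferSizeRatio;
}
```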

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode122
content/common/gpu/media/vt_video_encode_accelerator.cc:122: (ref_time -
base::TimeTicks()).InMicroseconds(), USEC_PER_SEC);
On 2016/02/04 22:23:20, miu wrote:
> IIUC, this should be frame->timestamp(), not (ref_time -
base::TimeTicks()). 
> Though, I see it's that way in the cast code as well.  Can you fix it
there too?
>  :)

Done.
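A rough sketch of the suggested conversion, deriving the timestamp value from the frame's timestamp rather than from wall-clock reference time; CMTimeLike is a hypothetical stand-in for CoreMedia's CMTime:

```cpp
#include <cstdint>

constexpr int64_t kUsecPerSec = 1000000;  // stand-in for USEC_PER_SEC

// Hypothetical stand-in for CoreMedia's CMTime.
struct CMTimeLike {
  int64_t value;      // timestamp expressed in |timescale| units
  int32_t timescale;  // units per second
};

// Build a (value, timescale) pair from a frame timestamp in microseconds,
// i.e. from the equivalent of frame->timestamp() rather than
// (ref_time - base::TimeTicks()).
CMTimeLike TimestampToCMTime(int64_t timestamp_us) {
  return {timestamp_us, static_cast<int32_t>(kUsecPerSec)};
}
```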

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode201
content/common/gpu/media/vt_video_encode_accelerator.cc:201: return;
On 2016/02/04 22:23:20, miu wrote:
> You can't return early anywhere in this function. 
> Client::BitstreamBufferReady() must be called to release the buffer
back to the
> client.

I am holding onto a BitstreamBuffer from the queue on l.218. If we return
early here, the next callback can use it, so it should be fine?

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode210
content/common/gpu/media/vt_video_encode_accelerator.cc:210: if
(!encoder) {
On 2016/02/04 22:23:20, miu wrote:
> This should never be null.  DCHECK(encoder) would suffice here.

Done.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode215
content/common/gpu/media/vt_video_encode_accelerator.cc:215: DLOG(ERROR)
<< "No more bitstream buffer to encode into.";
On 2016/02/04 22:23:20, miu wrote:
> You should call Client::NotifyError() and either: 1) shutdown the
encoder; or 2)
> recover from this error by requiring a key frame be encoded next.

I added a NotifyError call. However, I don't understand why we would need
to shut down the encoder or require a keyframe. AFAIU, other encoders aren't
doing that:
https://code.google.com/p/chromium/codesearch#chromium/src/content/common/gpu/media/vaapi_video_encode_accelerator.cc&l=1046&gs=cpp:content::class-VaapiVideoEncodeAccelerator::NotifyError(media::VideoEncodeAccelerator::Error)@chromium/../../content/common/gpu/media/vaapi_video_encode_accelerator.cc%257Cdef&gsn=NotifyError&ct=xref_usages

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode228
content/common/gpu/media/vt_video_encode_accelerator.cc:228:
memcpy(buffer_ref->shm->memory(), annexb_buffer.data(),
On 2016/02/04 22:23:20, miu wrote:
> Please remove this redundant memcpy().  Just provide
buffer_ref->shm->memory()
> as the 2nd argument in the call to
media::CopySampleBufferToAnnexBBuffer()
> above.

media::CopySampleBufferToAnnexBBuffer() expects a string output and
reserves the buffer on the fly, so it isn't as simple. I was trying to
leave it for a follow-up patch, but since it came up I am doing it now.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode246
content/common/gpu/media/vt_video_encode_accelerator.cc:246:
->kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder(),
On 2016/02/04 22:23:20, miu wrote:
> 80 chars

Done.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode296
content/common/gpu/media/vt_video_encode_accelerator.cc:296:
media::SetSessionProperty(
On 2016/02/04 22:23:20, miu wrote:
> Suggestion: Instead of passing compression_session_ and
videotoolbox_glue_ to
> every function call, perhaps the SetSessionProperty() methods could be
in a
> simple class?  Example:
> 
>   media::video_toolbox::SessionPropertySetter
setter(compression_session_,
> videotoolbox_glue_);
>  
setter.Set(videotoolbox_glue_->kVTCompressionPropertyKey_ProfileLevel(),
>             
videotoolbox_glue_->kVTProfileLevel_H264_Baseline_AutoLevel());
>   setter.Set(...);
>   setter.Set(...);
>   setter.Set(...);
>   

Done.

content/common/gpu/media/vt_video_encode_accelerator.h:76: void UpdateFrameSize(const gfx::Size& size_needed);
On 2016/02/04 22:23:20, miu wrote:
> It's unfortunate that VEA only works with fixed input frame sizes.  :(

Fixed sizes might be a good fit for V4L2 and VAAPI, but VideoToolbox would
benefit from supporting dynamic frame sizes. Maybe a future TODO about this.

content/common/gpu/media/vt_video_encode_accelerator.h:70: // Initial parameters given by media::VideoEncodeAccelerator::Initialize()
On 2016/02/04 22:23:20, miu wrote:
> This comment is not accurate: |bitrate_| may change repeatedly during
a session.

Removed it.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode71
content/common/gpu/media/vt_video_encode_accelerator.h:71: uint32_t
bitrate_;
On 2016/02/04 22:23:20, miu wrote:
> Type should be int.  Unsigneds are for index counters (size_t) and bit
> twiddling.

Done.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode76
content/common/gpu/media/vt_video_encode_accelerator.h:76:
std::deque<scoped_ptr<BitstreamBufferRef>> encoder_output_queue_;
On 2016/02/04 22:23:20, miu wrote:
> nit: If this is a LIFO, seems like you want a std::stack.

Actually it is a FIFO at this point, changing the comment.
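The FIFO behavior of the output queue can be sketched like this; BitstreamBufferRef is reduced to a hypothetical stub, and the OutputQueue wrapper is illustrative rather than the CL's actual code:

```cpp
#include <deque>
#include <memory>

// Hypothetical stub for the real BitstreamBufferRef.
struct BitstreamBufferRef { int id; };

// FIFO output queue: buffers are pushed at the back as the client provides
// them and popped from the front as the encoder produces output, so a
// std::deque (not a std::stack) is the right container.
class OutputQueue {
 public:
  void Push(std::unique_ptr<BitstreamBufferRef> buf) {
    queue_.push_back(std::move(buf));
  }

  std::unique_ptr<BitstreamBufferRef> Pop() {
    if (queue_.empty()) return nullptr;
    std::unique_ptr<BitstreamBufferRef> front = std::move(queue_.front());
    queue_.pop_front();
    return front;
  }

 private:
  std::deque<std::unique_ptr<BitstreamBufferRef>> queue_;
};
```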

media/base/mac/videotoolbox_helpers.h:15: DictionaryWithKeysAndValues(CFTypeRef* keys, CFTypeRef* values, size_t size);
On 2016/02/04 22:23:20, miu wrote:
> All of these functions are VT-specific, but have rather
generalized-sounding
> names.  Could you put everything in this file in, say, a video_toolbox
> namespace?
> 
>   namespace media {
>   namespace video_toolbox {
>   ...functions...
>   }  // namespace video_toolbox
>   }  // namespace media

Done.

https://codereview.chromium.org/1636083003/diff/120001/media/base/mac/videotoolbox_helpers.h#newcode15
media/base/mac/videotoolbox_helpers.h:15:
DictionaryWithKeysAndValues(CFTypeRef* keys, CFTypeRef* values, size_t
size);
On 2016/02/04 22:23:20, miu wrote:
> Also, throughout this file, can you please add a comment to explain what
> each function does?

Done.

https://codereview.chromium.org/1636083003/diff/120001/media/base/mac/videotoolbox_helpers.h#newcode34
media/base/mac/videotoolbox_helpers.h:34: using VTCompressionSessionRef
= VideoToolboxGlue::VTCompressionSessionRef;
On 2016/02/04 22:23:20, miu wrote:
> style guide violation: Please remove this.  Better to be more specific and
> use the fully-qualified name in header files, anyways.  ;)

Done.

https://codereview.chromium.org/1636083003/

pos...@chromium.org

unread,
Feb 7, 2016, 11:33:43 PM2/7/16
to emi...@chromium.org, miu+re...@chromium.org, mca...@chromium.org, hb...@chromium.org, jf...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/video_encode_accelerator_unittest.cc
File content/common/gpu/media/video_encode_accelerator_unittest.cc
(right):

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/video_encode_accelerator_unittest.cc#newcode657
content/common/gpu/media/video_encode_accelerator_unittest.cc:657:
decoder_(new media::FFmpegVideoDecoder()),
Did you mean ffmpeg was just not compiled/working on Mac at all, or that
it would fail for some other reason? I thought ffmpeg worked on Mac...

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/video_encode_accelerator_unittest.cc#newcode1713
content/common/gpu/media/video_encode_accelerator_unittest.cc:1713:
#else
Even if framerate changes are not supported, perhaps we could still test
bitrate changes?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc
File content/common/gpu/media/vt_video_encode_accelerator.cc (right):

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode33
content/common/gpu/media/vt_video_encode_accelerator.cc:33: struct
InProgressFrameEncode {
Could this be a private substruct of VTVEA?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode48
content/common/gpu/media/vt_video_encode_accelerator.cc:48:
scoped_ptr<base::SharedMemory> shm,
const& perhaps?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode60
content/common/gpu/media/vt_video_encode_accelerator.cc:60: :
videotoolbox_glue_(VideoToolboxGlue::Get()),
We used to check this for != nullptr
(https://code.google.com/p/chromium/codesearch#chromium/src/media/cast/sender/h264_vt_encoder.cc&l=317).
Perhaps it would be safer to still do this?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode96
content/common/gpu/media/vt_video_encode_accelerator.cc:96:
DCHECK_EQ(media::PIXEL_FORMAT_I420, format);
I'd suggest if()s and careful check for argument values. VEA is a Pepper
interface too and there may be other clients that could try to create us
with various arguments and depend on us to fail Initialize().

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode97
content/common/gpu/media/vt_video_encode_accelerator.cc:97:
DCHECK_EQ(media::H264PROFILE_BASELINE, output_profile);
I'd prefer we if()'d profile here nevertheless. There is no requirement
to check supported profiles and some clients may not do this, relying on
Initialize() to fail.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode107
content/common/gpu/media/vt_video_encode_accelerator.cc:107:
client_ptr_factory_.reset(new base::WeakPtrFactory<Client>(client));
I'd suggest doing this before calling any methods, just in case (for
later changes in this class).

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode148
content/common/gpu/media/vt_video_encode_accelerator.cc:148:
DLOG_IF(ERROR, status != noErr)
No need to NOTIFY_ERROR? Can we continue from next Encode() without
reinitialization after VTCompressionSessionEncodeFrame() fails?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode156
content/common/gpu/media/vt_video_encode_accelerator.cc:156:
DCHECK_GE(buffer.size(),
static_cast<size_t>(input_visible_size_.GetArea() /
I think it'd be better to if() here please.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode162
content/common/gpu/media/vt_video_encode_accelerator.cc:162: DLOG(ERROR)
<< "Failed mapping shared memory.";
NOTIFY_ERROR?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode168
content/common/gpu/media/vt_video_encode_accelerator.cc:168:
encoder_output_queue_.push_back(std::move(buffer_ref));
Do we need to wake something up here? If we got Encode() first, but no
output buffers are ready at that time, we would stall?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode173
content/common/gpu/media/vt_video_encode_accelerator.cc:173: uint32_t
framerate) {
If this class cannot handle changing framerate, should we return an
error if such a change is requested?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode177
content/common/gpu/media/vt_video_encode_accelerator.cc:177: bitrate_ =
bitrate;
We should preferably check input values here.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode178
content/common/gpu/media/vt_video_encode_accelerator.cc:178: if
(!compression_session_)
NOTIFY_ERROR?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode181
content/common/gpu/media/vt_video_encode_accelerator.cc:181:
session_property_setter_->SetSessionProperty(
SetSessionProperty() methods are bool, but we are not checking return
values, here and in other places...

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode195
content/common/gpu/media/vt_video_encode_accelerator.cc:195: void
VTVideoEncodeAccelerator::CompressionCallback(void* encoder_opaque,
Do we know what thread this is called on?

Should we DCHECK(encoder->thread_checker_.CalledOnValidThread()) ?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode203
content/common/gpu/media/vt_video_encode_accelerator.cc:203: DLOG(ERROR)
<< " encode failed: " << status;
NOTIFY_ERROR?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode218
content/common/gpu/media/vt_video_encode_accelerator.cc:218: DLOG(ERROR)
<< "No more bitstream buffer to encode into.";
Please see my previous comment, but I think this could happen in normal
cases if Client called Encode() first and then
UseOutputBitstreamBuffer(). This should be an acceptable case I feel
though and we should instead just not encode from Encode(), but once an
input:output buffer pair is available, either from Encode() or
UseOutputBitstreamBuffer() ?

Or, perhaps even better, if possible, always encode in Encode(), but
defer this copy here if encoder_output_queue_.empty(), putting sbuf on
some kind of a queue and copying after next
UseOutputBitstreamBuffer()...

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode238
content/common/gpu/media/vt_video_encode_accelerator.cc:238: // This
method is NOT called on |client_task_runner_|, so we still need to
Uhh, if so, I don't think we can access encoder at all, including
members like encoder_output_queue_. Aren't we pushing to
encoder_output_queue_ on client_task_runner_ for example?

If that's the case, could we trampoline from this method at the
beginning to client_task_runner_ to actually handle things there?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode255
content/common/gpu/media/vt_video_encode_accelerator.cc:255: // Keep
these in-sync with those in ConfigureSession().
Where is ConfigureSession() ?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode265
content/common/gpu/media/vt_video_encode_accelerator.cc:265: const int
format[] = {
Is this a fourcc? If so uint32_t please.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode308
content/common/gpu/media/vt_video_encode_accelerator.cc:308:
videotoolbox_glue_->kVTProfileLevel_H264_Baseline_AutoLevel());
Is baseline preferred by us? Normally constrained baseline is more
compatible with various devices (unless this actually is constrained).
Main may also be better in general.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.h
File content/common/gpu/media/vt_video_encode_accelerator.h (right):

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode17
content/common/gpu/media/vt_video_encode_accelerator.h:17: // interface
for MacOSX.
Should we keep threading comment, or is VT no longer non-thread-safe?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode40
content/common/gpu/media/vt_video_encode_accelerator.h:40: struct
BitstreamBufferRef;
Could this be private?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode71
content/common/gpu/media/vt_video_encode_accelerator.h:71: int32_t
bitrate_;
uint32_t?

https://codereview.chromium.org/1636083003/

emi...@chromium.org

unread,
Feb 8, 2016, 6:41:24 PM2/8/16
to miu+re...@chromium.org, mca...@chromium.org, hb...@chromium.org, jf...@chromium.org, pos...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
: decoder_(new media::FFmpegVideoDecoder()),
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> Did you mean ffmpeg was just not compiled/working on Mac at all, or that
> it would fail for some other reason? I thought ffmpeg worked on Mac...

It points to this line when built:
https://code.google.com/p/chromium/codesearch#chromium/src/content/common/gpu/media/video_encode_accelerator_unittest.cc&l=696.
Do you know if that is still the case?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/video_encode_accelerator_unittest.cc#newcode1713
content/common/gpu/media/video_encode_accelerator_unittest.cc:1713:
#else
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> Even if framerate changes are not supported, perhaps we could still test
> bitrate changes?

Actually, the bitrate code isn't doing much, as VT does not allow
reconfiguration while in a session. See
https://code.google.com/p/chromium/codesearch#chromium/src/media/cast/sender/h264_vt_encoder.cc&l=628
and https://code.google.com/p/chromium/issues/detail?id=425352 . I will
add a comment about this in that section as well.

: struct InProgressFrameEncode {
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> Could this be a private substruct of VTVEA?

Done.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode48
content/common/gpu/media/vt_video_encode_accelerator.cc:48:
scoped_ptr<base::SharedMemory> shm,
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> const& perhaps?

I cannot use std::move on const scoped_ptr&.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode60
content/common/gpu/media/vt_video_encode_accelerator.cc:60: :
videotoolbox_glue_(VideoToolboxGlue::Get()),
On 2016/02/08 04:33:43, Pawel Osciak wrote:
> We used to check this for != nullptr
> (https://code.google.com/p/chromium/codesearch#chromium/src/media/cast/sender/h264_vt_encoder.cc&l=317).
> Perhaps it would be safer to still do this?

Hmm, we don't have a similar IsSupported() call though. I will move the
check to Initialize() and fail if the glue is nullptr.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode96
content/common/gpu/media/vt_video_encode_accelerator.cc:96:
DCHECK_EQ(media::PIXEL_FORMAT_I420, format);
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> I'd suggest if()s and careful check for argument values. VEA is a Pepper
> interface too and there may be other clients that could try to create us
> with various arguments and depend on us to fail Initialize().

Done.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode97
content/common/gpu/media/vt_video_encode_accelerator.cc:97:
DCHECK_EQ(media::H264PROFILE_BASELINE, output_profile);
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> I'd prefer we if()'d profile here nevertheless. There is no requirement to
> check supported profiles and some clients may not do this, relying on
> Initialize() to fail.

Done.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode107
content/common/gpu/media/vt_video_encode_accelerator.cc:107:
client_ptr_factory_.reset(new base::WeakPtrFactory<Client>(client));
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> I'd suggest doing this before calling any methods, just in case (for later
> changes in this class).

Ok, moving it right after the initial ifs.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode148
content/common/gpu/media/vt_video_encode_accelerator.cc:148:
DLOG_IF(ERROR, status != noErr)
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> No need to NOTIFY_ERROR? Can we continue from next Encode() without
> reinitialization after VTCompressionSessionEncodeFrame() fails?

Thanks for pointing this out. I realize I haven't used NOTIFY_ERROR
enough. I'm adding it now wherever appropriate.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode156
content/common/gpu/media/vt_video_encode_accelerator.cc:156:
DCHECK_GE(buffer.size(),
static_cast<size_t>(input_visible_size_.GetArea() /
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> I think it'd be better to if() here please.

Done.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode162
content/common/gpu/media/vt_video_encode_accelerator.cc:162: DLOG(ERROR)
<< "Failed mapping shared memory.";
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> NOTIFY_ERROR?

Done.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode168
content/common/gpu/media/vt_video_encode_accelerator.cc:168:
encoder_output_queue_.push_back(std::move(buffer_ref));
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> Do we need to wake something up here? If we got Encode() first, but no
> output buffers are ready at that time, we would stall?

Replied to the later comment.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode173
content/common/gpu/media/vt_video_encode_accelerator.cc:173: uint32_t
framerate) {
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> If this class cannot handle changing framerate, should we return an error
> if such a change is requested?

Actually, neither change is supported, but bitrate at least has an input
parameter at session creation. Restarting the session with the new
bitrate should work, but that wouldn't be a cheap step. I will revisit
this issue in a later CL.
If we were to return an error, how would the clients act accordingly?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode177
content/common/gpu/media/vt_video_encode_accelerator.cc:177: bitrate_ =
bitrate;
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> We should preferably check input values here.

What kind of checks? I found (bitrate < 1) checks in the CrOS VEAs.
Would that be enough?

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode178
content/common/gpu/media/vt_video_encode_accelerator.cc:178: if
(!compression_session_)
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> NOTIFY_ERROR?

Done.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode181
content/common/gpu/media/vt_video_encode_accelerator.cc:181:
session_property_setter_->SetSessionProperty(
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> SetSessionProperty() methods are bool, but we are not checking return
> values, here and in other places...

I will go through them.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode195
content/common/gpu/media/vt_video_encode_accelerator.cc:195: void
VTVideoEncodeAccelerator::CompressionCallback(void* encoder_opaque,
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> Do we know what thread this is called on?
> 
> Should we DCHECK(encoder->thread_checker_.CalledOnValidThread()) ?

No. To quote the documentation: "This function may be called
asynchronously, on a different thread from the one that calls
VTCompressionSessionEncodeFrame"

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode203
content/common/gpu/media/vt_video_encode_accelerator.cc:203: DLOG(ERROR)
<< " encode failed: " << status;
On 2016/02/08 04:33:43, Pawel Osciak wrote:
> NOTIFY_ERROR?

Done.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode218
content/common/gpu/media/vt_video_encode_accelerator.cc:218: DLOG(ERROR)
<< "No more bitstream buffer to encode into.";
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> Please see my previous comment, but I think this could happen in normal
> cases if Client called Encode() first and then UseOutputBitstreamBuffer().
> This should be an acceptable case I feel though and we should instead just
> not encode from Encode(), but once an input:output buffer pair is
> available, either from Encode() or UseOutputBitstreamBuffer() ?
> 
> Or, perhaps even better, if possible, always encode in Encode(), but defer
> this copy here if encoder_output_queue_.empty(), putting sbuf on some kind
> of a queue and copying after next UseOutputBitstreamBuffer()...

Thanks for pointing this out. I didn't know that Encode() followed by
UseOutputBitstreamBuffer() is a valid case. I like the second option
better, and I'm adding a new queue to retain those buffers.
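For readers following along, that second option can be sketched with std types standing in for the Chromium ones (class and method names here are mine, not the CL's): encoded output that arrives before any client buffer is parked in a pending queue, and the deferred copy happens as soon as UseOutputBitstreamBuffer() supplies one.

```cpp
#include <deque>
#include <string>
#include <utility>
#include <vector>

// Pairs encoder output with client bitstream buffers in FIFO order.
class OutputPairer {
 public:
  // Called from the compression callback with an encoded chunk. If no
  // client buffer is free yet, defer the copy instead of dropping data.
  void OnEncodedChunk(std::string chunk) {
    if (free_buffer_ids_.empty()) {
      pending_chunks_.push_back(std::move(chunk));
      return;
    }
    Deliver(std::move(chunk));
  }

  // Called from UseOutputBitstreamBuffer(): a buffer became available, so
  // flush one deferred chunk if any are waiting.
  void AddFreeBuffer(int buffer_id) {
    free_buffer_ids_.push_back(buffer_id);
    if (!pending_chunks_.empty()) {
      std::string chunk = std::move(pending_chunks_.front());
      pending_chunks_.pop_front();
      Deliver(std::move(chunk));
    }
  }

  const std::vector<std::pair<int, std::string>>& delivered() const {
    return delivered_;
  }

 private:
  void Deliver(std::string chunk) {
    const int id = free_buffer_ids_.front();
    free_buffer_ids_.pop_front();
    delivered_.emplace_back(id, std::move(chunk));
  }

  std::deque<int> free_buffer_ids_;         // FIFO of client buffer ids.
  std::deque<std::string> pending_chunks_;  // Output awaiting a buffer.
  std::vector<std::pair<int, std::string>> delivered_;
};
```

Because both queues are FIFO, output ordering is preserved even when Encode() races ahead of UseOutputBitstreamBuffer().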

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode238
content/common/gpu/media/vt_video_encode_accelerator.cc:238: // This
method is NOT called on |client_task_runner_|, so we still need to
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> Uhh, if so, I don't think we can access encoder at all, including members
> like encoder_output_queue_. Aren't we pushing to encoder_output_queue_ on
> client_task_runner_ for example?
> 
> If that's the case, could we trampoline from this method at the beginning
> to client_task_runner_ to actually handle things there?

I am posting a task as you suggest. 

But why wouldn't we be able to access the encoder at all? Accessing
client_task_runner_ through the encoder is also done in the cast code:
https://code.google.com/p/chromium/codesearch#chromium/src/media/cast/sender/h264_vt_encoder.cc&l=696
. If your concern is about the thread safety of std::deque, I checked it
earlier and found mixed answers.
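The trampoline pattern being discussed can be sketched without Chromium's base library. This is an illustrative stand-in (names are mine): the callback VT may invoke on an arbitrary thread only repackages its arguments and posts the real work, so member state such as encoder_output_queue_ is only ever touched on the owning thread.

```cpp
#include <functional>
#include <queue>
#include <string>
#include <vector>

// Minimal stand-in for a Chromium SingleThreadTaskRunner: tasks queue up
// and are later drained on the owning thread.
class FakeTaskRunner {
 public:
  void PostTask(std::function<void()> task) { tasks_.push(std::move(task)); }
  void RunPendingTasks() {
    while (!tasks_.empty()) {
      tasks_.front()();
      tasks_.pop();
    }
  }

 private:
  std::queue<std::function<void()>> tasks_;
};

class Encoder {
 public:
  explicit Encoder(FakeTaskRunner* client_task_runner)
      : client_task_runner_(client_task_runner) {}

  // May be invoked on any thread: do no work here, just bounce.
  void CompressionCallback(std::string encoded) {
    client_task_runner_->PostTask(
        [this, encoded]() { CompressionCallbackOnClientThread(encoded); });
  }

  const std::vector<std::string>& outputs() const { return outputs_; }

 private:
  void CompressionCallbackOnClientThread(const std::string& encoded) {
    outputs_.push_back(encoded);  // Safe: single-threaded access.
  }

  FakeTaskRunner* client_task_runner_;  // Not owned.
  std::vector<std::string> outputs_;
};
```

Nothing is appended to `outputs_` until the owner drains its task queue, which is exactly the property that makes the unsynchronized std::deque access safe in the real code.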

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode255
content/common/gpu/media/vt_video_encode_accelerator.cc:255: // Keep
these in-sync with those in ConfigureSession().
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> Where is ConfigureSession() ?

Changed it to ConfigureCompressionSession().

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode265
content/common/gpu/media/vt_video_encode_accelerator.cc:265: const int
format[] = {
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> Is this a fourcc? If so uint32_t please.

We need to create a CFArrayRef of ints from it.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode308
content/common/gpu/media/vt_video_encode_accelerator.cc:308:
videotoolbox_glue_->kVTProfileLevel_H264_Baseline_AutoLevel());
On 2016/02/08 04:33:42, Pawel Osciak wrote:
> Is baseline preferred by us? Normally constrained baseline is more
> compatible with various devices (unless this actually is constrained).
> Main may also be better in general.

I discussed this with the WebRTC folks. I learned that Baseline is the
only profile that openh264 supports (the SW case), and it is compatible
with Firefox as well. I am planning to support only Baseline in this CL.
I can add other profiles after further discussion; it wouldn't be hard.

: // interface for MacOSX.
On 2016/02/08 04:33:43, Pawel Osciak wrote:
> Should we keep threading comment, or is VT no longer non-thread-safe?

I will keep the threading comment.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode40
content/common/gpu/media/vt_video_encode_accelerator.h:40: struct
BitstreamBufferRef;
On 2016/02/08 04:33:43, Pawel Osciak wrote:
> Could this be private?

Done.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode71
content/common/gpu/media/vt_video_encode_accelerator.h:71: int32_t
bitrate_;
On 2016/02/08 04:33:43, Pawel Osciak wrote:
> uint32_t?

It was uint32_t but miu@ asked me to change it on PS#3.

https://codereview.chromium.org/1636083003/

m...@chromium.org

unread,
Feb 9, 2016, 6:29:23 PM2/9/16
to emi...@chromium.org, mca...@chromium.org, hb...@chromium.org, jf...@chromium.org, pos...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode201
content/common/gpu/media/vt_video_encode_accelerator.cc:201: return;
On 2016/02/07 04:24:04, emircan wrote:
> On 2016/02/04 22:23:20, miu wrote:
> > You can't return early anywhere in this function.
> > Client::BitstreamBufferReady() must be called to release the buffer
> > back to the client.
> 
> I am holding onto a BitstreamBuffer from the queue on l.218. If we early
> return here, the next callback can use it, and it should be fine?

I could be misunderstanding the intended semantics of the VEA API, but
if the client sends a frame to VEA for encoding and the encoder decides
to drop the frame, shouldn't the client be made aware of that fact?  The
client might need to do something to account for that.

Also, the client may be assuming that for a sequence of calls to
VEA::Encode(), say for frame #1, #2, #3, etc.; the
BitstreamBufferReady() method would be invoked for those frames in the
same ordering.  Meaning, the client would assume the first
BitstreamBufferReady() call is the result for frame #1, the next one for
frame #2, and so on.  FWIW, the media::cast::ExternalVideoEncoder (which
uses VEA) makes this assumption.

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode215
content/common/gpu/media/vt_video_encode_accelerator.cc:215: DLOG(ERROR)
<< "No more bitstream buffer to encode into.";
On 2016/02/07 04:24:05, emircan wrote:
> On 2016/02/04 22:23:20, miu wrote:
> > You should call Client::NotifyError() and either: 1) shutdown the
> > encoder; or 2) recover from this error by requiring a key frame be
> > encoded next.
> 
> I added NotifyError call. However, I don't understand why we would need
> to shutdown encoder or require a keyframe. AFAIU, other encoders aren't
> doing that:
>

When decoding frames, there are implicit dependencies on prior frames. 
So, if the decoder is missing a P-frame or B-frame, it will not be able
to continue decoding until it receives an I-frame (a key frame).

One suggestion for solving this problem in your code here: Instead of
calling encoder_output_queue_.pop_front() here, perhaps you should make
that call *before* starting the encode (i.e., where you create the
InProgressFrameEncode struct).  That way, you know before you even start
encoding the frame whether you will have an output buffer available.  In
other words, you'd be "reserving" the output buffer for a frame upfront.

https://codereview.chromium.org/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.h#newcode71
content/common/gpu/media/vt_video_encode_accelerator.h:71: int32_t
bitrate_;
On 2016/02/08 23:41:24, emircan wrote:
> On 2016/02/08 04:33:43, Pawel Osciak wrote:
> > uint32_t?
> 
> It was uint32_t but miu@ asked me to change it on PS#3.

Chromium C++ style guide discusses when and when not to use unsigned
types: https://www.chromium.org/developers/coding-style#TOC-Types

In this case, it seems to me it was being used for "documentative
purposes" which the style guide says not to do.  (That, and there was a
cast to a signed value where |bitrate_| is being passed to a VT API
call.)

https://codereview.chromium.org/1636083003/diff/200001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc
File content/common/gpu/media/vt_video_encode_accelerator_mac.cc
(right):

https://codereview.chromium.org/1636083003/diff/200001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode30
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:30: const size_t kOutputBufferSizeRatio = 10;
Instead of a constant, shouldn't this depend on the configured bit rate?
e.g., 2 Mbps versus 20 Mbps
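For readers following along, one bitrate-aware alternative to the fixed ratio could look like the sketch below. This is purely illustrative: the function name, the 4x keyframe headroom, and the floor value are my assumptions, not code from the CL or a measured heuristic.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Estimates an output buffer size from the configured bitrate instead of a
// fixed divisor of the frame area. Returns at least |min_size| bytes.
size_t EstimateOutputBufferSize(uint32_t bitrate_bps,
                                uint32_t framerate,
                                size_t min_size) {
  const uint32_t fps = std::max(framerate, 1u);  // Guard against divide-by-0.
  const size_t bytes_per_frame = static_cast<size_t>(bitrate_bps / 8 / fps);
  // Leave generous headroom: keyframes can be several times the average
  // frame size implied by the bitrate.
  return std::max(bytes_per_frame * 4, min_size);
}
```

At 2 Mbps / 30 fps this budgets roughly 33 KB per output buffer, while at 20 Mbps it scales up tenfold, which is the scaling behavior a fixed kOutputBufferSizeRatio cannot provide.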

https://codereview.chromium.org/1636083003/diff/200001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode160
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:160:
reinterpret_cast<void*>(request.release()), nullptr);
I could be mistaken, but it seems memory is being leaked here.  What
uses/destroys the InProgressFrameEncode object?

https://codereview.chromium.org/1636083003/diff/200001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode378
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:378:
session_property_setter_.reset(
This shouldn't be allocated for the lifetime of the class, since it's
only used briefly in this method (which is not being called very often).
 Instead, just make it a local stack variable:

  video_toolbox::SessionPropertySetter setter(...);
  setter.SetSessionProperty(...);
  ...

https://codereview.chromium.org/1636083003/diff/200001/content/common/gpu/media/vt_video_encode_accelerator_mac.h
File content/common/gpu/media/vt_video_encode_accelerator_mac.h (right):

https://codereview.chromium.org/1636083003/diff/200001/content/common/gpu/media/vt_video_encode_accelerator_mac.h#newcode80
content/common/gpu/media/vt_video_encode_accelerator_mac.h:80:
scoped_ptr<media::video_toolbox::SessionPropertySetter>
Please remove (see comment in .cc file).

https://codereview.chromium.org/1636083003/diff/200001/media/base/mac/videotoolbox_helpers.h
File media/base/mac/videotoolbox_helpers.h (right):

https://codereview.chromium.org/1636083003/diff/200001/media/base/mac/videotoolbox_helpers.h#newcode31
media/base/mac/videotoolbox_helpers.h:31: MEDIA_EXPORT bool
CopySampleBufferToAnnexBBuffer(
For simplicity, you should delete the version of this function that
takes a std::string output.  Client code can just pre-allocate the
string and truncate it afterwards, like so:

  std::string annexb_buffer(kMaxAnnexBSize, '\0');
  size_t used_size = 0;
  CopySampleBufferToAnnexBBuffer(sbuf,
                                 reinterpret_cast<uint8_t*>(&annexb_buffer[0]),
                                 kMaxAnnexBSize,
                                 keyframe,
                                 &used_size);
  annexb_buffer.resize(used_size);

Then, you don't need that extra implementation in the .cc file to handle
both string and uint8_t* output buffer cases.

https://codereview.chromium.org/1636083003/diff/200001/media/base/mac/videotoolbox_helpers.h#newcode37
media/base/mac/videotoolbox_helpers.h:37: uint8_t* annexb_buffer,
style: output arguments go last

https://codereview.chromium.org/1636083003/diff/200001/media/cast/sender/h264_vt_encoder.cc
File media/cast/sender/h264_vt_encoder.cc (right):

https://codereview.chromium.org/1636083003/diff/200001/media/cast/sender/h264_vt_encoder.cc#newcode302
media/cast/sender/h264_vt_encoder.cc:302:
session_property_setter_.reset(new video_toolbox::SessionPropertySetter(
ditto here: Please use local variable instead.

https://codereview.chromium.org/1636083003/diff/200001/media/cast/sender/h264_vt_encoder.h
File media/cast/sender/h264_vt_encoder.h (right):

https://codereview.chromium.org/1636083003/diff/200001/media/cast/sender/h264_vt_encoder.h#newcode123
media/cast/sender/h264_vt_encoder.h:123:
scoped_ptr<video_toolbox::SessionPropertySetter>
session_property_setter_;
Please remove (see comment in .cc file).

https://codereview.chromium.org/1636083003/

emi...@chromium.org

unread,
Feb 10, 2016, 12:21:54 AM2/10/16
to miu+re...@chromium.org, mca...@chromium.org, pos...@chromium.org, hb...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org, jf...@chromium.org
: return;
On 2016/02/09 23:29:22, miu wrote:
> On 2016/02/07 04:24:04, emircan wrote:
> > On 2016/02/04 22:23:20, miu wrote:
> > > You can't return early anywhere in this function.
> > > Client::BitstreamBufferReady() must be called to release the buffer
> > > back to the client.
> > 
> > I am holding onto a BitstreamBuffer from the queue on l.218. If we early
> > return here, the next callback can use it, and it should be fine?
> 
> I could be misunderstanding the intended semantics of the VEA API, but if
> the client sends a frame to VEA for encoding and the encoder decides to
> drop the frame, shouldn't the client be made aware of that fact?  The
> client might need to do something to account for that.
> 
> Also, the client may be assuming that for a sequence of calls to
> VEA::Encode(), say for frame #1, #2, #3, etc.; the BitstreamBufferReady()
> method would be invoked for those frames in the same ordering.  Meaning,
> the client would assume the first BitstreamBufferReady() call is the
> result for frame #1, the next one for frame #2, and so on.  FWIW, the
> media::cast::ExternalVideoEncoder (which uses VEA) makes this assumption.

I see the sequence assumption. The Cast VT code seems to continue and
send back an empty output buffer (payload_size=0) in that case, and I
will follow that logic.

https://code.google.com/p/chromium/codesearch#chromium/src/media/cast/sender/h264_vt_encoder.cc&l=713

https://codereview.chromium.org/1636083003/diff/120001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode215
content/common/gpu/media/vt_video_encode_accelerator.cc:215: DLOG(ERROR)
<< "No more bitstream buffer to encode into.";
On 2016/02/09 23:29:22, miu wrote:
> On 2016/02/07 04:24:05, emircan wrote:
> > On 2016/02/04 22:23:20, miu wrote:
> > > You should call Client::NotifyError() and either: 1) shutdown the
encoder;
> or
> > 2)
> > > recover from this error by requiring a key frame be encoded next.
> > 
> > I added NotifyError call. However, I don't understand why we would
need to
> > shutdown encoder or require a keyframe. AFAIU, other encoders aren't
doing
> that:
> 
> When decoding frames, there are implicit dependencies on prior frames.
 So, if
> the decoder is missing a P-frame or B-frame, it will not be able to
continue
> decoding until it receives an I-frame (a key frame).
> 
> One suggestion for solving this problem in your code here: Instead of
calling
> encoder_output_queue_.pop_front() here, perhaps you should make that
call
> *before* starting the encode (i.e., where you create the
InProgressFrameEncode
> struct).  That way, you know before you even start encoding the frame
whether
> you will have an output buffer available.  In other words, you'd be
"reserving"
> the output buffer for a frame upfront.

I changed this behavior after posciak@ comments. I am now pushing those
to a queue and waiting for the next BitstreamBuffer.

: const
size_t kOutputBufferSizeRatio = 10;
On 2016/02/09 23:29:22, miu wrote:
> Instead of a constant, shouldn't this depend on the configured bit
rate?  e.g.,
> 2 Mbps versus 20 Mbps

I tried doing that, but it would be very dependent on initial_bitrate,
since we request bitstream buffers only once, in Initialize(). I will
take the max of these two values to be safe. WDYT?

https://codereview.chromium.org/1636083003/diff/200001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode160
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:160:
reinterpret_cast<void*>(request.release()), nullptr);
On 2016/02/09 23:29:22, miu wrote:
> I could be mistaken, but it seems memory is being leaked here.  What
> uses/destroys the InProgressFrameEncode object?

Thanks for pointing out. I will make a scoped_ptr to release it properly
in the callback.

https://codereview.chromium.org/1636083003/diff/200001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode378
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:378:
session_property_setter_.reset(
On 2016/02/09 23:29:22, miu wrote:
> This shouldn't be allocated for the lifetime of the class, since it's
only used
> briefly in this method (which is not being called very often). 
Instead, just
> make it a local stack variable:
> 
>   video_toolbox::SessionPropertySetter setter(...);
>   setter.SetSessionProperty(...);
>   ...

It is used on l.233 as well, following this example:
https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc/modules/video_coding/codecs/h264/h264_video_toolbox_encoder.cc&l=320
: MEDIA_EXPORT bool
CopySampleBufferToAnnexBBuffer(
On 2016/02/09 23:29:22, miu wrote:
> For simplicity, you should delete the version of this function that
takes a
> std::string output.  Client code can just pre-allocate the string and
truncate
> it afterwards, like so:
> 
>   std::string annexb_buffer(kMaxAnnexBSize, '\0');
>   size_t used_size = 0;
>   CopySampleBufferToAnnex(sbuf,
>                          
reinterpret_cast<uint8_t*>(annexb_buffer.data()),
>                           kMaxAnnexBSize,
>                           keyframe,
>                           &used_size);
>   annexb_buffer.resize(used_size);
> 
> Then, you don't need that extra implementation in the .cc file to
handle both
> string and uint8_t* output buffer cases.

But wouldn't it be costly to do "std::string
annexb_buffer(kMaxAnnexBSize, '\0')" before each call? Also, I am not
sure what to set kMaxAnnexBSize to in the cast code so that it is
always big enough. We are expecting to lose some frames in VTVEA when
the BitstreamBuffer size isn't big enough.

:
session_property_setter_.reset(new video_toolbox::SessionPropertySetter(
On 2016/02/09 23:29:22, miu wrote:
> ditto here: Please use local variable instead.

Done.

:
scoped_ptr<video_toolbox::SessionPropertySetter>
session_property_setter_;
On 2016/02/09 23:29:22, miu wrote:
> Please remove (see comment in .cc file).

Done.

https://codereview.chromium.org/1636083003/

jf...@chromium.org

unread,
Feb 10, 2016, 1:56:30 PM2/10/16
to emi...@chromium.org, miu+re...@chromium.org, mca...@chromium.org, pos...@chromium.org, hb...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc
File content/common/gpu/media/vt_video_encode_accelerator_mac.cc
(right):

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode20
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:20: const
size_t kNumInputBuffers = 1;
This is somewhat configurable using
kVTCompressionPropertyKey_MaxFrameDelayCount. It's the maximum number of
input buffers the compression session can buffer before it emits a
frame. It defaults to "unlimited", but you can set it to a known value
and then report that as your limit.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode96
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:96: if
(media::PIXEL_FORMAT_I420 != format) {
You can also support NV12 pretty easily. I don't know if that's a common
format in Chrome.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode232
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:232: //
TODO(emircan): VideoToolbox does not seem to support bitrate
See my update to that bug. You can actually control the bitrate at
runtime using 2 properties.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode306
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:306: auto
sample_attachments =
static_cast<CFDictionaryRef>(CFArrayGetValueAtIndex(
This code seems duplicated from UseOutputBitstreamBuffer(). It would be
good to refactor it, since it's a bit subtle.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode339
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:339:
->kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder(),
Unless you want to fall back to Apple's *TERRIBLE* software encoder, I
suggest you also specify RequireHardwareAcceleratedVideoEncoder. If you
can't use the hardware encoder, you'd be much better served falling back
on Chromium's software encoder.

https://codereview.chromium.org/1636083003/

m...@chromium.org

unread,
Feb 10, 2016, 4:04:57 PM2/10/16
to emi...@chromium.org, mca...@chromium.org, pos...@chromium.org, hb...@chromium.org, jf...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org, jf...@chromium.org
: const
size_t kOutputBufferSizeRatio = 10;
On 2016/02/10 05:21:53, emircan wrote:
> On 2016/02/09 23:29:22, miu wrote:
> > Instead of a constant, shouldn't this depend on the configured bit
rate? 
> e.g.,
> > 2 Mbps versus 20 Mbps
> 
> I tried doing that but it would be very dependent on initial_bitrate,
since we
> request for Bitstream buffers once only in Initialize(). I will take
max of
> these two values to be safe. WDYT?

Sounds fine.  I'll leave it to the VEA owners to work out the larger
issue (affecting all VEAs) in some future change.

https://codereview.chromium.org/1636083003/diff/200001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode378
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:378:
session_property_setter_.reset(
On 2016/02/10 05:21:53, emircan wrote:
> On 2016/02/09 23:29:22, miu wrote:
> > This shouldn't be allocated for the lifetime of the class, since
it's only
> used
> > briefly in this method (which is not being called very often). 
Instead, just
> > make it a local stack variable:
> > 
> >   video_toolbox::SessionPropertySetter setter(...);
> >   setter.SetSessionProperty(...);
> >   ...
> 
> It is used on l.233 as well. Following example:
>
https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc/modules/video_coding/codecs/h264/h264_video_toolbox_encoder.cc&l=320
> .

The point is that the optimizing compiler can do a lot to eliminate
extra storage, skip steps w.r.t. construction/destruction, etc. when the
class is a local variable.  When it's heap-allocated, it can't.  Also,
by making this a data member, memory is being wasted to hold an object
that is only rarely and briefly used.

: MEDIA_EXPORT bool
CopySampleBufferToAnnexBBuffer(
On 2016/02/10 05:21:53, emircan wrote:
> On 2016/02/09 23:29:22, miu wrote:
> > For simplicity, you should delete the version of this function that
takes a
> > std::string output.  Client code can just pre-allocate the string
and truncate
> 
> But wouldn't it be costly to do "std::string
annexb_buffer(kMaxAnnexBSize,
> '\0')" before each time? Also, I am not sure what to set a
kMaxAnnexBSize in
> cast code that is always big enough. We are expecting to lose some
frames in
> VTVEA when BitstreamBuffer size isn't big enough.

In terms of run-time cost: No, because it's still just one allocation of
a chunk of memory.  In terms of space efficiency: It should be no less
efficient than what client code has to do to pre-allocate a chunk of
memory for the raw pointer version of this function.

Whoops, maybe not kMaxAnnexBSize for the size.  jfroy@, do you know?

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/video_encode_accelerator_unittest.cc
File content/common/gpu/media/video_encode_accelerator_unittest.cc
(right):

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/video_encode_accelerator_unittest.cc#newcode1714
content/common/gpu/media/video_encode_accelerator_unittest.cc:1714:
INSTANTIATE_TEST_CASE_P(
BTW--Will these tests run in the waterfall?  IIRC, this module's tests
are currently only configured to run on ChromeOS bots.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode126
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:126:
bitrate_ / kBitsPerByte));
Sanity-check: Is the bitrate_ setting a "max bitrate" or a "target
bitrate".  If it's a target, there will be some variance where some
frames take more or fewer bytes than the average; and so you'd want the
value here to be some multiple.  If it's a max, do we know the encoder
absolutely cannot exceed the max bitrate in certain edge-case
circumstances?

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode234
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:234: const
bool rv = session_property_setter_->SetSessionProperty(
Suggestion (to eliminate extra heap-allocated data member):

    media::video_toolbox::SessionPropertySetter(compression_session_,
videotoolbox_glue_)
        .SetSessionProperty(...);

See explanation in comment below...

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode273
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:273:
scoped_ptr<InProgressFrameEncode> request(
This is happening after a possible return statement (above, on line
268).  In order to avoid leaking memory, it should be moved to before
the if-statement above.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode297
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:297: return;
Same problem w.r.t. early return.  The client should get a
BitstreamBufferReady() call with zero-byte payload so it knows which
Encode()'s failed.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode303
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:303:
frame_dropped = true;
Looks good here.  Just need to do this above too.

https://codereview.chromium.org/1636083003/diff/220001/media/base/mac/videotoolbox_helpers.cc
File media/base/mac/videotoolbox_helpers.cc (right):

https://codereview.chromium.org/1636083003/diff/220001/media/base/mac/videotoolbox_helpers.cc#newcode45
media/base/mac/videotoolbox_helpers.cc:45: class AnnexBBuffer {
If you're going to go this route, you should split this class up rather
than mash everything together with switch statements.  Meaning,
something like:

  class AnnexBBuffer {
    virtual bool Reserve(size_t size) = 0;
    virtual void Append(const char* s, size_t n) = 0;
    virtual size_t GetReservedSize() const = 0;
  };

  class StringAnnexBBuffer : public AnnexBBuffer {
    ...method overrides for std::string storage...
  };

  class RawAnnexBBuffer : public AnnexBBuffer {
    ...method overrides for output directly to memory region...
  };

...but I'm not sure you need to handle anything other than the
raw-pointer + size case, per my comment above.

https://codereview.chromium.org/1636083003/diff/220001/media/cast/sender/h264_vt_encoder.cc
File media/cast/sender/h264_vt_encoder.cc (right):

https://codereview.chromium.org/1636083003/diff/220001/media/cast/sender/h264_vt_encoder.cc#newcode302
media/cast/sender/h264_vt_encoder.cc:302:
scoped_ptr<video_toolbox::SessionPropertySetter>
session_property_setter(
Please don't allocate on the heap here.  It should be just:

  video_toolbox::SessionPropertySetter session_property_setter(...);

to remove unnecessary indirection and also to let the compiler optimize
the code better.

https://codereview.chromium.org/1636083003/

jf...@chromium.org

unread,
Feb 10, 2016, 6:15:35 PM2/10/16
to emi...@chromium.org, miu+re...@chromium.org, mca...@chromium.org, pos...@chromium.org, hb...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org
: MEDIA_EXPORT bool
CopySampleBufferToAnnexBBuffer(
On 2016/02/10 21:04:56, miu wrote:
> On 2016/02/10 05:21:53, emircan wrote:
> > On 2016/02/09 23:29:22, miu wrote:
> > > For simplicity, you should delete the version of this function
that takes a
> > > std::string output.  Client code can just pre-allocate the string
and
> truncate
> > 
> > But wouldn't it be costly to do "std::string
annexb_buffer(kMaxAnnexBSize,
> > '\0')" before each time? Also, I am not sure what to set a
kMaxAnnexBSize in
> > cast code that is always big enough. We are expecting to lose some
frames in
> > VTVEA when BitstreamBuffer size isn't big enough.
> 
> In terms of run-time cost: No, because it's still just one allocation
of a chunk
> of memory.  In terms of space efficiency: It should be no less
efficient than
> what client code has to do to pre-allocate a chunk of memory for the
raw pointer
> version of this function.
> 
> Whoops, maybe not kMaxAnnexBSize for the size.  jfroy@, do you know?

My personal opinion on this is that I would only deal with raw buffers
(or a suitable "buffer byte type") and use a stateful converter
initialized with a CMSampleBuffer to calculate the capacity on
construction. That way, a client would be able to allocate, reserve or
check capacity as needed, obtain a contiguous memory pointer as
appropriate and convert.

The original implementation of this code used std::string because that's
what media/cast used. I consider std::string to be a terrible byte
buffer type.

:
bitrate_ / kBitsPerByte));
On 2016/02/10 21:04:56, miu wrote:
> Sanity-check: Is the bitrate_ setting a "max bitrate" or a "target
bitrate".  If
> it's a target, there will be some variance where some frames take more
or fewer
> bytes than the average; and so you'd want the value here to be some
multiple. 
> If it's a max, do we know the encoder absolutely cannot exceed the max
bitrate
> in certain edge-case circumstances?

lite@ and I did extensive tests of the iOS encoder's bitrate
control. I can't really speak to how VT will behave on OS X using
QuickSync, but...

The AverageBitRate property (currently used by this implementation)
really means an average. The observed bitrate over a 1-5 second window
was as much as 200% higher than this average. This is especially true
when the encoder is configured for real-time use.

The DataRateLimits property (see crbug.com/425352) is better at setting
the maximum bitrate, within about 10% over a 1-5 second window.

https://codereview.chromium.org/1636083003/

emi...@chromium.org

unread,
Feb 11, 2016, 4:22:10 AM2/11/16
to miu+re...@chromium.org, mca...@chromium.org, pos...@chromium.org, hb...@chromium.org, jf...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org, jf...@chromium.org
:
INSTANTIATE_TEST_CASE_P(
On 2016/02/10 21:04:56, miu wrote:
> BTW--Will these tests run in the waterfall?  IIRC, this module's tests
are
> currently only configured to run on ChromeOS bots.

I changed content/content_tests.gypi to build this test for mac as well.
Would it be picked up as a target? If not, how should I add it to the
waterfall?

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode20
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:20: const
size_t kNumInputBuffers = 1;
On 2016/02/10 18:56:27, jfroy wrote:
> This is somewhat configurable using
> kVTCompressionPropertyKey_MaxFrameDelayCount. It's the maximum number
of input
> buffers the compression can buffer until it emits a frame. It defaults
to
> "unlimited", but you can set it to a known value and then report that
as your
> limit.

For me, setting kVTCompressionPropertyKey_MaxFrameDelayCount to any
number fails. I tried 0-4 with no luck. I will look into this issue
in a follow-up. (OS X 10.11.3, MacBook Air 13)

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode96
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:96: if
(media::PIXEL_FORMAT_I420 != format) {
On 2016/02/10 18:56:27, jfroy wrote:
> You can also support NV12 pretty easily. I don't know if that's a
common format
> in Chrome.

I420 is the common format in Chrome HW encode.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode126
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:126:
bitrate_ / kBitsPerByte));
On 2016/02/10 23:15:34, jfroy wrote:
> On 2016/02/10 21:04:56, miu wrote:
> > Sanity-check: Is the bitrate_ setting a "max bitrate" or a "target
bitrate". 
> If
> > it's a target, there will be some variance where some frames take
more or
> fewer
> > bytes than the average; and so you'd want the value here to be some
multiple. 
> > If it's a max, do we know the encoder absolutely cannot exceed the
max bitrate
> > in certain edge-case circumstances?
> 
> Myself and lite@ did extensive tests of the iOS encoder's bitrate
control. I
> can't really speak to how VT will behave on OS X using QuickSync,
but...
> 
> The AverageBitRate property (currently used by this implementation)
really means
> an average. The observed bitrate over a 1-5 seconds window was
observed as much
> as 200% higher than this average. This is especially true when the
encoder is
> configured for real-time use.
> 
> The DataRateLimits property (see crbug.com/425352) is better at
setting the
> maximum bitrate, within about 10% over a 1-5 seconds window.

I will go with jfroy@'s suggestions. I will remove the use of
kOutputBufferSizeRatio. Instead, I will define average_bitrate_ and
max_bitrate_ params. I will take the client input as average_bitrate_
and calculate max_bitrate_ via a constant. WDYT?

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode232
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:232: //
TODO(emircan): VideoToolbox does not seem to support bitrate
On 2016/02/10 18:56:27, jfroy wrote:
> See my update to that bug. You can actually control the bitrate at
runtime using
> 2 properties.

Done.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode234
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:234: const
bool rv = session_property_setter_->SetSessionProperty(
On 2016/02/10 21:04:56, miu wrote:
> Suggestion (to eliminate extra heap-allocated data member):
> 
>     media::video_toolbox::SessionPropertySetter(compression_session_,
> videotoolbox_glue_)
>         .SetSessionProperty(...);
> 
> See explanation in comment below...

Done. Keeping it as a class member on stack.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode273
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:273:
scoped_ptr<InProgressFrameEncode> request(
On 2016/02/10 21:04:56, miu wrote:
> This is happening after a possible return statement (above, on line
268).  In
> order to avoid leaking memory, it should be moved to before the
if-statement
> above.

Done.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode297
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:297: return;
On 2016/02/10 21:04:56, miu wrote:
> Same problem w.r.t. early return.  The client should get a
> BitstreamBufferReady() call with zero-byte payload so it knows which
Encode()'s
> failed.

I do not have a BitstreamBuffer to return in this case. This is the
Encode()-followed-by-UseOutputBitstreamBuffer() case that posciak@
mentioned. If I return buffer->id=-1, it is the same as
kPlatformFailureError. So instead, I wait for a buffer to arrive first.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode303
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:303:
frame_dropped = true;
On 2016/02/10 21:04:56, miu wrote:
> Looks good here.  Just need to do this above too.

Done.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode306
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:306: auto
sample_attachments =
static_cast<CFDictionaryRef>(CFArrayGetValueAtIndex(
On 2016/02/10 18:56:27, jfroy wrote:
> This code seems duplicated from UseOutputBitstreamBuffer(). It would
be good to
> refactor it, since it's a bit subtle.

Done.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode339
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:339:
->kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder(),
On 2016/02/10 18:56:27, jfroy wrote:
> Unless you want to fallback to Apple's *TERRIBLE* software encoder, I
suggest
> you also specify RequireHardwareAcceleratedVideoEncoder. If you can't
use the
> hardware encoder, you'd be much better served falling back on
Chromium's
> software encoder.

Thanks for the notice, I added RequireHardwareAcceleratedVideoEncoder. I
will compare this non-HW case against the SW implementation once it
lands, and revisit the fallback situation.

: class AnnexBBuffer {
On 2016/02/10 21:04:56, miu wrote:
> If you're going to go this route, you should split this class up
rather than
> mash everything together with switch statements.  Meaning, something
like:
> 
>   class AnnexBBuffer {
>     virtual bool Reserve(size_t size) = 0;
>     virtual void Append(const char* s, size_t n) = 0;
>     virtual size_t GetReservedSize() const = 0;
>   };
> 
>   class StringAnnexBBuffer : public AnnexBBuffer {
>     ...method overrides for std::string storage...
>   };
> 
>   class RawAnnexBBuffer : public AnnexBBuffer {
>     ...method overrides for output directly to memory region...
>   };
> 
> ...but I'm not sure you need to handle anything other than the
raw-pointer +
> size case, per my comment above.

I am taking your first suggestion into account, but what would be the
ideal kMaxAnnexBSizeInBytes? 100MB?

:
scoped_ptr<video_toolbox::SessionPropertySetter>
session_property_setter(
On 2016/02/10 21:04:56, miu wrote:
> Please don't allocate on the heap here.  It should be just:
> 
>   video_toolbox::SessionPropertySetter session_property_setter(...);
> 
> to remove unnecessary indirection and also to let the compiler
optimize the code
> better.

My bad, it makes total sense to put it on the stack.

https://codereview.chromium.org/1636083003/

jf...@chromium.org

unread,
Feb 11, 2016, 1:27:41 PM2/11/16
to emi...@chromium.org, miu+re...@chromium.org, mca...@chromium.org, pos...@chromium.org, hb...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc
File content/common/gpu/media/vt_video_encode_accelerator_mac.cc
(right):

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode126
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:126:
bitrate_ / kBitsPerByte));
On 2016/02/11 09:22:09, emircan wrote:
> On 2016/02/10 23:15:34, jfroy wrote:
> > On 2016/02/10 21:04:56, miu wrote:
> > > Sanity-check: Is the bitrate_ setting a "max bitrate" or a "target
bitrate".
> 
> > If
> > > it's a target, there will be some variance where some frames take
more or
> > fewer
> > > bytes than the average; and so you'd want the value here to be
some
> multiple. 
> > > If it's a max, do we know the encoder absolutely cannot exceed the
max
> bitrate
> > > in certain edge-case circumstances?
> > 
> > Myself and lite@ did extensive tests of the iOS encoder's bitrate
control. I
> > can't really speak to how VT will behave on OS X using QuickSync,
but...
> > 
> > The AverageBitRate property (currently used by this implementation)
really
> means
> > an average. The observed bitrate over a 1-5 seconds window was
observed as
> much
> > as 200% higher than this average. This is especially true when the
encoder is
> > configured for real-time use.
> > 
> > The DataRateLimits property (see crbug.com/425352) is better at
setting the
> > maximum bitrate, within about 10% over a 1-5 seconds window.
> 
> I will go with jfroy@ suggestions. I will remove the use of 
> kOutputBufferSizeRatio. Instead, I will define an average_bitrate_ and
> max_bitrate_ params. I will take client input as average_bitrate_, and
calculate
> max_bitrate_ via a constant. WDYT?

I would really run a few experiments to see how your rate control is
working. VT tries to be a uniform interface over widely different
encoders, so those parameters might work in a surprising and different
way than on iOS.

For what it's worth, in cast we assume the bitrate specified by the
client (i.e. the target rate calculated by cast's rate control
algorithm) is where it wants the encoder to go and we set both the
average bit rate and target rate parameters to the same value (after
converting to the right units -- the target data rate takes bytes, not
bits).

https://codereview.chromium.org/1636083003/

m...@chromium.org

unread,
Feb 11, 2016, 4:09:41 PM2/11/16
to emi...@chromium.org, mca...@chromium.org, pos...@chromium.org, hb...@chromium.org, jf...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org, jf...@chromium.org
:
INSTANTIATE_TEST_CASE_P(
On 2016/02/11 09:22:09, emircan wrote:
> On 2016/02/10 21:04:56, miu wrote:
> > BTW--Will these tests run in the waterfall?  IIRC, this module's
tests are
> > currently only configured to run on ChromeOS bots.
> 
> I changed content/content_tests.gypi	to build this test for mac as
well. Would
> it be picked up as a target? If not, how should I add it to waterfall?

I'm not sure about this one.  I think it has dependencies on downloading
videos from external sources.  Pawel would know.

On 2016/02/11 09:22:09, emircan wrote:
> I will go with jfroy@ suggestions. I will remove the use of
> kOutputBufferSizeRatio. Instead, I will define an average_bitrate_ and
> max_bitrate_ params. I will take client input as average_bitrate_, and
calculate
> max_bitrate_ via a constant. WDYT?

I can't remember whether the VEA interface treats bitrate as a target or
a max.  Pawel would know for sure.

https://codereview.chromium.org/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode297
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:297: return;
On 2016/02/11 09:22:10, emircan wrote:
> On 2016/02/10 21:04:56, miu wrote:
> > Same problem w.r.t. early return.  The client should get a
> > BitstreamBufferReady() call with zero-byte payload so it knows which
> Encode()'s
> > failed.
> 
> I do not have a BitstreamBuffer to return at this case. This is
Encoder()
> followed by UseBitstreamBuffer() case that posciak@ mentioned. If I
return
> buffer->id=-1, this is same as kPlatformFailureError. So instead, I
wait for a
> buffer to arrive first.

Acknowledged.  My misunderstanding.

: class AnnexBBuffer {
On 2016/02/11 09:22:10, emircan wrote:
> On 2016/02/10 21:04:56, miu wrote:
> > If you're going to go this route, you should split this class up
rather than
> > mash everything together with switch statements.  Meaning, something
like:
> > 
> >   class AnnexBBuffer {
> >     virtual bool Reserve(size_t size) = 0;
> >     virtual void Append(const char* s, size_t n) = 0;
> >     virtual size_t GetReservedSize() const = 0;
> >   };
> > 
> >   class StringAnnexBBuffer : public AnnexBBuffer {
> >     ...method overrides for std::string storage...
> >   };
> > 
> >   class RawAnnexBBuffer : public AnnexBBuffer {
> >     ...method overrides for output directly to memory region...
> >   };
> > 
> > ...but I'm not sure you need to handle anything other than the
raw-pointer +
> > size case, per my comment above.
> 
> I am taking your first suggestion into account, but what would be the
ideal
> kMaxAnnexBSizeInBytes? 100MB?

It looks like the code at the top of CopySampleBufferToAnnexBBuffer()
computes this (as |total_bytes|).  Seems to me you could just break this
code out into a separate ComputeAnnexBBufferSize(CMSampleBufferRef)
function and now you only need a single memory allocation.  :)

Also, CopySampleBufferToAnnexBBuffer() doesn't use |total_bytes| for
anything other than to confirm its output buffer is big enough. 
Therefore, it should be reasonable for it to just assume the client
provided a big enough buffer as a precondition.  Client code might look
like:

  const size_t annex_b_size = ComputeAnnexBBufferSize(sbuf, key_frame);
  scoped_ptr<uint8_t[]> annex_b_buffer(new uint8_t[annex_b_size]);
  // OR: annex_b_buffer_as_string.resize(annex_b_size, '\0');
  CopySampleBufferToAnnexBBuffer(sbuf, annex_b_buffer.get(), key_frame);

https://codereview.chromium.org/1636083003/diff/240001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc
File content/common/gpu/media/vt_video_encode_accelerator_mac.cc
(right):

https://codereview.chromium.org/1636083003/diff/240001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode310
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:310: if (info & VideoToolboxGlue::kVTEncodeInfo_FrameDropped) {
nit: Add DCHECK(thread_checker_...);

https://codereview.chromium.org/1636083003/diff/240001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode419
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:419:
session_property_setter_ = media::video_toolbox::SessionPropertySetter(
Could you explain why you're trying to keep the
|session_property_setter_| class member variable?  The point I've been
trying to make is that you do not need to share this across two
different methods.  Instead, just create it *separately* wherever you
need it (as a short-lived temporary object); just like you did in the
cast h264_vt_encoder.cc code.

Yes, it will be created each time either of the methods is called. 
However, that is actually desirable: You save memory by not storing it
in the VTVEA object long-term, and the optimizing compiler can eliminate
many of the overhead costs with constructing/destroying it when it is a
local object.

https://codereview.chromium.org/1636083003/diff/240001/media/base/mac/videotoolbox_helpers.h
File media/base/mac/videotoolbox_helpers.h (right):

https://codereview.chromium.org/1636083003/diff/240001/media/base/mac/videotoolbox_helpers.h#newcode45
media/base/mac/videotoolbox_helpers.h:45: SessionPropertySetter();
Please remove the zero-arg and copy ctors (see comments in other file).

https://codereview.chromium.org/1636083003/

emi...@chromium.org

unread,
Feb 11, 2016, 10:40:57 PM2/11/16
to miu+re...@chromium.org, mca...@chromium.org, pos...@chromium.org, hb...@chromium.org, jf...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org, jf...@chromium.org
: bitrate_ / kBitsPerByte));
On 2016/02/11 21:09:40, miu wrote:
> On 2016/02/11 09:22:09, emircan wrote:
> > On 2016/02/10 23:15:34, jfroy wrote:
> > > On 2016/02/10 21:04:56, miu wrote:
> > > > Sanity-check: Is the bitrate_ setting a "max bitrate" or a "target
> > > > bitrate". If it's a target, there will be some variance where some
> > > > frames take more or fewer bytes than the average; and so you'd
> > > > want the value here to be some multiple. If it's a max, do we know
> > > > the encoder absolutely cannot exceed the max bitrate in certain
> > > > edge-case circumstances?
> > > 
> > > Myself and lite@ did extensive tests of the iOS encoder's bitrate
> > > control. I can't really speak to how VT will behave on OS X using
> > > QuickSync, but...
> > > 
> > > The AverageBitRate property (currently used by this implementation)
> > > really means an average. The observed bitrate over a 1-5 second
> > > window was as much as 200% higher than this average. This is
> > > especially true when the encoder is configured for real-time use.
> > > 
> > > The DataRateLimits property (see crbug.com/425352) is better at
> > > setting the maximum bitrate, within about 10% over a 1-5 second
> > > window.
> > 
> > I will go with jfroy@'s suggestions. I will remove the use of
> > kOutputBufferSizeRatio. Instead, I will define average_bitrate_ and
> > max_bitrate_ params. I will take the client input as average_bitrate_,
> > and calculate max_bitrate_ via a constant. WDYT?
> 
> I can't remember whether the VEA interface treats bitrate as a target or
> a max.  Pawel would know for sure.

I did some further investigation as jfroy@ suggested. I found that when
the HW encoder is actually used (see CreateCompressionSession()), setting
the kVTCompressionPropertyKey_DataRateLimits parameter ends up giving an
error: "GVA encoder warning: CFDataRateLimitType = 3
(DataRateLimitsSeconds)". Changing the parameter doesn't make any
difference at that point. Googling CFDataRateLimitType returns only a
single result, which isn't helpful, so I cannot rely on the param as a
max_bitrate_.

kVTCompressionPropertyKey_AverageBitRate is the only reliable one.
Therefore, I will use the given bitrate_ input there as the average, and
on kVTCompressionPropertyKey_DataRateLimits when it is available. Also, I
will decide the BitstreamBuffer size based on (bitrate/8) rather than
using other subjective constants. It is big enough to cover most of the
output cases. There seems to be a frame once in ~5 seconds that is more
than double the buffer size.

: class AnnexBBuffer {
On 2016/02/11 21:09:40, miu wrote:
> On 2016/02/11 09:22:10, emircan wrote:
> > On 2016/02/10 21:04:56, miu wrote:
> > > If you're going to go this route, you should split this class up
> > > rather than mash everything together with switch statements.
> > > Meaning, something like:
> > > 
> > >   class AnnexBBuffer {
> > >     virtual bool Reserve(size_t size) = 0;
> > >     virtual void Append(const char* s, size_t n) = 0;
> > >     virtual size_t GetReservedSize() const = 0;
> > >   };
> > > 
> > >   class StringAnnexBBuffer : public AnnexBBuffer {
> > >     ...method overrides for std::string storage...
> > >   };
> > > 
> > >   class RawAnnexBBuffer : public AnnexBBuffer {
> > >     ...method overrides for output directly to memory region...
> > >   };
> > > 
> > > ...but I'm not sure you need to handle anything other than the
> > > raw-pointer + size case, per my comment above.
> > 
> > I am taking your first suggestion into account, but what would be the
> > ideal kMaxAnnexBSizeInBytes? 100MB?
> 
> It looks like the code at the top of CopySampleBufferToAnnexBBuffer()
> computes this (as |total_bytes|).  Seems to me you could just break
> this code out into a separate ComputeAnnexBBufferSize(CMSampleBufferRef)
> function and now you only need a single memory allocation.  :)
> 
> Also, CopySampleBufferToAnnexBBuffer() doesn't use |total_bytes| for
> anything other than to confirm its output buffer is big enough.
> Therefore, it should be reasonable for it to just assume the client
> provided a big enough buffer as a precondition.  Client code might look
> like:
> 
>   const size_t annex_b_size = ComputeAnnexBBufferSize(sbuf, key_frame);
>   scoped_ptr<uint8_t[]> annex_b_buffer(new uint8_t[annex_b_size]);
>   // OR: annex_b_buffer_as_string.resize(annex_b_size, '\0');
>   CopySampleBufferToAnnexBBuffer(sbuf, annex_b_buffer.get(), key_frame);

There are some parameters, like |bb|, |pset_count|, and
|nal_size_field_bytes|, that would need to be calculated each time in
these functions, or passed from one to another. I feel like the interface
with StringAnnexBBuffer/RawAnnexBBuffer is the tidiest solution to all
this.

: if (info & VideoToolboxGlue::kVTEncodeInfo_FrameDropped) {
On 2016/02/11 21:09:40, miu wrote:
> nit: Add DCHECK(thread_checker_...);

Done.

https://codereview.chromium.org/1636083003/diff/240001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode419
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:419:
session_property_setter_ = media::video_toolbox::SessionPropertySetter(
On 2016/02/11 21:09:40, miu wrote:
> Could you explain why you're trying to keep the
> |session_property_setter_| class member variable?  The point I've been
> trying to make is that you do not need to share this across two
> different methods.  Instead, just create it *separately* wherever you
> need it (as a short-lived temporary object); just like you did in the
> cast h264_vt_encoder.cc code.
> 
> Yes, it will be created each time either of the methods is called.
> However, that is actually desirable: You save memory by not storing it
> in the VTVEA object long-term, and the optimizing compiler can eliminate
> many of the overhead costs with constructing/destroying it when it is a
> local object.

I am changing it as you suggest.

I understand your points, and I had them in mind when keeping it as a
class member :) It is a tradeoff. RequestEncodingParametersChange() is
called many times during an average call, almost every other second, and
we access two class members to construct the session property setter. I
think accessing just the |session_property_setter_| class member and
using it would be cheaper, as it is small: it has just two member
pointers.

: SessionPropertySetter();
On 2016/02/11 21:09:40, miu wrote:
> Please remove the zero-arg and copy ctors (see comments in other file).

Done.

https://codereview.chromium.org/1636083003/

pos...@chromium.org

unread,
Feb 18, 2016, 6:16:15 AM2/18/16
to emi...@chromium.org, miu+re...@chromium.org, mca...@chromium.org, hb...@chromium.org, jf...@chromium.org, chromium...@chromium.org, poscia...@chromium.org, j...@chromium.org, mcasas...@chromium.org, feature-me...@chromium.org, dari...@chromium.org, piman...@chromium.org, jf...@chromium.org

https://chromiumcodereview.appspot.com/1636083003/diff/140001/content/common/gpu/media/video_encode_accelerator_unittest.cc
File content/common/gpu/media/video_encode_accelerator_unittest.cc
(right):

https://chromiumcodereview.appspot.com/1636083003/diff/140001/content/common/gpu/media/video_encode_accelerator_unittest.cc#newcode657
content/common/gpu/media/video_encode_accelerator_unittest.cc:657:
decoder_(new media::FFmpegVideoDecoder()),
On 2016/02/08 23:41:23, emircan wrote:
> On 2016/02/08 04:33:42, Pawel Osciak wrote:
> > Did you mean ffmpeg was just not compiled/working on Mac at all, or
> > that it would fail for some other reason? I thought ffmpeg worked on
> > Mac...
> 
> It points to this line when built:
> https://code.google.com/p/chromium/codesearch#chromium/src/content/common/gpu/media/video_encode_accelerator_unittest.cc&l=696
> Do you know if it is still the case?

Are you building and testing on Chromium, instead of Chrome?

https://chromiumcodereview.appspot.com/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc
File content/common/gpu/media/vt_video_encode_accelerator.cc (right):

https://chromiumcodereview.appspot.com/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode238
content/common/gpu/media/vt_video_encode_accelerator.cc:238: // This
method is NOT called on |client_task_runner_|, so we still need to
On 2016/02/08 23:41:23, emircan wrote:
> On 2016/02/08 04:33:42, Pawel Osciak wrote:
> > Uhh, if so, I don't think we can access encoder at all, including
> > members like encoder_output_queue_. Aren't we pushing to
> > encoder_output_queue_ on client_task_runner_ for example?
> > 
> > If that's the case, could we trampoline from this method at the
> > beginning to client_task_runner_ to actually handle things there?
> 
> I am posting a task as you suggest.
> 
> But why wouldn't we access encoder at all? Accessing client_task_runner
> through encoder is also done in the cast code:
> https://code.google.com/p/chromium/codesearch#chromium/src/media/cast/sender/h264_vt_encoder.cc&l=696
> If your concern is regarding thread safety of std::deque, I checked it
> prior and found mixed answers regarding it.

Concurrent read-only calls to containers are ok, but we are not making
only read-only calls. Also, even if each method of the container were
atomic (like empty(), push_back(), etc.), the container wouldn't be
locked across calls. So, for example, checking !empty() and then
depending on the queue staying non-empty to be able to safely call pop()
would not work without explicit locking if another thread could be using
the container in parallel.

https://chromiumcodereview.appspot.com/1636083003/diff/140001/content/common/gpu/media/vt_video_encode_accelerator.cc#newcode308
content/common/gpu/media/vt_video_encode_accelerator.cc:308:
videotoolbox_glue_->kVTProfileLevel_H264_Baseline_AutoLevel());
On 2016/02/08 23:41:24, emircan wrote:
> On 2016/02/08 04:33:42, Pawel Osciak wrote:
> > Is baseline preferred by us? Normally constrained baseline is more
> > compatible with various devices (unless this actually is constrained).
> > Main may also be better in general.
> 
> I discussed this with WebRTC people. I learned that Baseline is the only
> one that openh264 supports (the SW case) and that is compatible with
> Firefox as well. I am planning to support only Baseline in this CL. I
> can add other profiles after further discussion, and it wouldn't be
> hard.

Are we sure that was Baseline, and not Constrained Baseline, though? A
good number of HW decoders don't support Baseline, as it requires
additional features.

https://chromiumcodereview.appspot.com/1636083003/diff/220001/content/common/gpu/media/video_encode_accelerator_unittest.cc
File content/common/gpu/media/video_encode_accelerator_unittest.cc
(right):

https://chromiumcodereview.appspot.com/1636083003/diff/220001/content/common/gpu/media/video_encode_accelerator_unittest.cc#newcode1714
content/common/gpu/media/video_encode_accelerator_unittest.cc:1714:
INSTANTIATE_TEST_CASE_P(
On 2016/02/11 21:09:40, miu wrote:
> On 2016/02/11 09:22:09, emircan wrote:
> > On 2016/02/10 21:04:56, miu wrote:
> > > BTW--Will these tests run in the waterfall?  IIRC, this module's
> > > tests are currently only configured to run on ChromeOS bots.
> > 
> > I changed content/content_tests.gypi to build this test for mac as
> > well. Would it be picked up as a target? If not, how should I add it
> > to the waterfall?
> 
> I'm not sure about this one.  I think it has dependencies on
> downloading videos from external sources.  Pawel would know.

We have a short raw stream in Chromium tree that can be used for basic
sanity testing: media/test/data/bear_320x192_40frames.yuv, but it's not
good for bitrate tests. It's already here in the code at the top of this
file (g_default_in_filename).

So if you run this test as is on the bots, the
"media::GetTestDataFilePath(content::g_default_in_filename)" line below
will pick it up automagically.

Locally you have to make sure
https://code.google.com/p/chromium/codesearch#chromium/src/media/base/test_data_util.cc&rcl=1455755384&l=20
resolves to where the file is (using DIR_SOURCE_ROOT for example).

https://chromiumcodereview.appspot.com/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc
File content/common/gpu/media/vt_video_encode_accelerator_mac.cc
(right):

https://chromiumcodereview.appspot.com/1636083003/diff/220001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode126
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:126:
bitrate_ / kBitsPerByte));
On 2016/02/11 21:09:40, miu wrote:
> On 2016/02/11 09:22:09, emircan wrote:
> > On 2016/02/10 23:15:34, jfroy wrote:
> > > On 2016/02/10 21:04:56, miu wrote:
> > > > Sanity-check: Is the bitrate_ setting a "max bitrate" or a "target
> > > > bitrate". If it's a target, there will be some variance where some
> > > > frames take more or fewer bytes than the average; and so you'd
> > > > want the value here to be some multiple. If it's a max, do we know
> > > > the encoder absolutely cannot exceed the max bitrate in certain
> > > > edge-case circumstances?
> > > 
> > > Myself and lite@ did extensive tests of the iOS encoder's bitrate
> > > control. I can't really speak to how VT will behave on OS X using
> > > QuickSync, but...
> > > 
> > > The AverageBitRate property (currently used by this implementation)
> > > really means an average. The observed bitrate over a 1-5 second
> > > window was as much as 200% higher than this average. This is
> > > especially true when the encoder is configured for real-time use.
> > > 
> > > The DataRateLimits property (see crbug.com/425352) is better at
> > > setting the maximum bitrate, within about 10% over a 1-5 second
> > > window.
> > 
> > I will go with jfroy@'s suggestions. I will remove the use of
> > kOutputBufferSizeRatio. Instead, I will define average_bitrate_ and
> > max_bitrate_ params. I will take the client input as average_bitrate_,
> > and calculate max_bitrate_ via a constant. WDYT?
> 
> I can't remember whether the VEA interface treats bitrate as a target or
> a max.  Pawel would know for sure.

It's not clearly specified in the docs, but it's expected to be the
target bitrate.

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc
File content/common/gpu/media/vt_video_encode_accelerator_mac.cc
(right):

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode151
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:151:
scoped_ptr<InProgressFrameEncode> request(new InProgressFrameEncode(
Nit: please move above l.163.

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode238
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:238:
set_data_rate_limit_ &= session_property_setter.SetSessionProperty(
I believe this would have the same effect?

if (s_d_r_l_) {
  s_d_r_l_ = SSP();
}

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode274
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:274:
encoder->client_task_runner_->PostTask(
Do we need to do anything in this method at all? Could we just post to
the task and do error checks there?

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode288
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:288:
base::Unretained(encoder), info, sbuf));
Please add a comment why it's safe to do Unretained.

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode296
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:296: // If
there isn't any BitstreamBuffer to copy into, add it to a queue for
Could we perhaps have a TryReturnBitstreamBuffer() which would do this
also, instead of duplicating this code here and in
UseOutputBitstreamBuffer() please?

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode348
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:348: // Keep
these in-sync with those in ConfigureCompressionSession().
These keys below seem different from the ones in CCS()...? What does it
mean to keep them in sync?

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode370
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:370: // Try
creating session again without forcing HW encode.
Are we interested in doing SW encode here at all? It would be preferable
to do it outside of GPU process then...

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.cc#newcode415
content/common/gpu/media/vt_video_encode_accelerator_mac.cc:415:
reinterpret_cast<void*>(this),
Can we in any way guarantee CompressionCallback caller doesn't outlive
this?

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.h
File content/common/gpu/media/vt_video_encode_accelerator_mac.h (right):

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.h#newcode18
content/common/gpu/media/vt_video_encode_accelerator_mac.h:18: // safe,
so this object is pinned to the thread on which it is constructed.
We should always avoid doing things on GPU Child thread unless we have
no other choice (e.g. need GL context), because this is a
performance-critical thread.

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.h#newcode82
content/common/gpu/media/vt_video_encode_accelerator_mac.h:82: //
Destroy the current compression session if any. Blocks until all pending
This sounds especially concerning if blocking GPU Child.

https://chromiumcodereview.appspot.com/1636083003/diff/260001/content/common/gpu/media/vt_video_encode_accelerator_mac.h#newcode95
content/common/gpu/media/vt_video_encode_accelerator_mac.h:95: bool
set_data_rate_limit_;
Please document this field.

https://chromiumcodereview.appspot.com/1636083003/