Problem with cricket::VideoFrame


yihungbakj hung

20 Mar 2018, 4:08:46
to discuss-webrtc
Hi All

I'm currently trying to port my code to a newer version of the library (M59).
The biggest obstacle seems to be that cricket::CapturedFrame was removed in commit
f5297a019ea6de709a2bb94793d637bb44538931 (https://codereview.webrtc.org/2262443003/).
I can't find any discussion of what replaces it.
Could you point me in the right direction?

Thank you.

yh

yihungbakj hung

20 Mar 2018, 5:14:26
to discuss-webrtc
Here is my code:

bool QueueVideoCapturer::CaptureCustomFrame(const shared_ptr<CaptureFrame>& cf) {
  if (!running_) {
    return false;
  }

  int width = GetCaptureFormat()->width;
  int height = GetCaptureFormat()->height;
  int fourcc = GetCaptureFormat()->fourcc;

  if (cf) {
    if (start_rtc_timestamp == -1) {
      start_rtc_timestamp = cf->rtc_timestamp;
    }

    // Currently, |fourcc| is always I420 or ARGB.
    // TODO(fbarchard): Extend SizeOf to take fourcc.
    uint32_t size = 0u;
    if (fourcc == cricket::FOURCC_ARGB) {
      size = width * 4 * height;
    } else if (fourcc == cricket::FOURCC_I420) {
      size = FrameSizeOf(width, height);
    } else {
      return false;  // Unsupported FOURCC.
    }
    if (size == 0u) {
      return false;  // Width and/or height were zero.
    }

    cricket::CapturedFrame frame;
    frame.width = cf->width;
    frame.height = cf->height;
    frame.fourcc = fourcc;
    frame.data_size = cf->frame_size;
    // frame.elapsed_time = (cf->rtc_timestamp - start_rtc_timestamp) * 1000;
    frame.time_stamp = cf->rtc_timestamp * 1000;  // was webrtc::TickTime::MicrosecondTimestamp() * 1000
    frame.data = cf->frame.get_data();
    SignalFrameCaptured(this, &frame);
  }
  return true;
}

On Tuesday, March 20, 2018 at 4:08:46 PM UTC+8, yihungbakj hung wrote:

Niels Moller

22 Mar 2018, 5:47:38
to discuss...@googlegroups.com
Please see this thread:
https://groups.google.com/forum/#!topic/discuss-webrtc/37TSGD6JXuM

Regards,
/Niels

yihungbakj hung

22 Mar 2018, 23:11:56
to discuss-webrtc
Hi Niels

Thank you for your reply. I have another question I would like to ask.

How do I replace cricket::CapturedFrame with webrtc::VideoFrame (is there a replacement function)?

Thank you.

yh

On Thursday, March 22, 2018 at 5:47:38 PM UTC+8, Niels Moller wrote:

Niels Moller

23 Mar 2018, 6:09:00
to discuss...@googlegroups.com
On Fri, Mar 23, 2018 at 4:11 AM, yihungbakj hung <yihun...@gmail.com> wrote:
> Hi Niels
>
> Thank you for your reply. I have another question I would like to ask.
>
> How do I replace cricket::CapturedFrame with webrtc::VideoFrame (is there a
> replacement function)?

If you're looking for a concrete class to store the pixel data, use
I420Buffer (api/video/i420_buffer.h), then use this together with
timestamps and other metadata to construct a VideoFrame. If you need
to convert from non-I420, there are libyuv functions for that.
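
Roughly, a minimal (untested) sketch of that, assuming ARGB input; the
helper name MakeFrameFromArgb is made up:

#include "api/video/i420_buffer.h"
#include "api/video/video_frame.h"
#include "libyuv/convert.h"

webrtc::VideoFrame MakeFrameFromArgb(const uint8_t* src, int width,
                                     int height, int64_t timestamp_us) {
  rtc::scoped_refptr<webrtc::I420Buffer> buffer =
      webrtc::I420Buffer::Create(width, height);
  // ARGB is 4 bytes per pixel, so the source stride is width * 4.
  libyuv::ARGBToI420(src, width * 4,
                     buffer->MutableDataY(), buffer->StrideY(),
                     buffer->MutableDataU(), buffer->StrideU(),
                     buffer->MutableDataV(), buffer->StrideV(),
                     width, height);
  // Three-argument constructor (buffer, rotation, timestamp_us), as in
  // the M59-era API.
  return webrtc::VideoFrame(buffer, webrtc::kVideoRotation_0, timestamp_us);
}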

Regards,
/Niels

yihungbakj hung

25 Mar 2018, 21:14:41
to discuss-webrtc
Hi Niels

I still don't understand how to replace cricket::CapturedFrame with webrtc::VideoFrame.
Do you have any example code?

Thank you.

yh

On Friday, March 23, 2018 at 6:09:00 PM UTC+8, Niels Moller wrote:

Niels Moller

26 Mar 2018, 9:36:07
to discuss...@googlegroups.com
On Mon, Mar 26, 2018 at 3:14 AM, yihungbakj hung <yihun...@gmail.com> wrote:
> Hi Niels
>
> I still don't understand how to replace cricket::CapturedFrame with
> webrtc::VideoFrame.
> Do you have any example code?

Look for classes inheriting VideoTrackSource or
AdaptedVideoTrackSource. One of the first adopters was the Android
capture code; see android/src/jni/androidvideotracksource.cc in the
webrtc tree.

The method OnByteBufferFrameCaptured is called when frames in NV21
format arrive from the camera. The method allocates an I420Buffer,
converts the pixels, wraps it in a VideoFrame and passes it on to
OnFrame (an inherited convenience method of AdaptedVideoTrackSource).
I would expect most custom capturers to follow a similar pattern.
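
Boiled down, the pattern is roughly this (an untested sketch; MyVideoSource
and PushNv21Frame are placeholder names, and it assumes a tightly packed
NV21 buffer with the interleaved VU plane directly after the Y plane):

#include "api/video/i420_buffer.h"
#include "api/video/video_frame.h"
#include "libyuv/convert.h"
#include "media/base/adaptedvideotracksource.h"

class MyVideoSource : public rtc::AdaptedVideoTrackSource {
 public:
  // Called from the capture thread for every NV21 frame.
  void PushNv21Frame(const uint8_t* nv21, int width, int height,
                     int64_t timestamp_us) {
    rtc::scoped_refptr<webrtc::I420Buffer> buffer =
        webrtc::I420Buffer::Create(width, height);
    const uint8_t* vu_plane = nv21 + width * height;  // interleaved VU
    libyuv::NV21ToI420(nv21, width,
                       vu_plane, width,
                       buffer->MutableDataY(), buffer->StrideY(),
                       buffer->MutableDataU(), buffer->StrideU(),
                       buffer->MutableDataV(), buffer->StrideV(),
                       width, height);
    // OnFrame is the inherited convenience method mentioned above.
    OnFrame(webrtc::VideoFrame(buffer, webrtc::kVideoRotation_0,
                               timestamp_us));
  }
  // The pure-virtual methods of the track source interface
  // (state(), remote(), is_screencast(), needs_denoising()) are omitted.
};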

yihungbakj hung

10 Apr 2018, 19:21:32
to discuss-webrtc
I followed your suggestions, but the peer connection gets disconnected and I can't solve the problem.

Could you please show me some examples?

For example:

cricket::CapturedFrame => ?
SignalFrameCaptured => ?

Thank you.

PS.

Here is my code.

bool QueueVideoCapturer::CaptureCustomFrame(const shared_ptr<CaptureFrame>& cf) {
  if (!running_) {
    return false;
  }

  int width = GetCaptureFormat()->width;
  int height = GetCaptureFormat()->height;
  int fourcc = GetCaptureFormat()->fourcc;

  if (cf) {
    if (start_rtc_timestamp == -1) {
      start_rtc_timestamp = cf->rtc_timestamp;
    }

    // Currently, |fourcc| is always I420 or ARGB.
    // TODO(fbarchard): Extend SizeOf to take fourcc.
    uint32_t size = 0u;
    if (fourcc == cricket::FOURCC_ARGB) {
      size = width * 4 * height;
    } else if (fourcc == cricket::FOURCC_I420) {
      size = FrameSizeOf(width, height);
    } else {
      return false;  // Unsupported FOURCC.
    }
    if (size == 0u) {
      return false;  // Width and/or height were zero.
    }
#if 0
    cricket::CapturedFrame frame;
    frame.width = cf->width;
    frame.height = cf->height;
    frame.fourcc = fourcc;
    frame.data_size = cf->frame_size;
    frame.time_stamp = cf->rtc_timestamp * 1000;  // was webrtc::TickTime::MicrosecondTimestamp() * 1000
    frame.data = cf->frame.get_data();
    SignalFrameCaptured(this, &frame);
#else
    rtc::scoped_refptr<webrtc::I420Buffer> buffer(
        webrtc::I420Buffer::Create(cf->width, cf->height));
    buffer->InitializeData();
    int crop_x;
    int crop_y;
    int crop_width;
    int crop_height;
    const uint8_t* y_plane = (const uint8_t*)(cf->frame.get_data());
    const uint8_t* uv_plane = y_plane + width * height;
    int uv_width = (width + 1) / 2;
    crop_x &= ~1;
    crop_y &= ~1;
    libyuv::NV12ToI420Rotate(
        y_plane + width * crop_y + crop_x, width,
        uv_plane + uv_width * crop_y + crop_x, width,
        buffer->MutableDataY(), buffer->StrideY(),
        // Swap U and V, since we have NV21, not NV12.
        buffer->MutableDataV(), buffer->StrideV(),
        buffer->MutableDataU(), buffer->StrideU(),
        crop_width, crop_height,
        static_cast<libyuv::RotationMode>(0));

    const webrtc::VideoFrame frame(buffer, webrtc::kVideoRotation_0,
                                   cf->rtc_timestamp * 1000);
    this->OnFrame(frame, cf->width, cf->height);
#endif
  }
  return true;
}

On Monday, March 26, 2018 at 9:36:07 PM UTC+8, Niels Moller wrote:

yihungbakj hung

13 Apr 2018, 3:54:19
to discuss...@googlegroups.com
I have some questions I would like to ask.

1. How do I replace "SignalFrameCaptured" with "OnFrame" in my code?

2. How do I replace "cricket::CapturedFrame" with "webrtc::VideoFrame" in my code?

Here is my code.

bool QueueVideoCapturer::CaptureCustomFrame(const shared_ptr<CaptureFrame>& cf) {
  if (!running_) {
    return false;
  }

  int width = GetCaptureFormat()->width;
  int height = GetCaptureFormat()->height;
  int fourcc = GetCaptureFormat()->fourcc;

  if (cf) {
    if (start_rtc_timestamp == -1) {
      start_rtc_timestamp = cf->rtc_timestamp;
    }

    // Currently, |fourcc| is always I420 or ARGB.
    // TODO(fbarchard): Extend SizeOf to take fourcc.
    uint32_t size = 0u;
    if (fourcc == cricket::FOURCC_ARGB) {
      size = width * 4 * height;
    } else if (fourcc == cricket::FOURCC_I420) {
      size = FrameSizeOf(width, height);
    } else {
      return false;  // Unsupported FOURCC.
    }
    if (size == 0u) {
      return false;  // Width and/or height were zero.
    }
#if 0
    cricket::CapturedFrame frame;       // <-- 1
    frame.width = cf->width;
    frame.height = cf->height;
    frame.fourcc = fourcc;
    frame.data_size = cf->frame_size;
    frame.time_stamp = cf->rtc_timestamp * 1000;  // was webrtc::TickTime::MicrosecondTimestamp() * 1000
    frame.data = cf->frame.get_data();
    SignalFrameCaptured(this, &frame);  // <-- 2
#else
    rtc::scoped_refptr<webrtc::I420Buffer> buffer(
        webrtc::I420Buffer::Create(cf->width, cf->height));
    buffer->InitializeData();
    int crop_x;
    int crop_y;
    int crop_width;
    int crop_height;
    const uint8_t* y_plane = (const uint8_t*)(cf->frame.get_data());
    const uint8_t* uv_plane = y_plane + width * height;
    int uv_width = (width + 1) / 2;
    crop_x &= ~1;
    crop_y &= ~1;
    libyuv::NV12ToI420Rotate(
        y_plane + width * crop_y + crop_x, width,
        uv_plane + uv_width * crop_y + crop_x, width,
        buffer->MutableDataY(), buffer->StrideY(),
        // Swap U and V, since we have NV21, not NV12.
        buffer->MutableDataV(), buffer->StrideV(),
        buffer->MutableDataU(), buffer->StrideU(),
        crop_width, crop_height,
        static_cast<libyuv::RotationMode>(0));

    const webrtc::VideoFrame frame(buffer, webrtc::kVideoRotation_0,
                                   cf->rtc_timestamp * 1000);
    this->OnFrame(frame, cf->width, cf->height);
#endif
  }
  return true;
}



yihungbakj hung

13 Apr 2018, 3:55:47
to discuss-webrtc
I have some questions I would like to ask. Can you help me?
On Monday, March 26, 2018 at 9:36:07 PM UTC+8, Niels Moller wrote:

Niels Moller

13 Apr 2018, 4:23:19
to discuss...@googlegroups.com
On Fri, Apr 13, 2018 at 9:54 AM, yihungbakj hung <yihun...@gmail.com> wrote:
> I have some questions I would like to ask.
>
> 1. How do I replace "SignalFrameCaptured" with "OnFrame" in my code?

You typically let your VideoSource spawn a thread reading or
generating the frames. For each frame, you are expected to call
OnFrame on all registered sinks. If, e.g., you inherit
AdaptedVideoTrackSource, it will do that for you. Otherwise, you can
use VideoBroadcaster as a member in about the same way.
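
With VideoBroadcaster, the wiring could look roughly like this (an
untested sketch; MyFrameSource and DeliverFrame are placeholder names,
and the include paths have moved around between releases):

#include "api/video/video_frame.h"
#include "media/base/videobroadcaster.h"
#include "media/base/videosourceinterface.h"

class MyFrameSource : public rtc::VideoSourceInterface<webrtc::VideoFrame> {
 public:
  void AddOrUpdateSink(rtc::VideoSinkInterface<webrtc::VideoFrame>* sink,
                       const rtc::VideoSinkWants& wants) override {
    broadcaster_.AddOrUpdateSink(sink, wants);
  }
  void RemoveSink(rtc::VideoSinkInterface<webrtc::VideoFrame>* sink) override {
    broadcaster_.RemoveSink(sink);
  }

 protected:
  // Call once per captured frame, e.g. from the capture thread; the
  // broadcaster forwards the frame to every registered sink.
  void DeliverFrame(const webrtc::VideoFrame& frame) {
    broadcaster_.OnFrame(frame);
  }

 private:
  rtc::VideoBroadcaster broadcaster_;
};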

> 2. How do I replace "cricket::CapturedFrame" with "webrtc::VideoFrame" in my
> code?

If you used cricket::CapturedFrame as a container for the pixel data,
then I420Buffer is the replacement. I think your code, which creates
an I420Buffer, fills it with the data (converting from NV21), and
then wraps it in a VideoFrame passed to OnFrame, looks right,
assuming that your input data really is NV21; otherwise you should use
some other libyuv functions to copy or convert it to I420 format.
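
If the input format varies, something like libyuv::ConvertToI420 can
dispatch on the FOURCC instead of hard-coding one conversion. Roughly
(untested; src, src_size, width and height stand in for your frame data,
the whole frame is converted without rotation, and NV21 is just the
example FOURCC):

#include "libyuv/convert.h"
#include "libyuv/video_common.h"

rtc::scoped_refptr<webrtc::I420Buffer> buffer =
    webrtc::I420Buffer::Create(width, height);
libyuv::ConvertToI420(src, src_size,
                      buffer->MutableDataY(), buffer->StrideY(),
                      buffer->MutableDataU(), buffer->StrideU(),
                      buffer->MutableDataV(), buffer->StrideV(),
                      0, 0,           // crop_x, crop_y
                      width, height,  // source dimensions
                      width, height,  // crop dimensions (full frame)
                      libyuv::kRotate0, libyuv::FOURCC_NV21);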

> // Currently, |fourcc| is always I420 or ARGB.
> // TODO(fbarchard): Extend SizeOf to take fourcc.
> uint32_t size = 0u;
> if (fourcc == cricket::FOURCC_ARGB) {
> size = width * 4 * height;
> } else if (fourcc == cricket::FOURCC_I420) {
> size = FrameSizeOf(width, height);
> } else {
> return false; // Unsupported FOURCC.
> }
> if (size == 0u) {
> return false; // Width and/or Height were zero.
> }

The size variable appears unused.

> rtc::scoped_refptr<webrtc::I420Buffer>
> buffer(webrtc::I420Buffer::Create(cf->width, cf->height));
> buffer->InitializeData();

InitializeData() is unneeded; it's a hack intended only for test code
that doesn't write any real pixel data.

> int crop_x;
> int crop_y;
> int crop_width;
> int crop_height;
> const uint8_t* y_plane = (const uint8_t*)(cf->frame.get_data());
> const uint8_t* uv_plane = y_plane + width * height;
> int uv_width = (width + 1) / 2;
> crop_x &= ~1;
> crop_y &= ~1;
> libyuv::NV12ToI420Rotate(
> y_plane + width * crop_y + crop_x, width,
> uv_plane + uv_width * crop_y + crop_x, width, buffer->MutableDataY(),
> buffer->StrideY(),
> // Swap U and V, since we have NV21, not NV12.
> buffer->MutableDataV(), buffer->StrideV(), buffer->MutableDataU(),
> buffer->StrideU(), crop_width, crop_height,
> static_cast<libyuv::RotationMode>(0));

Looks like the crop_* variables are uninitialized. You either need to
set them to desired values (and then the I420Buffer should be created
with size crop_width, crop_height), or delete them and tell
NV12ToI420Rotate to process the complete input frame.
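
For the full-frame case, the crop_* variables can simply go away.
Roughly (an untested sketch of your snippet; it also swaps the rotate
call for libyuv::NV21ToI420, which handles the U/V ordering itself):

rtc::scoped_refptr<webrtc::I420Buffer> buffer(
    webrtc::I420Buffer::Create(cf->width, cf->height));
const uint8_t* y_plane = (const uint8_t*)(cf->frame.get_data());
const uint8_t* vu_plane = y_plane + width * height;  // interleaved VU
libyuv::NV21ToI420(y_plane, width,
                   vu_plane, width,
                   buffer->MutableDataY(), buffer->StrideY(),
                   buffer->MutableDataU(), buffer->StrideU(),
                   buffer->MutableDataV(), buffer->StrideV(),
                   width, height);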

> const webrtc::VideoFrame frame(buffer, webrtc::kVideoRotation_0,
> cf->rtc_timestamp * 1000);
> this->OnFrame(frame, cf->width, cf->height);

From the signature of this call, it looks like your class is still
inheriting cricket::VideoCapturer. Don't do that; it's deprecated and
will be deleted as soon as webrtc's internal capturers are refactored
not to use it. Instead, implement VideoSourceInterface.
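
A minimal skeleton for that direction could look like this (untested;
QueueVideoSource is a placeholder name, and the exact set of
pure-virtual methods may differ between releases):

#include "media/base/adaptedvideotracksource.h"

class QueueVideoSource : public rtc::AdaptedVideoTrackSource {
 public:
  SourceState state() const override { return kLive; }
  bool remote() const override { return false; }
  bool is_screencast() const override { return false; }
  rtc::Optional<bool> needs_denoising() const override {
    return rtc::Optional<bool>(false);
  }
  // Deliver frames by calling the inherited
  // OnFrame(const webrtc::VideoFrame&) from the capture thread; note that
  // it takes only the frame, with no separate width/height arguments.
};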

Regards,
/Niels

yihungbakj hung

16 Apr 2018, 2:59:46
to discuss-webrtc
Hi Niels, thank you very much.

Here is my CaptureFrame class (note: not cricket::CapturedFrame). I want to transfer my data from CaptureFrame to webrtc::VideoFrame.

I don't know how to do that transfer. Can you give me some examples? Thank you. yh.

/*
 * tavarua_data_structure.h
 *
 *  Created on: Dec 22, 2017
 *      Author: yh
 */

#ifndef TAVARUA_DATA_STRUCTURE_H_
#define TAVARUA_DATA_STRUCTURE_H_

#ifndef __STDC_LIMIT_MACROS
#define __STDC_LIMIT_MACROS
#endif
#include <stdint.h>
#include "utils/buffer.h"
#include "utils/tqueue.h"
#include "utils/timestamp.h"

enum TAVARUA_FMT {
  TAVARUA_PIX_FMT_NONE = 0xFF00,
  TAVARUA_PIX_FMT_MJPEG,
  TAVARUA_PIX_FMT_PNG,
  TAVARUA_PIX_FMT_H264,
  TAVARUA_PIX_FMT_VP8,
  TAVARUA_PIX_FMT_VP9,
  TAVARUA_PIX_FMT_MPEG4,
  TAVARUA_PIX_FMT_YUV420P,
  TAVARUA_PIX_FMT_YUV422P,
  TAVARUA_PIX_FMT_YUYV422,
  TAVARUA_PIX_FMT_UYVY422,
  TAVARUA_PIX_FMT_BGRA,
  TAVARUA_AUDIO_PCM,
  TAVARUA_AUDIO_AAC,
  TAVARUA_AUDIO_G711,
  TAVARUA_AUDIO_OPUS
};

class CaptureFrame {
public:
  static void* operator new(size_t size) {
    return malloc(size);
  }
  static void operator delete(void* block) {
    free(block);
  }

  union {
    size_t width;
    size_t sample_rate;
  };
  union {
    size_t height;
    size_t channel;
  };
  size_t pix_fmt;
  buffer frame;       // a buffer big enough to store the frame
  size_t frame_size;  // the real size of the frame; "frame.size()" is not equal to frame_size
  // Always save the "time" variables below in microseconds, to be portable.
  int64_t timestamp;
  int64_t rtc_timestamp;

  CaptureFrame(size_t w = 0, size_t h = 0, size_t fmt = TAVARUA_PIX_FMT_NONE)
      : width(w), height(h), pix_fmt(fmt), frame(), frame_size(0),
        timestamp(us_timestamp::now()), rtc_timestamp(0) {}
  CaptureFrame(size_t w, size_t h, size_t fmt, buffer b)
      : width(w), height(h), pix_fmt(fmt), frame(b), frame_size(b.size()),
        timestamp(us_timestamp::now()), rtc_timestamp(0) {}
  CaptureFrame(size_t w, size_t h, size_t fmt, buffer b, int64_t ts, int64_t rtc_ts = 0)
      : width(w), height(h), pix_fmt(fmt), frame(b), frame_size(b.size()),
        timestamp(ts), rtc_timestamp(rtc_ts) {}

  ~CaptureFrame() {}

  bool operator==(const CaptureFrame& cf) const {
    return (width == cf.width) &&
           (height == cf.height) &&
           (pix_fmt == cf.pix_fmt) &&
           (timestamp == cf.timestamp) &&
           (frame == cf.frame);
  }

  bool operator!=(const CaptureFrame& cf) const {
    return (width != cf.width) ||
           (height != cf.height) ||
           (pix_fmt != cf.pix_fmt) ||
           (timestamp != cf.timestamp) ||
           (frame != cf.frame);
  }

  operator bool() const {
    return (pix_fmt != TAVARUA_PIX_FMT_NONE) && (frame.is_valid());
  }
  size_t size() {
    return frame.is_valid() ? frame.size() : 0;
  }
};

class CircularCaptureQueue {
protected:
  tqueue< shared_ptr<CaptureFrame> > freeCaptureFrameq;
public:
  CircularCaptureQueue() : freeCaptureFrameq(10) {}
  virtual ~CircularCaptureQueue() { freeCaptureFrameq.clear(); }

  virtual void freeFrameBuffer(shared_ptr<CaptureFrame> freeFrame) {
    if (freeFrame) freeCaptureFrameq.push_back(freeFrame);
  }

  virtual shared_ptr<CaptureFrame> getFrameBuffer(size_t size, size_t w = 0, size_t h = 0,
                                                  size_t fmt = TAVARUA_PIX_FMT_NONE,
                                                  int64_t timestamp = 0) {
    if (!freeCaptureFrameq.empty()) {
      shared_ptr<CaptureFrame> ret = freeCaptureFrameq.pop_front();
      if (ret && ret->frame.max_size() >= size) {
        ret->frame.clear();
        ret->frame.set_length(size);
        ret->width = w;
        ret->height = h;
        ret->pix_fmt = fmt;
        ret->timestamp = timestamp;
        return ret;
      }
    }

    // INFO << "No free element or size = " << size << ". Create new one, qsize = " << freeCaptureFrameq.size();
    if (timestamp == 0) timestamp = us_timestamp::now();
    return shared_ptr<CaptureFrame>(new CaptureFrame(w, h, fmt, buffer(size), timestamp));
  }
};

/* Added by Yang, 2012-03-20
 * FrameNumber is for the VIDEO ENCODER and VIDEO DECODER.
 * It implements a circular number modulo 2^23 - 1.
 *
 * Assume A and B are FrameNumbers.
 * Define:
 *   1. A+B = (A + B) % MAX_FRAME_NUMBER
 *   2. A-B = (A + MAX_FRAME_NUMBER - B) % MAX_FRAME_NUMBER
 *   3. diff(A,B) = min(A-B, B-A)
 *
 * Supports:
 *   1. int = FrameNumber + int
 *   2. FrameNumber = int
 *   3. FrameNumber A >= FrameNumber B
 *      True if B + diff(A,B) = A, else false.
 *   4. FrameNumber A <= FrameNumber B
 *      True if A + diff(A,B) = B, else false.
 */
class FrameNumber {
private:
  const static int MAX_FRAME_NUMBER = 8388607;  // 2^23 - 1

  int number;

  inline int diff(int a, int b) {
    int diff1 = minus(a, b);
    int diff2 = minus(b, a);
    return (diff1 > diff2) ? diff2 : diff1;
  }

  inline int minus(int a, int b) {
    return (a + MAX_FRAME_NUMBER - b) % MAX_FRAME_NUMBER;
  }

  inline int pulse(int a, int b) {
    return (a + b) % MAX_FRAME_NUMBER;
  }

  void setValue(int value) {
    number = value;
  }

public:
  FrameNumber(int init_value = 0) : number(init_value) {}
  ~FrameNumber() {}

  bool operator<(const FrameNumber& B) {
    int difference = diff(number, B);
    return bool(pulse(number, difference) == B);
  }

  bool operator>(const FrameNumber& B) {
    int difference = diff(number, B);
    return bool(number == pulse(B, difference));
  }

  int operator++(int /*dummy*/) {
    int original = number;
    setValue(pulse(number, 1));
    return original;
  }

  int operator-(FrameNumber B) {
    return diff(number, B.number);
  }

  int operator-(int B) {
    return diff(number, B);
  }

  int operator+(const int value) {
    return pulse(this->number, value);
  }

  void operator=(const int value) {
    setValue(value);
  }

  void operator=(const FrameNumber value) {
    setValue(value);
  }

  operator int() const {
    return number;
  }
};

class VideoEncodedChunk {
public:
  struct Header {
    int64_t timestamp;
    Header(int64_t t = 0) : timestamp(t) {}
    ~Header() {}
  } header;

  const static int headerLength = sizeof(Header);  // 64 ippc + 8 timestamp
  bool key_frame;  // is key frame
  int frame;       // frame number
  int num_encoders;
  buffer encodedFrameData;

  VideoEncodedChunk(struct Header h = Header(), bool key = false, int fn = 0, int ne = 1,
                    buffer buf = buffer())
      : header(h), key_frame(key), frame(fn), num_encoders(ne), encodedFrameData(buf) {}
  ~VideoEncodedChunk() {}

  operator bool() const { return (encodedFrameData.is_valid()); }
  int get_encoded_frame_size() { return encodedFrameData.length(); }
  int get_sended_data_len() { return encodedFrameData.length(); }
  char* get_sended_data() { return encodedFrameData.is_valid() ? encodedFrameData.get_data() : NULL; }
  bool is_valid() { return encodedFrameData.is_valid(); }
};

struct AudioEncodedChunk {
  int frame_number;
  buffer data;
  int64_t timestamp;
  AudioEncodedChunk(int _num = 0, buffer _data = buffer(), int64_t ts = 0)
      : frame_number(_num), data(_data), timestamp(ts) {}
  ~AudioEncodedChunk() {}
  operator bool() const { return data.is_valid() && (frame_number != 0); }
  bool operator!=(const AudioEncodedChunk& aec) const {
    return (aec.frame_number != frame_number) || (aec.data != data);
  }
};

struct ArchivedChunk {
  bool is_raw;
  size_t width;
  size_t height;
  int64_t timestamp;
  buffer image;

  ArchivedChunk(bool raw = false, size_t w = 0, size_t h = 0, buffer img = buffer(), int64_t ts = 0)
      : is_raw(raw), width(w), height(h), timestamp(0), image(img) {
    timestamp = (ts != 0) ? ts : us_timestamp::now();
  }

  ~ArchivedChunk() {}

  operator bool() const {
    return (image.is_valid() && width > 0 && height > 0);
  }
};


#endif /* TAVARUA_DATA_STRUCTURE_H_ */


On Friday, April 13, 2018 at 4:23:19 PM UTC+8, Niels Moller wrote: