Hi again,
after dealing with the udp sample and trying to use it in my project, I have a couple of doubts. I'll post the code so we can see what we are dealing with:
Encoding step:
Initialization of the encoder (the same as in the udp sample):
vpx_codec_enc_config_default(&vpx_codec_vp8_cx_algo, &cfgEncoder, 0);
cfgEncoder.rc_target_bitrate = video_bitrate;
cfgEncoder.g_w = _width * 2;//we want to compress the side by side frame
cfgEncoder.g_h = _height;
cfgEncoder.g_timebase.num = 1;
cfgEncoder.g_timebase.den = (int) 10000000;
cfgEncoder.rc_end_usage = VPX_CBR;
cfgEncoder.g_pass = VPX_RC_ONE_PASS;
cfgEncoder.g_lag_in_frames = 0;
cfgEncoder.rc_min_quantizer = 20;
cfgEncoder.rc_max_quantizer = 50;
cfgEncoder.rc_dropframe_thresh = 1;
cfgEncoder.rc_buf_optimal_sz = 1000;
cfgEncoder.rc_buf_initial_sz = 1000;
cfgEncoder.rc_buf_sz = 1000;
cfgEncoder.g_error_resilient = 1;
cfgEncoder.kf_mode = VPX_KF_DISABLED;
cfgEncoder.kf_max_dist = 999999;
cfgEncoder.g_threads = 1;
vpx_img_alloc(&raw, VPX_IMG_FMT_I420, _width * 2, _height, 1);
cfgEncoder.rc_target_bitrate = video_bitrate;
vpx_codec_enc_init(&encoder, &vpx_codec_vp8_cx_algo, &cfgEncoder, 0);
//vpx_codec_control_(&encoder, VP8E_SET_CPUUSED, cpu_used);
int static_threshold = 1200;
vpx_codec_control_(&encoder, VP8E_SET_STATIC_THRESHOLD, static_threshold);
vpx_codec_control_(&encoder, VP8E_SET_ENABLEAUTOALTREF, 0);
NOTE: the image I want to encode comes from two webcams, each capturing at 1280 * 720. I then concatenate the two separate images into one large side-by-side image of 2560 * 720 (see the attached one).
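For completeness, the concatenation itself is nothing special; it is roughly like the sketch below (simplified, using the OpenCV C API; "left", "right" and "makeSideBySide" are just placeholder names for my two 1280 * 720 BGR captures and the helper):
#include <opencv2/core/core_c.h>
/* Simplified sketch: build the 2560x720 side-by-side frame from the two
   1280x720 BGR captures. */
IplImage *makeSideBySide(const IplImage *left, const IplImage *right,
                         int _width, int _height)
{
    IplImage *sbs = cvCreateImage(cvSize(_width * 2, _height), IPL_DEPTH_8U, 3);
    cvSetImageROI(sbs, cvRect(0, 0, _width, _height));
    cvCopy(left, sbs, NULL);                  /* left camera into the left half  */
    cvSetImageROI(sbs, cvRect(_width, 0, _width, _height));
    cvCopy(right, sbs, NULL);                 /* right camera into the right half */
    cvResetImageROI(sbs);                     /* caller releases sbs with cvReleaseImage */
    return sbs;
}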
Encoding the frame:
RGBtoYUV420PSameSize((unsigned char*)img->imageData,raw.img_data,3,0,cfgEncoder.g_w,cfgEncoder.g_h);
//note: copied from http://comserver.googlecode.com/svn/trunk/demo/comDemo/coder/vp8.cpp
buffer_time = get_time() / 1000.000;
long long time_in_nano_seconds = (long long)(buffer_time * 10000000.000 + .5); // pts in g_timebase units (1/10000000 s)
const vpx_codec_cx_pkt_t *pkt;
vpx_codec_iter_t iter = NULL;
flags = 0;
vpx_codec_err_t error = vpx_codec_encode(&encoder, &raw, time_in_nano_seconds, 30000000, flags, VPX_DL_REALTIME);
while ((pkt = vpx_codec_get_cx_data(&encoder, &iter)) != NULL) {
    if (pkt->kind == VPX_CODEC_CX_FRAME_PKT) // a dropped frame produces no frame packet
        memcpy(_cpCompressedData, pkt->data.frame.buf, pkt->data.frame.sz);
}
Decoding step:
Initialization of the decoder:
vp8_postproc_cfg_t ppcfg;
vpx_codec_dec_init(&decoder, &vpx_codec_vp8_dx_algo, &cfgDecoder, 0);
/* Config post processing settings for decoder */
ppcfg.post_proc_flag = VP8_DEMACROBLOCK | VP8_DEBLOCK | VP8_ADDNOISE;
ppcfg.deblocking_level = 5 ;
ppcfg.noise_level = 1 ;
vpx_codec_control(&decoder, VP8_SET_POSTPROC, &ppcfg);
Decoding the frame:
vpx_codec_iter_t iter = NULL;
vpx_image_t *img;
if (vpx_codec_decode(&decoder, _cpCompressedData, packetSize, 0, 0))
{
return -1;
}
//img = vpx_codec_get_frame(&decoder, &iter);
FILE * outfile = fopen("E:\\testImage.yuv", "wb");
while ((img = vpx_codec_get_frame(&decoder, &iter))) {
    unsigned int plane, y;
    for (plane = 0; plane < 3; plane++) {
        unsigned char *buf = img->planes[plane];
        for (y = 0; y < (plane ? (img->d_h + 1) >> 1 : img->d_h); y++) {
            fwrite(buf, 1, (plane ? (img->d_w + 1) >> 1 : img->d_w), outfile);
            buf += img->stride[plane];
        }
    }
}
fclose (outfile);
//note: the saved file is the .yuv attached.
So, what I need is, after the decoding step, to show this image on the screen, for example with OpenCV. I tried several ways to convert from I420 (when I check, the decoded image format is that one) to RGB, but I can't find a good way to do it. Is there any function or method to convert the decoded frame to an RGB image like the original one? Did I make any mistake in the configuration of the encoder/decoder? Is there a better way to do this? Is there a sample that decodes a frame and writes it as an RGB image?
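To show what I mean, this is the kind of thing I have been attempting (only a rough sketch: it assumes OpenCV 2.4 with the CV_YUV2BGR_I420 conversion available in cvtColor, and showDecodedFrame is just a placeholder name):
#include <string.h>
#include <opencv2/opencv.hpp>
#include "vpx/vpx_decoder.h"
/* Rough sketch: pack the decoded planes into one contiguous I420 buffer
   (libvpx keeps padding, so rows must be copied stride by stride) and
   let OpenCV do the YUV -> BGR conversion for display. */
void showDecodedFrame(const vpx_image_t *img)
{
    const int w = img->d_w;
    const int h = img->d_h;
    cv::Mat i420(h + h / 2, w, CV_8UC1);      /* w x (h * 3/2) single-channel buffer */
    unsigned char *dst = i420.data;
    for (int plane = 0; plane < 3; plane++) {
        const unsigned char *src = img->planes[plane];
        const int plane_w = plane ? (w + 1) / 2 : w;
        const int plane_h = plane ? (h + 1) / 2 : h;
        for (int y = 0; y < plane_h; y++) {
            memcpy(dst, src, plane_w);
            dst += plane_w;
            src += img->stride[plane];
        }
    }
    cv::Mat bgr;
    cv::cvtColor(i420, bgr, CV_YUV2BGR_I420); /* I420 -> BGR */
    cv::imshow("decoded", bgr);
    cv::waitKey(1);
}
I am not sure the plane/stride handling above is right, which is why I would love to see an official sample that goes from a decoded frame to an RGB image.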
thanks for the help,
On Tuesday, 5 June 2012 10:22:28 UTC+2, Spun wrote: