FFMpegFrameRecorder issues when I extend video with audio tracks


Martin Fuchs

May 9, 2020, 7:08:07 PM
to javacv
Hi,

I'm playing around with FFmpegFrameRecorder in javacv, and everything works as it should when capturing the X11 or Mac screen and converting to RTMP. However, when I extend the working program with an audio track, the sound is available only at the beginning of the video, then stutters, and afterwards only the video is visible without any sound.

You'll find a sample video of the problem attached. Is there an error in my recorder parametrization (below), or any other hints on what I can do?

thx a lot, Martin




The behavior is the same whether I use nginx-rtmp or the local RTMP server on my MacBook; an upgrade from 3.5 to 4.0.x changed nothing.

        <dependency>
            <groupId>org.bytedeco.javacpp-presets</groupId>
            <artifactId>opencv</artifactId>
            <version>4.0.1-1.4.4</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.bytedeco/javacv -->
        <dependency>
            <groupId>org.bytedeco</groupId>
            <artifactId>javacv</artifactId>
            <version>1.4.4</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.bytedeco/javacv-platform -->
        <dependency>
            <groupId>org.bytedeco</groupId>
            <artifactId>javacv-platform</artifactId>
            <version>1.4.4</version>
        </dependency>




I configured the FFmpegFrameRecorder like this (results of the respective getter functions of FFmpegFrameRecorder):

[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.format=flv
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.maxdelay=-1
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - video
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.framerate=20.0
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.gopsize=40
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.pixelformat=-1
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.videobitrate=4000000
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.videocodec=27 codename=null
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.videoquality=-1.0
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.videooptions={crf=18, preset=veryfast}
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - audio
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.audiobitrate=320000
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.audiochannels=2
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.sampleformat=-1
[2020-05-09 23:27:14,150]-[netty task-4-2] INFO  io.be1.quattro.stream.mediaprocessor.MediaProcessorRecorder - recorder.samplerate=44100


private void displayRecorderSettings(FFmpegFrameRecorder recorder)
{
    if (recorder == null) return;
    log.info("recorder.options={}", recorder.getOptions());
    log.info("recorder.format={}", recorder.getFormat());
    log.info("recorder.maxdelay={}", recorder.getMaxDelay());
    log.info("video");
    log.info("recorder.framerate={}", recorder.getFrameRate());
    log.info("recorder.gopsize={}", recorder.getGopSize());
    log.info("recorder.pixelformat={}", recorder.getPixelFormat());
    log.info("recorder.videobitrate={}", recorder.getVideoBitrate());
    log.info("recorder.videocodec={} codename={}", recorder.getVideoCodec(), recorder.getVideoCodecName());
    log.info("recorder.videoquality={}", recorder.getVideoQuality());
    log.info("recorder.videooptions={}", recorder.getVideoOptions());
    log.info("audio");
    log.info("recorder.audiobitrate={}", recorder.getAudioBitrate());
    log.info("recorder.audiochannels={}", recorder.getAudioChannels());
    log.info("recorder.sampleformat={}", recorder.getSampleFormat());
    log.info("recorder.samplerate={}", recorder.getSampleRate());
    log.info("recorder.audiocodec={} audiocodename={}", recorder.getAudioCodec(), recorder.getAudioCodecName());
    log.info("recorder.audiooptions={}", recorder.getAudioOptions());
}



Output 

Input #0, mp3, from './mediasamples/test.mp3':
  Metadata:
    genre           : Cinematic
    album           : YouTube Audio Library
    title           : Impact Moderato
    artist          : Kevin MacLeod
  Duration: 00:00:27.17, start: 0.025057, bitrate: 320 kb/s
    Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 320 kb/s
    Metadata:
      encoder         : LAME3.99r
[libx264 @ 0x7f93c0b00000] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7f93c0b00000] profile High, level 3.1
[libx264 @ 0x7f93c0b00000] 264 - core 155 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=12 lookahead_threads=4 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=40 keyint_min=4 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=crf mbtree=1 crf=18.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'rtmp://127.0.0.1/live':
  Metadata:
    encoder         : Lavf58.20.100
    Stream #0:0: Video: h264 (Constrained Baseline) ([7][0][0][0] / 0x0007), yuv420p, 800x600, q=2-31, 4000 kb/s, 20 fps, 1k tbn, 20 tbc
    Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 44100 Hz, stereo, fltp, 320 kb/s
[avfoundation @ 0x7f93bd445a00] Selected pixel format (bgr24) is not supported by the input device.
[avfoundation @ 0x7f93bd445a00] Supported pixel formats:
[avfoundation @ 0x7f93bd445a00]   uyvy422
[avfoundation @ 0x7f93bd445a00]   yuyv422
[avfoundation @ 0x7f93bd445a00]   nv12
[avfoundation @ 0x7f93bd445a00]   0rgb
[avfoundation @ 0x7f93bd445a00]   bgr0
[avfoundation @ 0x7f93bd445a00] Overriding selected pixel format to use uyvy422 instead.
[avfoundation @ 0x7f93bd445a00] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, avfoundation, from '1':
  Duration: N/A, start: 382907.925667, bitrate: N/A
    Stream #0:0: Video: rawvideo (UYVY / 0x59565955), uyvy422, 2880x1800, 1000k tbr, 1000k tbn, 1000k tbc
[mp3float @ 0x7f93c37e7c00] Could not update timestamps for skipped samples.
[libx264 @ 0x7f93c43b3600] frame I:48    Avg QP: 9.25  size: 25542
[libx264 @ 0x7f93c43b3600] frame P:521   Avg QP:19.31  size:   784
[libx264 @ 0x7f93c43b3600] frame B:1329  Avg QP:25.92  size:   130
[libx264 @ 0x7f93c43b3600] consecutive B-frames:  4.8%  3.3%  6.6% 85.3%
[libx264 @ 0x7f93c43b3600] mb I  I16..4: 72.4%  2.6% 24.9%
[libx264 @ 0x7f93c43b3600] mb P  I16..4:  2.2%  0.2%  0.4%  P16..4:  0.9%  0.3%  0.2%  0.0%  0.0%    skip:95.8%
[libx264 @ 0x7f93c43b3600] mb B  I16..4:  0.1%  0.0%  0.0%  B16..8:  0.4%  0.1%  0.0%  direct: 0.1%  skip:99.3%  L0:36.5% L1:55.4% BI: 8.1%
[libx264 @ 0x7f93c43b3600] 8x8 transform intra:3.8% inter:17.1%
[libx264 @ 0x7f93c43b3600] coded y,uvDC,uvAC intra: 32.2% 7.1% 4.6% inter: 0.2% 0.1% 0.0%
[libx264 @ 0x7f93c43b3600] i16 v,h,dc,p: 48% 46%  6%  0%
[libx264 @ 0x7f93c43b3600] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 56% 17%  1%  0%  1%  1%  1%  2%
[libx264 @ 0x7f93c43b3600] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 38% 40%  9%  2%  2%  2%  3%  2%  3%
[libx264 @ 0x7f93c43b3600] i8c dc,h,v,p: 70% 27%  2%  0%
[libx264 @ 0x7f93c43b3600] Weighted P-Frames: Y:0.2% UV:0.2%
[libx264 @ 0x7f93c43b3600] kb/s:152.32
[aac @ 0x7f93c0ab8a00] Qavg: 120.000
[aac @ 0x7f93c0ab8a00] 2 frames left in the queue on closing


The core of the recording function (when the end of the MP3 stream is detected, I re-instantiate and restart the respective FrameGrabber):

// first record the video frame
recorder.record(converter.convert(frame));

// check if the configuration enabled audio
if (this.getConfigInteger("audiochannels") != null && this.getConfigInteger("audiochannels").get() > 0)
{
    Frame audioframe;
    try {
        audioframe = audioFrames.grabFrame();
        if (audioframe == null)
        {
            nextAudioSource();
            audioframe = audioFrames.grabFrame();
        }
        if (audioframe != null)
        {
            // record the audio frame
            recorder.record(audioframe);
        } else log.error("audioframe still null");
    } catch (org.bytedeco.javacv.FrameGrabber.Exception e) {
        e.printStackTrace();
    }
}





vlc-output.ts

Samuel Audet

May 9, 2020, 7:36:15 PM
to javacv, martinf...@gmail.com
It doesn't look like you're synchronizing the streams. How do you ensure that the audio and video timestamps match?

On Sunday, May 10, 2020 at 8:08, Martin Fuchs <martinf...@gmail.com> wrote:
--

---
You received this message because you are subscribed to the Google Groups "javacv" group.
To unsubscribe from this group and stop receiving emails from it, send an email to javacv+un...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/javacv/43fa4040-ca6f-4012-bcd5-edb6cdab96e7%40googlegroups.com.

Martin Fluch

May 10, 2020, 7:51:34 AM
to javacv
Hi Samuel,

I don't know exactly what you mean; I'm using an example you posted years ago:

On Friday, 27 July 2012 21:09:48 UTC-7, Samuel Audet wrote:
On 2012-07-24 19:51, RAJESH K wrote: 
> Hi All , 

> I want to bind an audio with an video file using javacv/Android platform . 
> If any one has idea or any experience in that please advice me to come 
> out . 

Something like this should work: 

     FrameGrabber grabber1 = new FFmpegFrameGrabber("video.avi");
     FrameGrabber grabber2 = new FFmpegFrameGrabber("audio.mp3");
     grabber1.start();
     grabber2.start();
     FrameRecorder recorder = new FFmpegFrameRecorder("output.mp4",
             grabber1.getImageWidth(), grabber1.getImageHeight(),
             grabber2.getAudioChannels());
     recorder.setFrameRate(grabber1.getFrameRate());
     recorder.setSampleFormat(grabber2.getSampleFormat());
     recorder.setSampleRate(grabber2.getSampleRate());
     recorder.start();
     Frame frame1 = null, frame2 = null;
     while ((frame1 = grabber1.grabFrame()) != null ||
            (frame2 = grabber2.grabFrame()) != null) {
         recorder.record(frame1);
         recorder.record(frame2);
     }
     recorder.stop();
     grabber1.stop();
     grabber2.stop();

The goal is to underlay the screencast with an endlessly playing MP3 or web radio. If you can point me to a proper example handling the sync, that would be great.

Thx Martin

Martin Fluch

May 10, 2020, 7:51:35 AM
to javacv

Ok, not sure what you mean; reduced to the facts, I used a sample like the one you posted years ago.

How can I sync two completely different streams? Can you point me to a proper example? The goal is to extend the screencast with a music file (endless repeat) or an MP3 stream (e.g. web radio).

Thx Martin

On Sunday, May 10, 2020 at 01:36:15 UTC+2, Samuel Audet wrote:

Samuel Audet

May 10, 2020, 7:55:02 AM
to jav...@googlegroups.com, Martin Fluch
Right, that's OK, as long as the streams are equal in size and all
frames are passed along, that works. If your case is different, which is
often the case, you'll need to compensate for the discrepancy.

Martin Fluch

May 10, 2020, 10:51:25 AM
to javacv

Ok, can you recommend an example of how to do it the right way, as in my case the streams are different?

Samuel Audet

May 10, 2020, 7:07:22 PM
to javacv, Martin Fluch
There's probably an example somewhere for sure, but nothing super easy to understand. The basic idea is that we need to add empty audio frames if the audio stream is shorter, while if the video stream is shorter, we can usually get away with calling setTimestamp() to skip frames.
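[Editor's note] The padding half of that idea can be sketched as follows. This is an illustration, not code from the thread; the helper name is hypothetical, and it only assumes that javacv reports timestamps in microseconds:

```java
// Sketch: how many silent samples (per channel) to synthesize when the
// audio clock lags behind the video clock. Timestamps are in microseconds.
public class SilencePadding {

    public static long silentSamples(long videoTS, long audioTS, int sampleRate) {
        long lagMicros = videoTS - audioTS;
        if (lagMicros <= 0) {
            return 0; // audio is already caught up or ahead: nothing to pad
        }
        return lagMicros * sampleRate / 1_000_000L;
    }

    public static void main(String[] args) {
        // 1 s of lag at 44100 Hz -> 44100 samples of silence per channel
        System.out.println(silentSamples(1_000_000L, 0L, 44100));
    }
}
```

The computed number of samples could then be written as zero-filled buffers (for instance through the recorder's recordSamples() method) before resuming normal audio grabbing.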

On Sunday, May 10, 2020 at 23:51, Martin Fluch <martinfl...@gmail.com> wrote:

Martin

May 11, 2020, 12:16:03 PM
to javacv
ok, to be sure I'm on the right path:

As the screen grabbing is more or less "endless" and the audio file is short (e.g. 3 min), the following steps have to be implemented:

1) grab video frames from the screen in a while loop, like capturedFrame = grabber.grabImage() != null
2) grab audio frames from the opened MP3, like audioframe = audioFrames.grabFrame(); if the end is reached, reopen the MP3 file
3) record the video frame (set a self-generated timestamp)
4) check the time difference of the grabbed frames:
- if the audio frame is "before" the relative time position of the video frame, record "empty" created audio frames (set a self-generated timestamp)
- if the audio frame is "behind" the relative time position, skip the audio frame?

Question: what is the right parameter to check the time diff between two streams? I checked capturedFrame.timestamp, but these timestamps do not start from 0. I tried to normalize the two streams (storing the initial timestamp from the video and audio stream), but I'm not really happy with the results.
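[Editor's note] One way to do the normalization described above is to remember the first raw timestamp seen on each stream and subtract it from every later one, so both clocks start at 0 and become directly comparable. A minimal sketch (a hypothetical helper, not from the thread), with timestamps in microseconds:

```java
// Sketch: per-stream clock that rebases raw timestamps to start at 0.
public class StreamClock {
    private long first = -1;

    public long normalize(long rawTimestamp) {
        if (first < 0) {
            first = rawTimestamp; // remember the stream's initial timestamp
        }
        return rawTimestamp - first;
    }

    public static void main(String[] args) {
        StreamClock video = new StreamClock();
        StreamClock audio = new StreamClock();
        // raw capture timestamps usually don't start at 0
        video.normalize(382_907_925_667L); // first video frame -> 0
        audio.normalize(25_057L);          // first audio frame -> 0
        // 50 ms later on each stream, the rebased clocks agree
        System.out.println(video.normalize(382_907_975_667L) - audio.normalize(75_057L)); // prints 0
    }
}
```

With one StreamClock per grabber, the delta between the two normalized values is the A/V drift to correct.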

thx

Martin

May 11, 2020, 4:33:57 PM
to javacv

Hi, attached is my source:

- screen grabbing (on macOS with the avfoundation format)
- MP3 file grabbing
- central while loop mixing both together
- when I enable audio, the problems start

As described in my post a few hours ago, I understand that there is a synchronization issue between video & audio, but currently I'm struggling to calculate the time diff.

Martin




package io.be1.quattro;

import java.util.ArrayList;
import java.util.List;

import org.bytedeco.javacpp.avcodec;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.FrameRecorder.Exception;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MfGrabberTest
{
    private static final Logger log = LoggerFactory.getLogger(MfGrabberTest.class);

    final private static int FRAME_RATE = 20;
    final private static int GOP_LENGTH_IN_FRAMES = 40;

    private static long startTime = 0;
    private static long videoTS = 0;
    private static boolean doaudio = true;

    private static List<String> audioSources = new ArrayList<String>();
    private static int currentAudioSource = -1;
    private static FFmpegFrameGrabber audioFrames;
    private static Frame audioframe = null;

    private static void nextAudioSource()
    {
        if (audioSources == null || audioSources.size() == 0)
        {
            log.info("audiosource==null, setting channels 0");
            return;
        }
        currentAudioSource++;
        if (currentAudioSource > (audioSources.size() - 1)) currentAudioSource = 0;
        log.info("nextAudioSource id={} at videots={}", currentAudioSource, videoTS);

        String source = audioSources.get(currentAudioSource);
        if (source == null && currentAudioSource > 0)
        {
            currentAudioSource = 0;
            source = audioSources.get(currentAudioSource);
        }
        log.info("nextAudioSource id={} source={}", currentAudioSource, source);

        audioFrames = new FFmpegFrameGrabber(source);
        try {
            audioFrames.start();
        } catch (org.bytedeco.javacv.FrameGrabber.Exception e) {
            e.printStackTrace();
        }
        log.info("nextAudioSource done");
    }

    public static void main(String[] args) throws Exception, org.bytedeco.javacv.FrameGrabber.Exception
    {
        final int captureWidth = 1280;
        final int captureHeight = 720;

        audioSources.add("./mediasamples/americano.mp3");

        // The available FrameGrabber classes include OpenCVFrameGrabber (opencv_videoio),
        // DC1394FrameGrabber, FlyCapture2FrameGrabber, OpenKinectFrameGrabber,
        // PS3EyeFrameGrabber, VideoInputFrameGrabber, and FFmpegFrameGrabber.
        final FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("1");
        grabber.setImageWidth(captureWidth);
        grabber.setImageHeight(captureHeight);
        grabber.setFormat("avfoundation");
        grabber.setFrameRate(FRAME_RATE);
        grabber.start();

        // org.bytedeco.javacv.FFmpegFrameRecorder.FFmpegFrameRecorder(String
        // filename, int imageWidth, int imageHeight, int audioChannels)
        // For each param, we're passing in...
        // filename = either a path to a local file we wish to create, or an
        // RTMP url to an FMS / Wowza server
        // imageWidth = width we specified for the grabber
        // imageHeight = height we specified for the grabber
        // audioChannels = 2, because we like stereo
        final FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
                "rtmp://127.0.0.1/live",
                captureWidth, captureHeight, doaudio ? 2 : 0);
        recorder.setInterleaved(true);

        // decrease "startup" latency in FFMPEG (see:
        // https://trac.ffmpeg.org/wiki/StreamingGuide)
        recorder.setVideoOption("tune", "fastdecode");
        // tradeoff between quality and encode speed
        // possible values are ultrafast, superfast, veryfast, faster, fast,
        // medium, slow, slower, veryslow
        // ultrafast offers us the least amount of compression (lower encoder
        // CPU) at the cost of a larger stream size
        // at the other end, veryslow provides the best compression (high
        // encoder CPU) while lowering the stream size
        // (see: https://trac.ffmpeg.org/wiki/Encode/H.264)
        recorder.setVideoOption("preset", "ultrafast");
        // Constant Rate Factor (see: https://trac.ffmpeg.org/wiki/Encode/H.264)
        recorder.setVideoOption("crf", "18");
        // 4000 kb/s, a reasonable "sane" area for 720p
        recorder.setVideoBitrate(4000000);
        recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
        recorder.setFormat("flv");
        // FPS (frames per second)
        recorder.setFrameRate(FRAME_RATE);
        // Key frame interval, in our case every 2 seconds -> 20 (fps) * 2 = 40
        // (gop length)
        recorder.setGopSize(GOP_LENGTH_IN_FRAMES);
        recorder.setVideoOption("threads", "1");

        // We don't want variable bitrate audio
        if (doaudio)
        {
            nextAudioSource();
            recorder.setAudioOption("crf", "0");
            // Highest quality
            recorder.setAudioQuality(0);
            // 192 Kbps
            recorder.setAudioBitrate(192000);
            recorder.setSampleRate(44100);
            recorder.setAudioChannels(2);
            recorder.setAudioCodec(avcodec.AV_CODEC_ID_AAC);
        }

        recorder.start();

        Frame capturedFrame = null;

        // While we are capturing...
        while ((capturedFrame = grabber.grabFrame()) != null)
        {
            // Let's define our start time...
            // This needs to be initialized as close to when we'll use it as
            // possible, as the delta from assignment to computed time could be too high
            if (startTime == 0)
                startTime = System.currentTimeMillis();

            // Create timestamp for this frame
            videoTS = 1000 * (System.currentTimeMillis() - startTime);
            long correction = 0;

            // Check for AV drift
            if (videoTS > recorder.getTimestamp())
            {
                correction = videoTS - recorder.getTimestamp();
                log.info("Lip-flap correction: "
                        + videoTS + " : "
                        + recorder.getTimestamp() + " -> "
                        + correction);
                // We tell the recorder to write this frame at this timestamp
                recorder.setTimestamp(videoTS);
            }

            recorder.record(capturedFrame);

            if (doaudio)
            {
                audioframe = audioFrames.grabFrame();
                if (audioframe == null)
                {
                    nextAudioSource();
                    audioframe = audioFrames.grabSamples();
                }
                if (audioframe != null)
                {
                    recorder.record(audioframe);
                } else log.error("audioframe=null");
            }
        }

        recorder.stop();
        grabber.stop();
    }
}

Samuel Audet

May 11, 2020, 8:14:22 PM
to jav...@googlegroups.com, Martin
We might not get any interesting timestamps when capturing from a
real-time source like a camera or the screen, but in that case we can
use the system clock with something like System.nanoTime() / 1000, which
you're already doing, so no need to change that.
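[Editor's note] That system-clock approach can be isolated into a small helper; the following is a sketch (class name hypothetical, not from the thread), returning microseconds elapsed since the first frame:

```java
// Sketch: wall-clock timestamp source for real-time capture, in microseconds.
public class WallClockTimestamps {
    private long startNanos = -1;

    public long nextTimestampMicros() {
        long now = System.nanoTime();
        if (startNanos < 0) {
            startNanos = now; // first call defines time zero
        }
        return (now - startNanos) / 1000L;
    }

    public static void main(String[] args) throws InterruptedException {
        WallClockTimestamps clock = new WallClockTimestamps();
        long t0 = clock.nextTimestampMicros(); // 0 on the first call
        Thread.sleep(50);                      // simulate one frame interval
        long t1 = clock.nextTimestampMicros(); // roughly 50_000 microseconds later
        System.out.println(t1 > t0);           // prints true
    }
}
```

The result would go to the recorder's setTimestamp(...) before recording each video frame, as in the code already posted in this thread.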

I still don't see where you compare timestamps though, but anyway, yes,
we need to do something similar to what you describe, so try it out!

Martin

May 12, 2020, 6:16:18 PM
to javacv
Finally I did it; here is the easy solution:

package io.be1.quattro;

import java.util.ArrayList;
import java.util.List;

import org.bytedeco.javacpp.avcodec;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.FrameRecorder.Exception;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MfGrabberTest
{
    private static final Logger log = LoggerFactory.getLogger(MfGrabberTest.class);

    final private static int FRAME_RATE = 20;
    final private static int GOP_LENGTH_IN_FRAMES = 40;

    private static long startTime = 0;
    private static long videoTS = 0;
    private static boolean doaudio = true;

    private static String destination = "rtmp://127.0.0.1/live";

    private static List<String> audioSources = new ArrayList<String>();
    private static int currentAudioSource = -1;
    private static FFmpegFrameGrabber audioFrames;
    private static Frame audioframe = null;

    // unchanged from the previous listing
    private static void nextAudioSource()
    {
        if (audioSources == null || audioSources.size() == 0)
        {
            log.info("audiosource==null, setting channels 0");
            return;
        }
        currentAudioSource++;
        if (currentAudioSource > (audioSources.size() - 1)) currentAudioSource = 0;

        String source = audioSources.get(currentAudioSource);
        if (source == null && currentAudioSource > 0)
        {
            currentAudioSource = 0;
            source = audioSources.get(currentAudioSource);
        }
        log.info("nextAudioSource id={} source={}", currentAudioSource, source);

        audioFrames = new FFmpegFrameGrabber(source);
        try {
            audioFrames.start();
        } catch (org.bytedeco.javacv.FrameGrabber.Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception, org.bytedeco.javacv.FrameGrabber.Exception
    {
        final int captureWidth = 1280;
        final int captureHeight = 720;

        audioSources.add("./mediasamples/americano.mp3");

        final FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("1");
        grabber.setImageWidth(captureWidth);
        grabber.setImageHeight(captureHeight);
        grabber.setFormat("avfoundation");
        grabber.setFrameRate(FRAME_RATE);
        grabber.start();

        final FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
                destination,
                captureWidth, captureHeight, doaudio ? 2 : 0);
        //recorder.setInterleaved(true);

        // decrease "startup" latency in FFMPEG (see:
        // https://trac.ffmpeg.org/wiki/StreamingGuide)
        recorder.setVideoOption("tune", "fastdecode");
        // tradeoff between quality and encode speed
        // possible values are ultrafast, superfast, veryfast, faster, fast,
        // medium, slow, slower, veryslow
        // ultrafast offers us the least amount of compression (lower encoder
        // CPU) at the cost of a larger stream size
        // at the other end, veryslow provides the best compression (high
        // encoder CPU) while lowering the stream size
        recorder.setVideoOption("preset", "ultrafast");
        // Constant Rate Factor (see: https://trac.ffmpeg.org/wiki/Encode/H.264)
        recorder.setVideoOption("crf", "18");
        // 4000 kb/s, a reasonable "sane" area for 720p
        recorder.setVideoBitrate(4000000);
        recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
        recorder.setFormat("flv");
        // FPS (frames per second)
        recorder.setFrameRate(FRAME_RATE);
        // Key frame interval, in our case every 2 seconds -> 20 (fps) * 2 = 40
        // (gop length)
        recorder.setGopSize(GOP_LENGTH_IN_FRAMES);
        //recorder.setVideoOption("threads", "1");

        // We don't want variable bitrate audio
        if (doaudio)
        {
            nextAudioSource();
            recorder.setAudioOption("crf", "0");
            // Highest quality
            recorder.setAudioQuality(0);
            // 192 Kbps
            recorder.setAudioBitrate(192000);
            recorder.setSampleRate(44100);
            recorder.setAudioChannels(2);
            recorder.setAudioCodec(avcodec.AV_CODEC_ID_AAC);
        }

        recorder.start();

        Frame capturedFrame = null;
        boolean go = true;

        // While we are capturing...
        while (go)
        {
            capturedFrame = grabber.grabFrame();

            if (startTime == 0)
                startTime = System.currentTimeMillis();

            // Create timestamp for this frame
            videoTS = 1000 * (System.currentTimeMillis() - startTime);

            if (doaudio)
            {
                long delta = 1;
                videoTS = 1000 * (System.currentTimeMillis() - startTime);
                // record audio frames until the audio clock has caught up with the video clock
                while (delta >= 0)
                {
                    long audioTS = audioFrames.getTimestamp();
                    delta = videoTS - audioTS;
                    log.info("audioframe audiots: {}ms videots: {}ms delta: {}ms", audioTS / 1000, videoTS / 1000, delta / 1000);

                    //audioFrames.setTimestamp(audioFrames.getTimestamp() + delta);
                    audioframe = audioFrames.grabFrame();
                    if (audioframe == null)
                    {
                        nextAudioSource();
                        audioframe = audioFrames.grabSamples();
                    }
                    if (audioframe != null)
                    {
                        recorder.setTimestamp(videoTS);
                        recorder.record(audioframe);
                    } else log.error("audioframe=null");
                }
            }

            recorder.setTimestamp(videoTS);
            log.info("videoframe videots: {}ms ", videoTS / 1000);
            recorder.record(capturedFrame);
        }

        recorder.stop();
        grabber.stop();
    }
}

Samuel Audet

May 12, 2020, 8:20:59 PM
to jav...@googlegroups.com, Martin
Looks good! Thanks for the update. If you have the time, please consider
contributing a new sample based on that by sending a pull request here:
https://github.com/bytedeco/javacv/tree/master/samples

Samuel