Demo h264 encoding app


jons...@gmail.com

Oct 17, 2013, 10:00:33 PM
to linux...@googlegroups.com
I pushed a demo h264 encoding app to:
https://github.com/jonsmirl/cam

It seems to be working, but the output file is just a blank image.
I'll keep working on it, but if anyone can see what is wrong with it
please let me know.

Not much to it: about 1,000 lines of code derived from the
enc_dec_demo program. I basically deleted all of the display code,
since it is harder to build, and kept the encoding code.

--
Jon Smirl
jons...@gmail.com

jons...@gmail.com

Oct 17, 2013, 10:18:41 PM
to linux...@googlegroups.com
Allwinner h264 demo app for Android; it compiles with the Android NDK.

The demo takes 500 frames from the camera and compresses them with h264; the output
is in /mnt/sdcard/h264.dat

CC = arm-linux-androideabi-gcc --sysroot=$(SYSROOT)
jonsmirl@terra:~/cam$ set | grep SYSROOT
SYSROOT=/home/apps/adt/android-ndk-r9/platforms/android-18/arch-arm

PATH
/home/apps/adt/android-ndk-r9/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin:/home/apps/adt/sdk/tools:/home/apps/adt/sdk/platform-tools
--
Jon Smirl
jons...@gmail.com

jons...@gmail.com

Oct 17, 2013, 11:22:28 PM
to linux...@googlegroups.com
Actually it appears that the file being written is working: mplayer can
play it but vlc can't. It is a raw h264 byte stream and does not
contain FPS data, so it plays really fast. The next step is to capture
audio and wrap everything in MPEG TS. That will let me add FPS data.

--
Jon Smirl
jons...@gmail.com

sophi...@gmail.com

Oct 18, 2013, 2:08:46 AM
to linux...@googlegroups.com
Here is an RTSP server using hardware encoding:

https://github.com/ashwing920/rtspserver

thomas schorpp

Oct 18, 2013, 4:29:21 AM
to linux...@googlegroups.com
On 18.10.2013 05:22, jons...@gmail.com wrote:
> Actually it appear that the file being written is working. mplayer can
> play it but vlc can't. It is a raw h264 byte stream and does not
> contain FPS data so it plays really fast. Next step is to capture
> audio and wrap everything in MPEG TS. That will let me add FPS data.
>

You could use the Transport Stream Controller (TSC) for that; there is
a basic driver port in these threads:
[PATCH v2 1/1] [sunxi-boards/a20] Add support for Allwinner (DVB/ATSC) Transport Stream Controller(s) (TSC)
[PATCH v4 2/2] [stage/sunxi-3.4] Add support for Allwinner (DVB/ATSC) Transport Stream Controller(s) (TSC)

or wherever the maintainers committed it,

and a manual at
http://dl.linux-sunxi.org/A10/A10%20Transport%20Stream%20Controller%20V1.00%2020120917.pdf

I'm working on a DVB receiver extension; it takes more time since the Philips CU1216 has a 2mm pin pitch
and my Olimex A20 has 2.56mm (head -> wall). Lucky Cubieboard users with their 2mm pin headers.

Maybe we should state some legitimate use cases for the encoder to get more support from Allwinnertech;
one of my later milestones is using the h.264 encoder to store h.262 SDTV recordings and save HDD storage space :-)

y
tom

Enrico

Oct 18, 2013, 5:26:58 AM
to linux...@googlegroups.com
On Friday, October 18, 2013 at 5:22:28 AM UTC+2, Jon Smirl wrote:
Actually it appear that the file being written is working. mplayer can
play it but vlc can't. It is a raw h264 byte stream and does not
contain FPS data so it plays really fast. Next step is to capture
audio and wrap everything in MPEG TS. That will let me add FPS data.


mkvmerge -o h264.mkv h264.dat

Enrico

thomas schorpp

Oct 18, 2013, 7:43:30 AM
to linux...@googlegroups.com
On 18.10.2013 11:26, Enrico wrote:
Maybe pretty slow; remember, we're on ARM here, and we have neither x86 SSE/MMX
nor Advanced Cortex NEON SIMD extension support
http://infocenter.arm.com/help/topic/com.arm.doc.ddi0462f/ch02s01s02.html
(enabled?):

$ readelf -A /usr/local/src/mkvtoolnix_6.4.1-1_armhf/usr/bin/mkvmerge
Attribute Section: aeabi
File Attributes
Tag_CPU_name: "7-A"
Tag_CPU_arch: v7
Tag_CPU_arch_profile: Application
Tag_ARM_ISA_use: Yes
Tag_THUMB_ISA_use: Thumb-2
Tag_VFP_arch: VFPv3-D16
Tag_ABI_PCS_wchar_t: 4
Tag_ABI_FP_rounding: Needed
Tag_ABI_FP_denormal: Needed
Tag_ABI_FP_exceptions: Needed
Tag_ABI_FP_number_model: IEEE 754
Tag_ABI_align8_needed: Yes
Tag_ABI_align8_preserved: Yes, except leaf SP
Tag_ABI_enum_size: int
Tag_ABI_HardFP_use: SP and DP
Tag_ABI_VFP_args: VFP registers
Tag_CPU_unaligned_access: v6

If you want to play it in MPlayer 1, you'd better use FFmpeg and the MP4 container to avoid issues.
It should be faster, too, since it's in C, not C++ like mkvtoolnix (compilers, optimization progress...).

y
tom

Enrico

Oct 18, 2013, 9:15:23 AM
to linux...@googlegroups.com, thomas....@gmail.com
On Friday, October 18, 2013 at 1:43:30 PM UTC+2, Thomas Schorpp wrote:
On 18.10.2013 11:26, Enrico wrote:

Too much over-engineering; just use it on your PC :D

Enrico

Manuel Braga

Oct 21, 2013, 5:38:31 PM
to linux...@googlegroups.com
Hi.

This is not only for the thread parent, but for everyone who might have
an interest in contributing to the reverse engineering effort.

As you (plural) are getting *dirty* with the h264 encoder binary blob,
wouldn't you like to help by making a minimal encoder?

This minimal encoder would be used to generate traces, so that the
process of reverse engineering the h264 hardware encoder can eventually
start.
It is simple: put the blob library in program form and make the
encoding work. We will test it and say what is missing or needs to
be added, so whoever is writing it doesn't even need to run it under the
tracer.

Any takers?


--
mul

Enrico

Oct 23, 2013, 6:55:29 PM
to linux...@googlegroups.com

Do you think the encoder Jon shared is not minimal enough?
I have another version for Linux that is basically the same, just without threads. I don't know if it's possible to make the encoder even simpler; any suggestions?

Enrico

Rosimildo DaSilva

Oct 24, 2013, 7:49:46 AM
to linux...@googlegroups.com
Just post what you have in a Git repo and let people take it from there.
R

Manuel Braga

Oct 24, 2013, 12:46:11 PM
to linux...@googlegroups.com
Hi.

On Wed, 23 Oct 2013 15:55:29 -0700 (PDT) Enrico <ebu...@gmail.com>
wrote:
> Do you think the encoder Jon shared is not minimal enough?
> I have another version for linux that is basically the same, just
> without threads. I don't know if it's possible to make the encoder
> even more simple, any suggestions?
>
> Enrico
>

Jon's encoder is for Android, so I stopped there.
In this thread sophiasmth posted an rtspserver, and by exploring GitHub
there is an even better one, as its name says.

https://github.com/ashwing920/SimpleRecorder

And it's working fine on the A13 (the only hardware tested); I already made
h264 encoder traces.

Why isn't this enough, you ask?
Because the tracer is slow, very slow.
The tracer is a tool for valgrind: valgrind emulates and JITs every
instruction, and the tracer injects, for every target memory access, a
call to a logging function.
It is slow but it works; you can see the pain of working this way.
Any extra baggage is only dead weight slowing it down even more.
So the ideal is a minimal program that easily exercises all options of
the encoder library and is ready to work under the tracer.
This will save time for the people making the traces to understand how
the hardware works.

For example, that SimpleRecorder has a preview using SDL, OSD text, and
capture from a v4l device. None of these things are required to make the
encoding work.
Furthermore, the cedar hardware uses, if I am not mistaken, an uncommon
format for the frame data; SimpleRecorder has a function to convert
to it, which works in place in the reserved cedar memory and
slows down and pollutes the resulting traces with useless things.

I know, making these changes is trivial for whoever knows C and has the
time to understand how to make the encoder library work.
To have a FLOSS cedar library, this step has to be done one way or
another; someone will do it, be it me or someone else.

But this is also a chance for the people who think reverse
engineering is too hard to even try in the first place.
Simply helping with this will be a useful contribution to pushing
the RE effort forward.


--
mul

jons...@gmail.com

Oct 24, 2013, 2:29:23 PM
to linux...@googlegroups.com
On Thu, Oct 24, 2013 at 12:46 PM, Manuel Braga <mul....@gmail.com> wrote:
> Hi.
>
> On Wed, 23 Oct 2013 15:55:29 -0700 (PDT) Enrico <ebu...@gmail.com>
> wrote:
>> Do you think the encoder Jon shared is not minimal enough?
>> I have another version for linux that is basically the same, just
>> without threads. I don't know if it's possible to make the encoder
>> even more simple, any suggestions?
>>
>> Enrico
>>
>
> The encoder of Jon is for Android, so i stopped there.

There is no armhf release of the encoder library.

> In this thread sophiasmth posted a rtspserver,and by exploring github
> there is one even better, as it names says.
>
> https://github.com/ashwing920/SimpleRecorder

I'll have to try that one. The rtspserver doesn't compile: they used a
binary of the v4l library that should have been used in source form.
Sorting that out should let it work.



--
Jon Smirl
jons...@gmail.com

Manuel Braga

Oct 24, 2013, 3:08:11 PM
to linux...@googlegroups.com
On Thu, 24 Oct 2013 14:29:23 -0400 "jons...@gmail.com"
<jons...@gmail.com> wrote:
> >
> > The encoder of Jon is for Android, so i stopped there.
>
> There is no armhf release of the encoder library.

You are right, but that .a encoder library is working OK in a
chrooted armel rootfs.

> > In this thread sophiasmth posted a rtspserver,and by exploring
> > github there is one even better, as it names says.
> >
> > https://github.com/ashwing920/SimpleRecorder
>
> I've have to try that one. The rtspserver doesn't compile. They used a
> binary of the v4l library that should have been used in source form.
> Sorting that out should let it work.

For me rtspserver compiled as is and runs, but the mplayer that I tried
can't successfully play back the rtsp stream. But the recorded mkv
that it puts in a directory is correct and plays, so encoding works.
Next came a failure to get a trace.


> >
> > And it's working fine in A13(only hardware tested), i already made
> > h264 encoder traces.
> >


--
mul

jons...@gmail.com

Oct 24, 2013, 4:08:22 PM
to linux...@googlegroups.com
I've been doing what I can to try to talk Allwinner into open
sourcing the h264 encoding library. With H264, the patent royalties
are not paid by the OEM making the chips; instead they are paid at the
stage in the production chain where the branding is applied to the
item (like when Onda makes a tablet). Each brand gets 100,000 free
licenses; after that, licenses are $0.20 each. I got MPEG LA to write
that up in an email and sent it to Allwinner.

So there shouldn't be any real patent concerns about open sourcing the
encoder library. Of course no one can guard against trolls, but trolls
are unlikely since they would have already attacked the big fish -
Apple - and we'd know about them.

---------------------------------------------------------

Dear Mr. Smirl,

Thank you for your message and for your interest in MPEG LA. We
appreciate hearing from you and I will be happy to assist you.

As you appear to be aware, MPEG LA offers our AVC Patent Portfolio
License which provides coverage under patents that are essential for
use of the AVC/H.264 Standard (MPEG-4 Part 10). Under the License,
coverage is generally provided for end products that make use of
AVC/H.264 technology. Accordingly, the party offering such end
products for Sale to End Users (for example, the brand owner)
concludes the AVC License and is responsible for paying the applicable
royalties. An upstream OEM supplier or component supplier of non-end
products does not conclude our Licenses in order to provide coverage
for its customer's branded end products.

For your additional information, Google is a Licensee to MPEG LA's AVC
License for the AVC products they offer. However, coverage under
their AVC License cannot be extended to a third-party's branded end
product.

Therefore, if a company incorporates Android software (which includes
AVC/H.264 functionality) into the company's own branded security
camera product, then the company offering the finished product for
Sale (e.g., camera brand owner) would need to conclude the AVC License
and would be responsible for paying the applicable royalties.

I hope this information is helpful. If you have additional questions
or would like a copy of the AVC License, please feel free to contact
me directly.

Best regards,
Ben


Benjamin J. Myers
Licensing Associate
MPEG LA
5425 Wisconsin Avenue, Suite 801
Chevy Chase, MD 20815
USA
Phone: +1 301 986 6660, Ext. 219
Fax: +1 301 986 8575


--
Jon Smirl
jons...@gmail.com

ashwi...@gmail.com

Oct 25, 2013, 2:18:30 AM
to linux...@googlegroups.com
http://github.com/ashwing920/rtspserver
The demo is how I do it: you can use VLC on the iOS and Windows platforms, and Android can use goodplayer; the Linux platform is untested.

jons...@gmail.com

Oct 25, 2013, 9:33:55 AM
to linux...@googlegroups.com
On Fri, Oct 25, 2013 at 2:18 AM, <ashwi...@gmail.com> wrote:
> http://github.com/ashwing920/rtspserver
> The DEMO is like I do, you can use the VLC IOS WIN platform and Android can use the goodplayer inux platform without a test

This is the error....

arm-linux-gnueabi-g++ -static -I./ -I./h264 -I./h264/linux_lib
-D__OS_LINUX -o rtspserver ringfifo.o rtputils.o rtspservice.o
rtsputils.o h264/camera.o h264/encoder.o h264/log.o
h264/matroska_ebml.o h264/output.o h264/preview.o h264/textoverlay.o
-pthread -L. ./h264/libv4lconvert.a -lm -lrt
./h264/linux_lib/libcedarv_osal.a ./h264/linux_lib/libcedarxalloc.a
./h264/linux_lib/libh264enc.a ./h264/linux_lib/libcedarv.a
./h264/libv4lconvert.a(libv4lconvert_la-libv4lcontrol.o): In function
`v4lcontrol_create':
/home/zhang/v4l-utils-0.9.5/lib/libv4lconvert/control/libv4lcontrol.c:690:
warning: Using 'getpwuid_r' in statically linked applications requires
at runtime the shared libraries from the glibc version used for
linking

I think that means libv4lconvert.a was linked against a dynamic
library and then it was linked into the statically compiled rtspserver
app.

Manuel Braga

Oct 25, 2013, 10:56:23 AM
to linux...@googlegroups.com
Hi,

On Fri, 25 Oct 2013 09:33:55 -0400 "jons...@gmail.com"
<jons...@gmail.com> wrote:
> On Fri, Oct 25, 2013 at 2:18 AM, <ashwi...@gmail.com> wrote:
> > http://github.com/ashwing920/rtspserver
> > The DEMO is like I do, you can use the VLC IOS WIN platform and
> > Android can use the goodplayer inux platform without a test
>
> This is the error....
>
> arm-linux-gnueabi-g++ -static -I./ -I./h264 -I./h264/linux_lib
> -D__OS_LINUX -o rtspserver ringfifo.o rtputils.o rtspservice.o
> rtsputils.o h264/camera.o h264/encoder.o h264/log.o
> h264/matroska_ebml.o h264/output.o h264/preview.o h264/textoverlay.o
> -pthread -L. ./h264/libv4lconvert.a -lm -lrt
> ./h264/linux_lib/libcedarv_osal.a ./h264/linux_lib/libcedarxalloc.a
> ./h264/linux_lib/libh264enc.a ./h264/linux_lib/libcedarv.a
> ./h264/libv4lconvert.a(libv4lconvert_la-libv4lcontrol.o): In function
> `v4lcontrol_create':
> /home/zhang/v4l-utils-0.9.5/lib/libv4lconvert/control/libv4lcontrol.c:690:
> warning: Using 'getpwuid_r' in statically linked applications requires
> at runtime the shared libraries from the glibc version used for
> linking
>
> I think that means libv4lconvert.a was linked against a dynamic
> library and then it was linked into the statically compiled rtspserver
> app.
>

My fault, I didn't remember what I had done.

I also got that error, and got rid of it by:
* editing the makefile,
* removing the cross compiler (arm-none-linux-gnueabi-),
* removing the -static flag,
* compiling natively in an armel rootfs

--
mul

Manuel Braga

Oct 25, 2013, 11:01:25 AM
to Manuel Braga, linux...@googlegroups.com
On Fri, 25 Oct 2013 15:56:23 +0100 Manuel Braga <mul....@gmail.com>
wrote:
Forgot to add that everything was only tested with vivi.ko
(the v4l virtual video driver).

--
mul

Rosimildo DaSilva

Oct 25, 2013, 1:37:41 PM
to linux...@googlegroups.com
I think this team is making some progress!

It is probably too much "setup" to get the input from a camera.

Maybe for the test it would be better to get the input from a "pre-recorded file" that would be played over and over again.

Also, we should not spend time "re-inventing" ffmpeg.

R

Patrick Wood

Oct 25, 2013, 2:58:10 PM
to linux...@googlegroups.com
I was thinking the same thing.  See https://github.com/patrickhwood/h264encoder.  I pulled out all the camera and preview code and read image data instead from frame.N files in the main.c loop.  The frame.[0-5].xz files in the repo are planar YUV420 frames taken from a cedrus/mpeg-test run.

Unfortunately, when I try to run simplerecorder in an armel chroot, I get 

E/osal_linux: (329) flush cache fail, range error!

followed by a segfault.  Probably some silly thing on my part with virt/phys buffers.

I still need to instrument the cedar_dev driver's ioctl to see what it's trying to actually do, but I'm hoping this can be a start for others as well.

Pat

Patrick Wood

Oct 25, 2013, 5:45:34 PM
to linux...@googlegroups.com
Well, I found and fixed the cause of the segfault -- freed a buffer too soon.

Now I am getting video output, but it's garbled on playback.  Still getting the flush cache fail message.

Pat

Rosimildo DaSilva

Oct 25, 2013, 5:49:28 PM
to linux...@googlegroups.com
I don't see where you read the content of the "frame" from the file.

I see you reading the dimensions of the frame to alloc the buffer, but not the content itself... but I could be wrong.

R

Manuel Braga

Oct 25, 2013, 5:50:30 PM
to linux...@googlegroups.com
Hi

On Fri, 25 Oct 2013 11:58:10 -0700 (PDT) Patrick Wood
<patric...@gmail.com> wrote:
> I was thinking the same thing. See
> https://github.com/patrickhwood/h264encoder. I pulled out all the
> camera and preview code and read image data instead from frame.N
> files in the main.c loop. The frame.[0-5].xz files in the repo are
> planar YUV420 frames taken from a cedrus/mpeg-test run.

But then you can't change the frame size in an easy way.
Why not do the same as http://v4l.videotechnology.com/vivi/vivi.c:
one function that auto-generates frames in the right format that
the hardware requires?
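Such a generator could look roughly like the sketch below. This is only an illustration under the assumption that the hardware wants NV12 (a full-size Y plane followed by interleaved half-resolution U/V), which the thread has not yet confirmed; the function names are made up for the example.

```c
#include <stddef.h>
#include <stdint.h>

/* Total size of an NV12 frame: w*h luma bytes plus w*h/2 chroma bytes. */
static size_t nv12_size(int w, int h) { return (size_t)w * h * 3 / 2; }

/* Fill an NV12 buffer with a moving gradient, vivi-style.  Frame size
 * is a parameter, so RE traces can vary it freely. */
static void fill_test_frame_nv12(uint8_t *buf, int w, int h, int frame_no)
{
    uint8_t *y = buf;
    uint8_t *uv = buf + (size_t)w * h;   /* interleaved U/V follows Y */

    for (int j = 0; j < h; j++)
        for (int i = 0; i < w; i++)
            y[j * w + i] = (uint8_t)(i + j + frame_no * 3); /* moving luma gradient */

    for (int j = 0; j < h / 2; j++)
        for (int i = 0; i < w / 2; i++) {
            uv[j * w + 2 * i]     = (uint8_t)(128 + i);     /* U */
            uv[j * w + 2 * i + 1] = (uint8_t)(128 + j);     /* V */
        }
}
```

Varying w and h across runs would then show which registers carry the frame dimensions.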

> Unfortunately, when I try to run simplerecorder in an armel chroot, I
> get
>
> E/osal_linux: (329) flush cache fail, range error!

I got this too.


> followed by a segfault. Probably some silly thing on my part with
> virt/phys buffers.
No more messages?
Did you mount devtmpfs, as poorly explained here
http://linux-sunxi.org/CedarX/RE_Toolkit


>
> I still need to instrument the cedar_dev driver's ioctl to see what
> it's trying to actually do, but I'm hoping this can be a start for
> others as well.

The tracer also gets the ioctls, and there is also a trace viewer.
But be careful: this is not trivial to install, it doesn't have error
checking, and it is slow, very slow.
https://gitorious.org/recedro/recedro/


>
> Pat
>
> On Friday, October 25, 2013 1:37:41 PM UTC-4, Rosimildo DaSilva wrote:
> >
> > I think this team is making some progress!
> >
> > It is probably too much "setup" to get the input from a camera.
> >
> > Maybe for the test, would be better to get the input from a
> > "pre-recorded file" so it would be played over and over again.
> >
> > Also, we should not expend time to "re-invent" ffmpeg.
> >
> > R

--
mul

Patrick Wood

Oct 25, 2013, 5:52:18 PM
to linux...@googlegroups.com
One other thing to note is that the statically linked armel build runs on my armhf rootfs and produces the same (garbled) mkv output.  No need for an armel chroot if you have the armel cross compiler available (I'm using gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)).

Pat

On Friday, October 25, 2013 2:58:10 PM UTC-4, Patrick Wood wrote:

Rosimildo DaSilva

Oct 25, 2013, 5:52:26 PM
to linux...@googlegroups.com
On "frame.0" ... I forgot to say....

Patrick Wood

Oct 25, 2013, 6:26:29 PM
to linux...@googlegroups.com


On Friday, October 25, 2013 5:52:26 PM UTC-4, Rosimildo DaSilva wrote:
On "frame.0" ... I forgot to say....
Yeah, I see that now; however adding 

         fread(pic.buffer, 1, pic.width * pic.height / 4 * 6, fp);

for the first frame doesn't help.
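The size expression is right for planar YUV420: w*h luma bytes plus two quarter-size chroma planes, i.e. w*h*3/2 = w*h/4*6. One thing worth guarding against is a short read from a truncated frame file; a hedged sketch of a reader that allocates and validates a whole frame (the function name and error handling are illustrative, not from the repo; `pic.width`/`pic.height` from the thread become plain parameters here):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Read one planar YUV420 frame from fp into a freshly allocated buffer.
 * Returns NULL if allocation fails or the file holds less than a full
 * w*h*3/2 bytes (a truncated frame). */
static uint8_t *read_yuv420_frame(FILE *fp, int w, int h)
{
    size_t need = (size_t)w * h * 3 / 2;   /* == w*h/4*6 */
    uint8_t *buf = malloc(need);
    if (!buf)
        return NULL;
    if (fread(buf, 1, need, fp) != need) { /* short read: reject the frame */
        free(buf);
        return NULL;
    }
    return buf;
}
```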

Patrick Wood

Oct 25, 2013, 6:31:04 PM
to linux...@googlegroups.com


On Friday, October 25, 2013 5:50:30 PM UTC-4, Manuel Braga wrote:
Hi

On Fri, 25 Oct 2013 11:58:10 -0700 (PDT) Patrick Wood
<patric...@gmail.com> wrote:
> I was thinking the same thing.  See
> https://github.com/patrickhwood/h264encoder.  I pulled out all the
> camera and preview code and read image data instead from frame.N
> files in the main.c loop.  The frame.[0-5].xz files in the repo are
> planar YUV420 frames taken from a cedrus/mpeg-test run.

But then you can't change the frame size in a easy way.
Why not same as http://v4l.videotechnology.com/vivi/vivi.c
one function that auto generates frames in the right format that
the hardware requires.

Why would I want to change the frame size?

Anyway, I didn't want to bring in v4l when I can get the frame data in the format I want directly from the cedar decoder engine.  (Well, here's hoping it's in the format I want -- the documentation and cedrus code all agree that it is, but since the output is garbled, I'm concerned that it might not be.)

> Unfortunately, when I try to run simplerecorder in an armel chroot, I
> get
>
> E/osal_linux: (329) flush cache fail, range error!

I got this too.

 
> followed by a segfault.  Probably some silly thing on my part with
> virt/phys buffers.
No any more messages?
Did you mounted devtmpfs, as poorly explained here
http://linux-sunxi.org/CedarX/RE_Toolkit

Yes, of course /dev and other system directories (/sys, /proc) are mounted via --bind.

>
> I still need to instrument the cedar_dev driver's ioctl to see what
> it's trying to actually do, but I'm hoping this can be a start for
> others as well.

The tracer also gets the ioctls, and there is also a trace viewer.
But caution, this is not trivial to install, doesn't have error
checking, and is slow very slow.
https://gitorious.org/recedro/recedro/

Yeah, I know... I've been using valgrind on x86 for many years.  recedro seems like a really nifty tool; I'll probably try it once I've got the encoder working.

Pat 

Patrick Wood

Oct 25, 2013, 7:58:53 PM
to linux...@googlegroups.com
So I found that the problem wasn't in the encoder, but in the code I put in cedrus to save the frames (the cedar decoder appears to generate 32x32 macroblocks).  I've removed the frame.* files.  It also appears that you need ~25 frames for the encoder to work (I'm probably missing a flush somewhere), so I'm not uploading new ones.  You can create your own frame data by using mpeg-test from my cedrus branch here:  https://github.com/patrickhwood/cedrus.
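For reference, linearizing a 32x32-tiled plane back into raster order can be sketched as below. This is a from-scratch illustration, not the actual frame_write fix; it assumes the tiles are stored whole and in raster order, and that width and height are multiples of 32 (real cedar output may need cropping for other sizes).

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Convert one plane from 32x32-tile order (tiles stored one after
 * another, tile rows in raster order) into an ordinary linear raster
 * layout.  Assumes w and h are multiples of 32. */
static void untile32(const uint8_t *src, uint8_t *dst, int w, int h)
{
    int tiles_per_row = w / 32;

    for (int ty = 0; ty < h / 32; ty++)
        for (int tx = 0; tx < tiles_per_row; tx++) {
            const uint8_t *tile = src + ((size_t)ty * tiles_per_row + tx) * 32 * 32;
            for (int row = 0; row < 32; row++)          /* copy one tile row at a time */
                memcpy(dst + (size_t)(ty * 32 + row) * w + tx * 32,
                       tile + row * 32, 32);
        }
}
```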

My next step is to hook these two programs together to see if I can make an MPEG->H.264 HW transcoder.

Pat

Manuel Braga

Oct 26, 2013, 6:04:04 AM
to linux...@googlegroups.com
On Fri, 25 Oct 2013 15:31:04 -0700 (PDT) Patrick Wood
<patric...@gmail.com> wrote:
>
>
> On Friday, October 25, 2013 5:50:30 PM UTC-4, Manuel Braga wrote:
> >
> > Hi
> >
> > On Fri, 25 Oct 2013 11:58:10 -0700 (PDT) Patrick Wood
> > <patric...@gmail.com <javascript:>> wrote:
> > > I was thinking the same thing. See
> > > https://github.com/patrickhwood/h264encoder. I pulled out all
> > > the camera and preview code and read image data instead from
> > > frame.N files in the main.c loop. The frame.[0-5].xz files in
> > > the repo are planar YUV420 frames taken from a cedrus/mpeg-test
> > > run.
> >
> > But then you can't change the frame size in a easy way.
> > Why not same as http://v4l.videotechnology.com/vivi/vivi.c
> > one function that auto generates frames in the right format that
> > the hardware requires.
> >
>
> Why would I want to change the frame size?

To easily see which registers in the hardware control the input frame
dimensions (when reverse engineering).

You were using fixed frame files; now you don't, so never mind.


> Anyway, I didn't want to bring in v4l when I can get the frame data
> in the format I want directly from the cedar decoder engine. (Well,

Then I probably misunderstood what you are trying to do.

> here's hoping it's in the format I want -- the documentation and
> cedrus code all agree that it is, but since the output is garbled,
> I'm concerned that it might not be.)

Did you notice the function I420toNV12 in encoder.c?

--
mul

Enrico

Oct 26, 2013, 9:04:00 AM
to linux...@googlegroups.com
On Thursday, October 24, 2013 at 6:46:11 PM UTC+2, Manuel Braga wrote:
Hi.

On Wed, 23 Oct 2013 15:55:29 -0700 (PDT) Enrico <ebu...@gmail.com>
wrote:
> Do you think the encoder Jon shared is not minimal enough?
> I have another version for linux that is basically the same, just
> without threads. I don't know if it's possible to make the encoder
> even more simple, any suggestions?
>
> Enrico
>

But this is also a chance to the people that thinks reverse
engineering is too hard, to even try in the first place.
By simply helping in this, will give a useful contribution to pushing
forward the RE effort.


I just published my version in [1]; to summarize: a Linux example app that captures from a webcam and encodes to a raw h264 file, no display.

Have a look at the readme and git log to see what I changed, and at the video sample to see the problem with the output.

I hope it's simple enough to make tracing a bit faster.

Enrico

[1]: https://github.com/ebutera/cedar-h264enc

Rosimildo DaSilva

Oct 26, 2013, 10:14:12 AM
to linux...@googlegroups.com
This is great, team!

Congratulations to all participating in this thread. You all took the request for a simple encoder example seriously,
and now we have a few working ones, which gives the folks doing the RE options to select from.

Enrico, I would split your file "capture.c" in two: one for the capture (camera) and another for displaying the preview. I believe it would make it easier for people to read the code.

Also, I believe we have enough info in this thread to start documenting things on the Wiki at the CedarX pages.

Great work, team: we had nothing a week ago, and now we have plenty of examples to read from.

Manuel, do you think you have everything needed to help with the RE? Any more thoughts?

R

Patrick Wood

Oct 26, 2013, 10:34:52 AM
to linux...@googlegroups.com

On Saturday, October 26, 2013 6:04:04 AM UTC-4, Manuel Braga wrote:
On Fri, 25 Oct 2013 15:31:04 -0700 (PDT) Patrick Wood
<patric...@gmail.com> wrote:
>
>
> On Friday, October 25, 2013 5:50:30 PM UTC-4, Manuel Braga wrote:
> >
> > Hi
> >
> > On Fri, 25 Oct 2013 11:58:10 -0700 (PDT) Patrick Wood
> > <patric...@gmail.com <javascript:>> wrote:
> > > I was thinking the same thing.  See
> > > https://github.com/patrickhwood/h264encoder.  I pulled out all
> > > the camera and preview code and read image data instead from
> > > frame.N files in the main.c loop.  The frame.[0-5].xz files in
> > > the repo are planar YUV420 frames taken from a cedrus/mpeg-test
> > > run.
> >
> > But then you can't change the frame size in a easy way.
> > Why not same as http://v4l.videotechnology.com/vivi/vivi.c
> > one function that auto generates frames in the right format that
> > the hardware requires.
> >
>
> Why would I want to change the frame size?

To easy see which registers in the hardware controls the input frame
dimensions.(when reserve engineering)

You were using fixed frame files, now you don't, so don't mind.

Actually, I'm checking that each new frame has the same size as frame.0, but I can change that if you think it'll help with the reverse engineering efforts.  It's just a sanity check right now in case the frames are not properly formatted.
 


> Anyway, I didn't want to bring in v4l when I can get the frame data
> in the format I want directly from the cedar decoder engine.  (Well,

Then i probably misunderstood what you are trying to do.

I was trying to create the minimal demo program for testing and RE and also a launching point for a HW transcoder project.  I realize that might be a little ambitious, as it may not be possible to switch the cedar HW between encoding and decoding on a frame-by-frame basis, but it's certainly worth a try! 

> here's hoping it's in the format I want -- the documentation and
> cedrus code all agree that it is, but since the output is garbled,
> I'm concerned that it might not be.)

Did you noticed in encoder.c, the function I420toNV12

Yes.  It interleaves the U/V values.  I could bypass this and the frame_write code in mpeg-test that deinterleaves this data prior to saving it (but then the files are no longer technically YUV420p), but the issue I was having was due to the way the cedar decoder organized its output: 32x32 macroblocks.  That's fixed now in my frame_write code. 
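For readers following along, the kind of interleaving I420toNV12 performs can be sketched as below. This is an illustrative out-of-place version, not SimpleRecorder's in-place code (the in-place variant is what works inside the reserved cedar memory and pollutes the traces):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Repack planar I420 (Y plane, then U plane, then V plane) into NV12
 * (Y plane, then interleaved UVUV...).  dst must hold w*h*3/2 bytes.
 * Out-of-place, unlike SimpleRecorder's in-place conversion. */
static void i420_to_nv12(const uint8_t *src, uint8_t *dst, int w, int h)
{
    size_t luma = (size_t)w * h;
    size_t chroma = luma / 4;              /* each chroma plane is quarter size */
    const uint8_t *u = src + luma;
    const uint8_t *v = src + luma + chroma;

    memcpy(dst, src, luma);                /* Y plane is unchanged */
    for (size_t i = 0; i < chroma; i++) {  /* interleave U and V samples */
        dst[luma + 2 * i]     = u[i];
        dst[luma + 2 * i + 1] = v[i];
    }
}
```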

The encoder problems I'm seeing now: the first frame is very blocky and off-color, and frames with lots of motion have some short horizontal lines in those areas.

Pat

--
mul

Manuel Braga

Oct 26, 2013, 11:16:31 AM
to linux...@googlegroups.com
On Sat, 26 Oct 2013 06:04:00 -0700 (PDT) Enrico <ebu...@gmail.com>
wrote:
> >
> I just published my version in [1], to summarize: linux example app,
> captures from webcam and encodes to raw h264 file, no display.
>
> Have a look at the readme and git log to see what i changed, and at
> the video sample to see the problem with the output.
I tested; same result.

I think you need this function:
https://github.com/ashwing920/SimpleRecorder/blob/master/encoder.c#L160

NV12 appears to be what the hardware takes in (I'm not certain),
and v4l devices are limited by the formats they support.

> I hope it's simple enough to make tracing a bit faster.

Yes, it will help.
But there are still more changes; still up to the task?

> Enrico
>
> [1]: https://github.com/ebutera/cedar-h264enc
>

--
mul

Manuel Braga

Oct 26, 2013, 11:42:47 AM
to linux...@googlegroups.com
On Sat, 26 Oct 2013 07:34:52 -0700 (PDT) Patrick Wood
<patric...@gmail.com> wrote:
>
> On Saturday, October 26, 2013 6:04:04 AM UTC-4, Manuel Braga wrote:
> >
> > On Fri, 25 Oct 2013 15:31:04 -0700 (PDT) Patrick Wood
> > <patric...@gmail.com <javascript:>> wrote:
> > > Anyway, I didn't want to bring in v4l when I can get the frame
> > > data in the format I want directly from the cedar decoder
> > > engine. (Well,
> >
> > Then i probably misunderstood what you are trying to do.
> >
>
> I was trying to create the minimal demo program for testing and RE

Great.
Enrico is also doing something; maybe you two could join up and split
the work.

> and also a launching point for a HW transcoder project. I realize
> that might be a little ambitious, as it may not be possible to switch
> the cedar HW between encoding and decoding on a frame-by-frame basis,
> but it's certainly worth a try!

https://github.com/linux-sunxi/cedarx-libs/tree/master/enc_dec_demo
is doing exactly that: encoder in one thread, decoder in another
thread.
But I only got partial success, because that source code is missing
the .a cedar libraries, and I didn't find ones that compiled.
I only tested the pre-compiled binary there.

--
mul

Enrico

unread,
Oct 26, 2013, 11:50:34 AM10/26/13
to linux...@googlegroups.com, Hans de Goede
On Saturday, October 26, 2013 at 5:16:31 PM UTC+2, Manuel Braga wrote:
On Sat, 26 Oct 2013 06:04:00 -0700 (PDT) Enrico <ebu...@gmail.com>
wrote:
> >
> I just published my version in [1], to summarize: linux example app,
> captures from webcam and encodes to raw h264 file, no display.
>
> Have a look at the readme and git log to see what i changed, and at
> the video sample to see the problem with the output.
I tested, same result.

I think you need this function.
https://github.com/ashwing920/SimpleRecorder/blob/master/encoder.c#L160

NV12 appears to be what the hardware takes in (I'm not certain),
and v4l devices are limited in the formats they support.

I tried using NV12 but it's worse; maybe it's not "standard" NV12 but something similar, and that function does the right conversion.
I'm adding Hans to Cc; he's the libv4lconvert author, so maybe he knows if I'm doing something wrong with it.
 

> I hope it's simple enough to make tracing a bit faster.

Yes it will help.
But there are still more changes to make; still up for the task?
 

Sure.
I tried a trace with recedro/ammt too but i'm not sure i'm using it correctly, can someone give an example?

Enrico

Patrick Wood

unread,
Oct 26, 2013, 12:24:15 PM10/26/13
to linux...@googlegroups.com


On Saturday, October 26, 2013 11:42:47 AM UTC-4, Manuel Braga wrote:
On Sat, 26 Oct 2013 07:34:52 -0700 (PDT) Patrick Wood
<patric...@gmail.com> wrote:
>
> On Saturday, October 26, 2013 6:04:04 AM UTC-4, Manuel Braga wrote:
>
> I was trying to create the minimal demo program for testing and RE

Great.
Enrico is also doing something, maybe both of you could join, and split
the work.

Sure.  What do you think is left to do with the sources at this point? 

> and also a launching point for a HW transcoder project.  I realize
> that might be a little ambitious, as it may not be possible to switch
> the cedar HW between encoding and decoding on a frame-by-frame basis,
> but it's certainly worth a try!

https://github.com/linux-sunxi/cedarx-libs/tree/master/enc_dec_demo
is doing exactly that, encoder in one thread, decoder in an other
thread.
But i only got partial success, because that source code is missing
the .a cedar libraries, and didn't find ones that compiled.
Only tested the pre-compiled binary there.

Ah, I see now.  I didn't realize this program did encoding and decoding at the same time.

This should be easier for me, as I'm using the mpeg-test standalone program for decoding, which doesn't rely on any AW libraries.  Good to know the encoder and decoder are probably different blocks with separate states.  Thanks.

Pat
 

--
mul

Patrick Wood

unread,
Oct 26, 2013, 11:26:20 PM10/26/13
to linux...@googlegroups.com


On Saturday, October 26, 2013 10:34:52 AM UTC-4, Patrick Wood wrote:

On Saturday, October 26, 2013 6:04:04 AM UTC-4, Manuel Braga wrote:

To easily see which registers in the hardware control the input frame
dimensions (when reverse engineering).

You were using fixed frame files, now you don't, so never mind.

Actually, I'm checking that each new frame has the same size as frame.0, but I can change that if you think it'll help with the reverse engineering efforts.  It's just a sanity check right now in case the frames are not properly formatted.

I just tried changing the frame size in the middle of an encoding run and got pretty bad looking output.  The encoder doesn't complain, but then all I did was make a  VENC_SET_ENC_INFO_CMD ioctl call with the new width and height set in the __video_encode_format_t structure and update virAddr and phyAddrY (essentially copied code from encoder_init).  I suspect the problem is in the mkv output; I don't know a whole lot about it, and the width/height are only set in the header with no API calls anywhere to change them mid-stream.

If anyone wants to look at this, I can push an update.

Pat

Manuel Braga

unread,
Oct 27, 2013, 7:40:11 AM10/27/13
to linux...@googlegroups.com
On Sat, 26 Oct 2013 09:24:15 -0700 (PDT) Patrick Wood
<patric...@gmail.com> wrote:
> On Saturday, October 26, 2013 11:42:47 AM UTC-4, Manuel Braga wrote:
> >
> > On Sat, 26 Oct 2013 07:34:52 -0700 (PDT) Patrick Wood
> > <patric...@gmail.com> wrote:
> [...]
> >
> > Great.
> > Enrico is also doing something, maybe both of you could join, and
> > split the work.
> >
>
> Sure. What do you think is left to do with the sources at this
> point?

Sorry, but seeing you read yuv frames from files, the YUV4MPEG2 file
format didn't come to my mind.
It's from http://mjpeg.sourceforge.net/, and in debian there is a
package called mjpegtools. It is perfect for manipulating yuv video
frames; it is old, from before HD video, but still works. This way,
only one file is needed as input.

This would mean adding ffmpeg/libavformat to read YUV4MPEG2, and the
current code that outputs mkv could also be replaced with libavformat.
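For reference, the YUV4MPEG2 layout is simple: one text header line such as `YUV4MPEG2 W640 H480 F25:1 Ip A1:1 C420`, then `FRAME\n` before each raw planar frame. A minimal header parse (my sketch, not mjpegtools code; a real reader must also honor the C colorspace tag) could look like:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Pull width and height out of a YUV4MPEG2 header line.
 * Returns 0 on success, -1 if the line is not a y4m header. */
static int y4m_parse_header(const char *line, int *w, int *h)
{
    const char *p;

    if (strncmp(line, "YUV4MPEG2", 9) != 0)
        return -1;
    *w = *h = 0;
    if ((p = strstr(line, " W")) != NULL)   /* " W640" -> 640 */
        sscanf(p + 2, "%d", w);
    if ((p = strstr(line, " H")) != NULL)   /* " H480" -> 480 */
        sscanf(p + 2, "%d", h);
    return (*w > 0 && *h > 0) ? 0 : -1;
}
```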

No hurry, one thing at a time.
What you time?

--
mul

Manuel Braga

unread,
Oct 27, 2013, 7:43:54 AM10/27/13
to linux...@googlegroups.com
On Sat, 26 Oct 2013 20:26:20 -0700 (PDT) Patrick Wood
<patric...@gmail.com> wrote:
> On Saturday, October 26, 2013 10:34:52 AM UTC-4, Patrick Wood wrote:
> >
> >
> > On Saturday, October 26, 2013 6:04:04 AM UTC-4, Manuel Braga
> > wrote:
> [...]
> >
> > Actually, I'm checking that each new frame has the same size as
> > frame.0, but I can change that if you think it'll help with the
> > reverse engineering efforts. It's just a sanity check right now in
> > case the frames are not properly formatted.
> >
>
> I just tried changing the frame size in the middle of an encoding run
> and got pretty bad looking output. The encoder doesn't complain, but
> then all I did was make a VENC_SET_ENC_INFO_CMD ioctl call with the
> new width and height set in the __video_encode_format_t structure and
> update virAddr and phyAddrY (essentially copied code from
> encoder_init). I suspect the problem is in the mkv output; I don't
> know a whole lot about it, and the width/height are only set in the
> header with no API calls anywhere to change them mid-stream.
>
> If anyone wants to look at this, I can push an update.

It's likely the encoder needs to be stopped and started again.
Don't worry about it.

--
mul

Manuel Braga

unread,
Oct 27, 2013, 7:46:47 AM10/27/13
to Manuel Braga, linux...@googlegroups.com
On Sun, 27 Oct 2013 11:40:11 +0000 Manuel Braga <mul....@gmail.com>
wrote:
> On Sat, 26 Oct 2013 09:24:15 -0700 (PDT) Patrick Wood
> <patric...@gmail.com> wrote:
> > On Saturday, October 26, 2013 11:42:47 AM UTC-4, Manuel Braga wrote:
> > >
> > > On Sat, 26 Oct 2013 07:34:52 -0700 (PDT) Patrick Wood
> > > <patric...@gmail.com> wrote:
> > [...]
> > >
> > > Great.
> > > Enrico is also doing something, maybe both of you could join, and
> > > split the work.
> > >
> >
> > Sure. What do you think is left to do with the sources at this
> > point?
>
> Sorry, but seeing you read yuv frames from files, the YUV4MPEG2 file
> format didn't come to my mind.
> It's from http://mjpeg.sourceforge.net/, and in debian there is a
> package called mjpegtools. It is perfect for manipulating yuv video
> frames; it is old, from before HD video, but still works. This way,
> only one file is needed as input.
>
> This would mean adding ffmpeg/libavformat to read YUV4MPEG2, and the
> current code that outputs mkv could also be replaced with libavformat.
>
> No hurry, one thing at a time.
> What you time?
Correction, what I wanted to say was:

What do you think?

--
mul

Enrico

unread,
Oct 27, 2013, 8:18:29 AM10/27/13
to linux...@googlegroups.com

Isn't adding more features going to hurt performance for tracing? I thought the point was to keep it simple for tracing, not for generating mkv (you can do that outside of the test app).

I just modified my example to read frames from a file, so I can confirm the problem is in the colorspace conversion, because now the encoder output is correct. Now I'm sure: the input colorspace for the encoder is NV12.

I will commit my changes to github, i just don't have time right now.

Enrico

Manuel Braga

unread,
Oct 27, 2013, 10:00:02 AM10/27/13
to linux...@googlegroups.com
On Sun, 27 Oct 2013 05:18:29 -0700 (PDT) Enrico <ebu...@gmail.com>
wrote:
You're right, it is slower, but more important is ease of use and of
making changes. The output to mkv or whatever is only there to verify
that it works; when tracing, everything not needed can be commented
out, or could be a command line option.

And isn't a usable encoder library useful to people who can't wait to
use the hardware?


> I just modified my example to read frames from file, so i can confirm
> the problem is in colorspace conversion because now the encoder
> output is correct. Now i'm sure, input colorspace for the encoder is
> NV12.


And what is the format when main.c:line67,
enc_fmt.color_format = PIXEL_YUV420; is set to another value?
The person doing the RE will have to try all these options and see what
changes in the registers.
If these changes are easy to make, it will greatly help.


> I will commit my changes to github, i just don't have time right now.
We help where we can, no pressure.

--
mul

Patrick Wood

unread,
Oct 27, 2013, 12:19:59 PM10/27/13
to linux...@googlegroups.com
I agree that for RE, the smallest amount of code and shortest code path is best (which is why I stripped out the camera or preview code).  mkv output can also be disabled pretty easily, either with ifdefs (which I don't like) or just a stubbed output_stub.c and a makefile option.  The same can be done for input.c if needed, now that I moved the frame reading code out of main.c.

And isn't a usable encoder library useful to people who can't wait to
use the hardware?

Well, yes -- I happen to be one of those!  But I think the code can be structured to be both simple but still use abstractions for I/O for more complete solutions.  A "real" encoder/transcoder will need to be a lot more robust than the simple demo programs we have right now.  It would be nice to have something that works well enough to be a template and test bed for bigger, better solutions, including the RE work, much like the jpeg/mpeg-test is for the decoder.

Pat

Manuel Braga

unread,
Nov 5, 2013, 3:32:30 PM11/5/13
to linux...@googlegroups.com
Hi.

On Sat, 26 Oct 2013 09:24:15 -0700 (PDT) Patrick Wood
<patric...@gmail.com> wrote:
> On Saturday, October 26, 2013 11:42:47 AM UTC-4, Manuel Braga wrote:
> > On Sat, 26 Oct 2013 07:34:52 -0700 (PDT) Patrick Wood
> > <patric...@gmail.com <javascript:>> wrote:
> > > On Saturday, October 26, 2013 6:04:04 AM UTC-4, Manuel Braga
> > > wrote:
> > >
> > > I was trying to create the minimal demo program for testing and
> > > RE
> >
> > Great.
> > Enrico is also doing something, maybe both of you could join, and
> > split the work.
> >
>
> Sure. What do you think is left to do with the sources at this
> point?

Still up for some more changes?
If not, maybe someone else would be interested in doing them.

* That I420toNV12 function in encoder.c is polluting the traces with
useless entries, because it converts directly into cedar reserved
memory, and the tracer logs every access to that.
For the tracer it would be preferable to convert first, then use a
single memcpy to move the result into cedar memory.
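The pattern being asked for, sketched in C (names are mine; the conversion step here is a placeholder plain copy standing in for the real I420toNV12 routine):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Tracing-friendly upload: do the colorspace conversion in ordinary
 * malloc'd memory, then move the finished NV12 frame into the cedar
 * reserved buffer with one memcpy, so the tracer logs a single bulk
 * copy instead of every byte the converter touches. */
void upload_frame(const unsigned char *src, unsigned char *cedar_buf,
                  size_t width, size_t height)
{
    size_t frame_size = width * height * 3 / 2;  /* NV12: Y + UV */
    unsigned char *scratch = malloc(frame_size); /* plain user memory */

    memcpy(scratch, src, frame_size);       /* placeholder conversion */
    memcpy(cedar_buf, scratch, frame_size); /* the one traced access */
    free(scratch);
}
```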


--
mul

Enrico

unread,
Nov 5, 2013, 4:53:40 PM11/5/13
to linux...@googlegroups.com

I just committed a modified version that reads frames from a file for encoding. It's still a hack and should be cleaned up, but it should be better for tracing.

https://github.com/ebutera/cedar-h264enc

Enrico

Manuel Braga

unread,
Nov 7, 2013, 3:30:47 PM11/7/13
to linux...@googlegroups.com
On Tue, 5 Nov 2013 13:53:40 -0800 (PST) Enrico <ebu...@gmail.com>
wrote:
> On Tuesday, November 5, 2013 at 9:32:30 PM UTC+1, Manuel Braga wrote:
> >
> > Hi.
> >
> > On Sat, 26 Oct 2013 09:24:15 -0700 (PDT) Patrick Wood
> > <patric...@gmail.com> wrote:
> [...]
> [...]
> [...]
> [...]
> > >
> > > Sure. What do you think is left to do with the sources at this
> > > point?
> >
> > Still up for some more changes?
> > If not, maybe someone else could be interested in doing.
> >
> > * That I420toNV12 function in encoder.c is polluting the traces
> > with useless entries, because it converts directly into cedar
> > reserved memory, and the tracer logs every access to that.
> > For the tracer it would be preferable to convert first, then use
> > a single memcpy to move the result into cedar memory.
> >
> >
> I just committed a modified version that reads frames from a file
> for encoding. It's still a hack and should be cleaned up, but it
> should be better for tracing.
>
> https://github.com/ebutera/cedar-h264enc
>

Okay, as I said to Enrico in irc, this is too hard to explain with only
words. It's easier if I am the one who writes the needed changes to
allow good traces.

But because time is always missing, I am already busy with writing
a trace viewer.
I will submit patches, but I would like someone else to help maintain,
in working condition, an encoder suitable for making traces, and test
on A10 and A20 hardware (I only have an A13).

There are now two repositories; it is better if everyone works in one.
The code from SimpleRecorder that Patrick used as a base looks better
structured, shall we use that one?

Enrico.
Patrick, are you still on?
Someone else?

Manuel Braga

unread,
Nov 7, 2013, 3:45:06 PM11/7/13
to linux...@googlegroups.com
On Sat, 26 Oct 2013 08:50:34 -0700 (PDT) Enrico <ebu...@gmail.com>
wrote:
>
> Sure.
> I tried a trace with recedro/ammt too but i'm not sure i'm using it
> correctly, can someone give an example?
>
> Enrico
>

I saw this email but missed answering it, sorry about that.
This is the script that I am using to call valgrind.

#!/bin/sh

VGDIR=/encode/trace/valgrind

exec $VGDIR/vg-in-place \
-q \
--vgdb=no \
--trace-children=yes \
--log-socket=10.0.0.1:10000 \
--tool=ammt \
--trace-file=/dev/cedar_dev \
--show-stack-fnnames=yes \
$@

10.0.0.1 is my desktop; if there isn't anything listening at that
address, it will output to stdout, or you can use
--log-file=/path/to/file.log

Careful with big trace files, the trace viewer is slow to render them.


--
mul

Patrick Wood

unread,
Nov 7, 2013, 4:16:53 PM11/7/13
to linux...@googlegroups.com

Enrico

unread,
Nov 7, 2013, 6:22:16 PM11/7/13
to linux...@googlegroups.com

Yes, just ask :)
I'll play a bit with tracing.
 
Enrico

Enrico

unread,
Nov 7, 2013, 6:25:39 PM11/7/13
to linux...@googlegroups.com
On Thursday, November 7, 2013 at 10:16:53 PM UTC+1, Patrick Wood wrote:

We can avoid the conversion if we have frame.0 already in nv12 format; did you try that?

Enrico
 

Manuel Braga

unread,
Nov 8, 2013, 9:05:50 AM11/8/13
to linux...@googlegroups.com
On Thu, 7 Nov 2013 15:25:39 -0800 (PST) Enrico <ebu...@gmail.com>
wrote:
> On Thursday, November 7, 2013 at 10:16:53 PM UTC+1, Patrick Wood wrote:
> >
> > Do you mean something like this:
> > https://github.com/patrickhwood/h264encoder/commit/dfda30f51ba6e783fd40668c722e5398f3db6f99

Yes, that (not actually tested).
There is still a lot more to go, but I will show source code.

> We can avoid the conversion if we have frame.0 already in nv12
> format, did you try that?

I don't like reading individual frames from files. I prefer to have a
function that generates colorbar frames, and optionally to read raw
frames from only one file using libavformat.
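A colorbar generator for NV12 test frames could be as simple as this sketch (my names; the Y/U/V bar values are approximate, not broadcast-exact):

```c
#include <assert.h>

/* Fill an NV12 buffer with eight vertical color bars, so every
 * generated frame is reproducible without any input file. */
static void fill_colorbars_nv12(unsigned char *buf, int width, int height)
{
    /* approximate Y/U/V for white, yellow, cyan, green,
     * magenta, red, blue, black */
    static const unsigned char y[8] = {235, 210, 170, 145, 106, 81, 41, 16};
    static const unsigned char u[8] = {128, 16, 166, 54, 202, 90, 240, 128};
    static const unsigned char v[8] = {128, 146, 16, 34, 221, 240, 110, 128};

    for (int r = 0; r < height; r++)         /* Y plane */
        for (int c = 0; c < width; c++)
            buf[r * width + c] = y[c * 8 / width];

    unsigned char *uv = buf + width * height; /* interleaved UV plane */
    for (int r = 0; r < height / 2; r++)
        for (int c = 0; c < width / 2; c++) {
            int bar = (2 * c) * 8 / width;
            uv[r * width + 2 * c]     = u[bar];
            uv[r * width + 2 * c + 1] = v[bar];
        }
}
```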



> > On Thursday, November 7, 2013 3:30:47 PM UTC-5, Manuel Braga wrote:
> >> The are now two repositories, better is if everyone worked in one.
> >> The code from SimpleRecorder that Patrick used as base looks
> >> better structured, let's use this one?

Can we have only one repository, in which we all have commit access?
Patrick, can you add us as committers, or make a new repository that
allows this?


--
mul

Patrick Wood

unread,
Nov 8, 2013, 11:50:08 AM11/8/13
to linux...@googlegroups.com

On Friday, November 8, 2013 9:05:50 AM UTC-5, Manuel Braga wrote:
On Thu, 7 Nov 2013 15:25:39 -0800 (PST) Enrico <ebu...@gmail.com>
wrote:
> On Thursday, November 7, 2013 at 10:16:53 PM UTC+1, Patrick Wood wrote:
> >
> > Do you mean something like this:
> > https://github.com/patrickhwood/h264encoder/commit/dfda30f51ba6e783fd40668c722e5398f3db6f99

Yes that, (not actually tested)
There is still a lot more to go, but i will show source code.

> We can avoid the conversion if we have frame.0 already in nv12
> format, did you try that?

I don't like reading individual frames from files. I prefer to have a
function that generates colorbar frames, and optionally to read raw
frames from only one file using libavformat.

I'll look into abstracting the input functions some more; this would be a good place to put the color bar generator and also the NV12 interleaving.
 
I would prefer to not use libavformat (or any large set of libraries) on the basic version right now, as it's currently possible to build and run this program on armhf with a static build, even though it must be built with armel tools (running in an armel chroot is OK, but not exactly optimal).  I'd prefer to have it read just a bunch of concatenated raw images from 1 file if needed.  I'll be happy once we have something that can actually be linked with libavformat on armhf.  Presumably, the endpoint for this work is a libavcodec plugin?


> > On Thursday, November 7, 2013 3:30:47 PM UTC-5, Manuel Braga wrote:
> >> The are now two repositories, better is if everyone worked in one.
> >> The code from SimpleRecorder that Patrick used as base looks
> >> better structured, let's use this one?

Can we have only one repository, in which we all have commit access?
Patrick, can you add us as committers, or make a new repository that
allows this?

Yes. Please give me your github names and I'll add them to the collaborator list.

Pat 


--
mul

Enrico

unread,
Nov 9, 2013, 7:18:22 AM11/9/13
to linux...@googlegroups.com

ebutera

Enrico

Enrico

unread,
Nov 9, 2013, 8:35:43 AM11/9/13
to linux...@googlegroups.com

Is the attached trace file "good"? (I know you need more info, I'll elaborate more later.)

Enrico
trace.log.gz

Manuel Braga

unread,
Nov 9, 2013, 10:08:45 AM11/9/13
to linux...@googlegroups.com
On Fri, 8 Nov 2013 08:50:08 -0800 (PST) Patrick Wood
<patric...@gmail.com> wrote:
> I would prefer to not use libavformat (or any large set of libraries)

Okay.

> on the basic version right now, as it's currently possible to build
> and run this program on armhf with a static build, even though it
> must be built with armel tools (running in an armel chroot is OK, but
> not exactly optimal). I'd prefer to have it read just a bunch of
> concatenated raw images from 1 file if needed. I'll be happy once we

Then can it be rawvideo?
That way, ffmpeg/libav could be used like this:

avconv -i video.mkv -pix_fmt nv12 -f rawvideo pipe: | h264encoder \
<options still to be defined, including frame WxH, format, framerate>


> have something that can actually be linked with libavformat on
> armhf. Presumably, the endpoint for this work is a libavcodec plugin?

Not for me.
My intention is only to make it easy to use various input formats.


> Yes. Please give me your github names and I'll add them to the
> collaborator list.
https://github.com/mulb
Do you prefer that I commit to a separate branch and you do the merge,
or is committing directly to the re-project branch OK?

--
mul

Manuel Braga

unread,
Nov 9, 2013, 12:12:45 PM11/9/13
to linux...@googlegroups.com
On Sat, 9 Nov 2013 05:35:43 -0800 (PST) Enrico <ebu...@gmail.com>
wrote:
> On Thursday, November 7, 2013 at 9:45:06 PM UTC+1, Manuel Braga wrote:
> >
> > On Sat, 26 Oct 2013 08:50:34 -0700 (PDT) Enrico
> > <ebu...@gmail.com>
It is polluted.
And I forgot to say not to strip the binary; also, the tracer currently
can't properly wrap functions in statically compiled binaries.

This is how it should look in the viewer. This is on an A13, from
simplerecorder, with 16x16 frames (but it looks like the encoder can
only handle multiples of 16? 32? -- registers 0xa00 - 0xa0c).
The purple lines are abbreviations, and expand with a right-click to
become similar to the first frame, which is shown expanded:

http://i.imgur.com/hWPlecs.png


--
mul

Patrick Wood

unread,
Nov 9, 2013, 10:18:00 PM11/9/13
to linux...@googlegroups.com


On Saturday, November 9, 2013 10:08:45 AM UTC-5, Manuel Braga wrote:
On Fri, 8 Nov 2013 08:50:08 -0800 (PST) Patrick Wood
<patric...@gmail.com> wrote:

> on the basic version right now, as it's currently possible to build
> and run this program on armhf with a static build, even though it
> must be built with armel tools (running in an armel chroot is OK, but
> not exactly optimal).  I'd prefer to have it read just a bunch of
> concatenated raw images from 1 file if needed.  I'll be happy once we

Then can it be rawvideo?
That way, ffmpeg/libav could be used like this:

  avconv -i video.mkv -pix_fmt nv12 -f rawvideo pipe: | h264encoder \
  <options still to be defined, including frame WxH, format, framerate>

Interesting.  Yes, that should be pretty simple to implement. 


> have something that can actually be linked with libavformat on
> armhf.  Presumably, the endpoint for this work is a libavcodec plugin?

Not for me.
My intention is only to make it easy to use various input formats.

OK. Does anyone else reading this have any experience with libavcodec or any other encoding library?  Someone should start looking at what it'll take to create a plugin for at least one codec library.
 


> Yes. Please give me your github names and I'll add them to the
> collaborator list.
https://github.com/mulb
Do you prefer that I commit to a separate branch and you do the merge,
or is committing directly to the re-project branch OK?

I added the re-project branch for the RE work, so feel free to commit directly to it.

I've added you and ebutera to the collaborators list; let me know if you have any problems checking in code.

Pat

Patrick Wood

unread,
Nov 9, 2013, 10:56:38 PM11/9/13
to linux...@googlegroups.com
I've just pushed a patch to re-encode that makes it read NV12 as the default input frame format.  If I have time tonight, I'll push another one that lets it read a stream of frames from stdin.

Pat

Patrick Wood

unread,
Nov 10, 2013, 12:10:46 AM11/10/13
to linux...@googlegroups.com
Okay, I pushed another change that allows it to record NV12 frames from stdin.  I tested it with this:

avconv -i big_buck_bunny_480p_MPEG2_MP2_25fps_1800K.MPG -vf pad="trunc((iw+31)/32)*32" -pix_fmt nv12 -f rawvideo pipe:  | simplerecorder 864 480

Note that the raw video width needs to be padded to a multiple of 32 pixels (pad="trunc((iw+31)/32)*32").
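That pad expression, trunc((iw+31)/32)*32, is the usual round-up-to-a-multiple idiom; in C it would be:

```c
#include <assert.h>

/* Round a frame width up to the next multiple of 32 pixels,
 * matching avconv's pad="trunc((iw+31)/32)*32" expression. */
static int pad_to_32(int width)
{
    return (width + 31) / 32 * 32;   /* integer division truncates */
}
```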

Pat

Enrico

unread,
Nov 10, 2013, 6:53:41 AM11/10/13
to linux...@googlegroups.com
On Sunday, November 10, 2013 at 6:10:46 AM UTC+1, Patrick Wood wrote:
Okay, I pushed another change that allows it to record NV12 frames from stdin.  I tested it with this:

avconv -i big_buck_bunny_480p_MPEG2_MP2_25fps_1800K.MPG -vf pad="trunc((iw+31)/32)*32" -pix_fmt nv12 -f rawvideo pipe:  | simplerecorder 864 480

Note that the raw video width needs to be padded to a multiple of 32 pixels (pad="trunc((iw+31)/32)*32").

Pat


I don't want to be annoying, but all these features are hurting tracing performance and output log quality.

Just try it: compare a trace with my sample app (branch "tracing" in [1]) and latest simplerecorder.

Encoding 50 frames, 160x120 with my app outputs a ~700kB trace file, with simplerecorder it's ~21MB

Enrico

[1]: https://github.com/ebutera/cedar-h264enc/tree/tracing

Patrick Wood

unread,
Nov 10, 2013, 11:36:03 AM11/10/13
to linux...@googlegroups.com
Any idea where the bulk of the extra traces are coming from?  It's most likely one or two specific places where it's occurring.

Pat

Manuel Braga

unread,
Nov 10, 2013, 2:15:57 PM11/10/13
to linux...@googlegroups.com
On Sun, 10 Nov 2013 08:36:03 -0800 (PST) Patrick Wood
<patric...@gmail.com> wrote:
> Any idea where the bulk of the extra traces are coming from? It's
> most likely one or two specific places where it's occurring.

It's at the memcpy in encoder_encode_frame.
In Enrico's encoder the frame data is fread directly into a cedar
buffer. As this looks like a copy done on the kernel side, the tracer
doesn't see it.
tracer doesn't see it.

Can you join irc? This kind of talk is better discussed there.


--
mul

Patrick Wood

unread,
Nov 11, 2013, 3:17:19 PM11/11/13
to linux...@googlegroups.com


On Sunday, November 10, 2013 2:15:57 PM UTC-5, Manuel Braga wrote:
On Sun, 10 Nov 2013 08:36:03 -0800 (PST) Patrick Wood
<patric...@gmail.com> wrote:
> Any idea where the bulk of the extra traces are coming from?  It's
> most likely one or two specific places where it's occurring.

It's at the memcpy in encoder_encode_frame.
In Enrico's encoder the frame data is fread directly into a cedar
buffer. As this looks like a copy done on the kernel side, the tracer
doesn't see it.

Odd that fread isn't traced but memcpy is.  Or is this because the frames are read all at once before the encoder starts?  I can easily change my code to fread directly to a cedar buffer, but to operate on a stream, those calls will be made in between the encoder calls. 

Can you join irc? This kind of talk is better discussed there.

I'll look into it, but probably not.  If you want to move this discussion off of the mailing list, perhaps github comments would be better?

Pat

Manuel Braga

unread,
Nov 11, 2013, 4:05:17 PM11/11/13
to linux...@googlegroups.com
On Mon, 11 Nov 2013 12:17:19 -0800 (PST) Patrick Wood
<patric...@gmail.com> wrote:
> On Sunday, November 10, 2013 2:15:57 PM UTC-5, Manuel Braga wrote:
> > It's at the memcpy in encoder_encode_frame.
> > In Enrico's encoder the frame data is fread directly into a cedar
> > buffer. As this looks like a copy done on the kernel side, the
> > tracer doesn't see it.
> >
>
> Odd that fread isn't traced but memcpy is. Or is this because the
> frames are read all at once before the encoder starts? I can easily

(Guessing, I don't know the kernel details.)
fread makes a syscall to the kernel, and then it is the kernel that
fills the buffer; memcpy happens in user space, and the tracer is
user-space only. But this fails for small-size freads, where memcpy is
used internally.

> change my code to fread directly to a cedar buffer, but to operate on
> a stream, those calls will be made in between the encoder calls.

I already committed those changes, using read() instead,
and got 1080p traces at acceptable speed.
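The read()-into-the-cedar-buffer approach looks roughly like this (my sketch, not the committed code; it loops because read() on a pipe may return short counts):

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Read one full raw frame straight into the cedar buffer, so the
 * kernel fills the reserved memory and the user-space tracer never
 * sees a byte-by-byte copy. Returns 0 on success, -1 on EOF/error. */
static int read_frame(int fd, unsigned char *cedar_buf, size_t frame_size)
{
    size_t got = 0;
    while (got < frame_size) {
        ssize_t n = read(fd, cedar_buf + got, frame_size - got);
        if (n <= 0)
            return -1;          /* EOF or error mid-frame */
        got += (size_t)n;
    }
    return 0;
}
```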


> >
> > Can you join irc? This kind of talk is more suited to be discussed
> > there.
> >
> I'll look into it, but probably not. If you want to move this
> discussion
You don't need to be there 24/7 idle.
I only join sporadically (nick: nove) when I have something to say to
someone. There are also real-time logs:
http://irclog.whitequark.org/linux-sunxi/

> off of the mailing list, perhaps github comments would be better?

Then OK: github comments to discuss details about the source code only.


--
mul

Rosimildo DaSilva

unread,
Dec 12, 2013, 8:58:44 AM12/12/13
to linux...@googlegroups.com
It has been a month, since last post on this thread.

Can anyone provide some update of the progress on this front ?

Manuel Braga

unread,
Dec 12, 2013, 3:20:05 PM12/12/13
to linux...@googlegroups.com
On Thu, 12 Dec 2013 05:58:44 -0800 (PST) Rosimildo DaSilva
<rosi...@gmail.com> wrote:
> It has been a month, since last post on this thread.
>
> Can anyone provide some update of the progress on this front ?

On the reverse engineering front?

Too little time, too busy, too little motivation => not much done.
Of course I am speaking for myself, but I think the others are in a
similar position.

github.com/patrickhwood/h264encoder is ready to be used for tracing;
all that is needed is people willing to actually do the hard work.

In other news.
Today, I tested to confirm something I had been wondering about for
some time: that android-compiled .a libraries somewhat *work* on armel
gnu/linux.

I got libjpegenc.a from [1] to work in a half-baked copy-pasted
h264encoder: I just put a call to JpegEnc in a propitious place and
made stubs for any unresolved symbols. And when run, it resulted in a
correct jpeg file, and a trace.

This is a mess, so no source code from my part. In fact this all is a
mess; I have to take time to reorganize.

[1] https://github.com/Quarx2k/android_external_cedarx/

lyc....@gmail.com

unread,
Feb 12, 2015, 8:36:30 PM2/12/15
to linux...@googlegroups.com
Can the A31 use this demo for encoding?

On Friday, October 18, 2013 at 10:00:33 AM UTC+8, Jon Smirl wrote:
I pushed a demo h264 encoding app to:
https://github.com/jonsmirl/cam

It seems to be working, but the output file is just a blank image.
I'll keep working on it, but if anyone can see what is wrong with it
please let me know.

Not much too it, about 1,000 lines of code derived from the
enc_dec_demo program. I basically deleted all of the display code
since it is harder to build and left the encoding code.

--
Jon Smirl
jons...@gmail.com

achun liu

unread,
Feb 12, 2015, 8:55:20 PM2/12/15
to linux...@googlegroups.com
Can the demo run on linux, not android?

--
You received this message because you are subscribed to a topic in the Google Groups "linux-sunxi" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/linux-sunxi/D-2cICv9zbI/unsubscribe.
To unsubscribe from this group and all its topics, send an email to linux-sunxi...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

achun liu

unread,
Feb 12, 2015, 9:05:37 PM2/12/15
to linux...@googlegroups.com
I tried modifying enc_dec_demo, but I do not have an A31 encoding library.

vis...@nextbitcpu.com

unread,
Apr 2, 2015, 10:19:04 AM4/2/15
to linux...@googlegroups.com
Hello All,

I have just purchased and successfully set up the Hummingboard A20 platform.

I am interested in developing a real time Media encode solution, and intend to do this using a gStreamer pipeline based implementation.

In this regard, I am interested in any pointers to sample code that could help me test out the TV-IN section and the encode function, and in any documentation pointing to (stable) work that could be a baseline for these efforts.

From this thread, I understand there has been quite some work going on in this space, and to cut a long story short, it would be nice to have some pointers on my best way forward.

Please suggest!

warm regards,
Vishal B.

ditma...@gmail.com

unread,
Apr 2, 2015, 3:19:27 PM4/2/15
to linux...@googlegroups.com
Search the sunxi wiki
http://linux-sunxi.org/CedarX

And for reengineering of video decoder
http://linux-sunxi.org/VE_Planning

Rosimildo DaSilva

unread,
Apr 2, 2015, 5:57:57 PM4/2/15
to linux...@googlegroups.com, vis...@nextbitcpu.com
If you read this thread all the way, you would realize that there is no driver for the TV decoder, the block that handles the TV in on AllWinner devices.
Also, if you look at when this thread started, it was in 2012, 3 years ago, and nothing has happened.

Based on the track record of AW, always deceiving customers with GPL violations and selling H/W without properly functioning software... don't hold your breath that you are going to get anywhere with this kind of H/W.

R

Vishal B

unread,
Apr 3, 2015, 2:37:36 AM4/3/15
to Rosimildo DaSilva, linux...@googlegroups.com
R,

Are you saying that the TV-IN section does not have a driver implemented yet?

Looking into the kernel sources, I find the drivers/media/video/sunxi-csi/ directory, which I believe contains the drivers for the TV-IN section.

Again, this inference is based purely on skimming through the code and some documentation on the CSI0/CSI1 interfaces only. The veracity of the codebase therein and its usefulness is an assumption on my end right now.

Surely, there is someone who has already checked the "CSI0 v4L2 driver" and "CSI1 v4L2 driver" for Video Input section already?

Please do let me know your thoughts.

regards,
Vishal Borker


--
TVeeBoX - Sales & Marketing Division,
NextBiT Computing Pvt. Ltd.,
30/2, IInd Floor, R.K. Plaza,
CMH Road, Indiranagar,
Bangalore - 560 038,
India.


URL            : www.nextbitcpu.com
Office Ph  : +91-80-41133238/29
Mobile       : +91-9916116273
email         : vis...@nextbitcpu.com
Skype        : vishal_borker



Priit Laes

unread,
Apr 3, 2015, 3:37:13 AM4/3/15
to linux...@googlegroups.com
On Fri, 2015-04-03 at 12:07 +0530, Vishal B wrote:
> R,
>
> Are you saying that The TV-IN section does not have a driver
> implemented as yet?
>
> Looking into the kernel sources, i find drivers/media/video/sunxi-
> csi/ directory which i believe contains the drivers for the TV-IN
> section.

Believers are often wrong.

Julian Calaby

unread,
Apr 3, 2015, 7:37:15 AM4/3/15
to linux-sunxi, Rosimildo DaSilva
Hi Vishal,

On Fri, Apr 3, 2015 at 5:37 PM, Vishal B <vis...@nextbitcpu.com> wrote:
> R,
>
> Are you saying that The TV-IN section does not have a driver implemented as
> yet?
>
> Looking into the kernel sources, i find drivers/media/video/sunxi-csi/
> directory which i believe contains the drivers for the TV-IN section.

There are three video input interfaces on SunXi devices:
1. CSI _camera_ input, i.e. from a camera like the one in your phone.
There are two channels: CSI0 and CSI1. I believe there is a driver for
this in the 3.4 kernel which works.
2. TV decoder, i.e. analog video input. Nothing is known about this.
3. DVB decoder. There was work on a driver, but nothing has
materialised as far as I'm aware.

Where exactly are you expecting to get your "TV" input from?

Thanks,

--
Julian Calaby

Email: julian...@gmail.com
Profile: http://www.google.com/profiles/julian.calaby/

Vishal B

unread,
Apr 3, 2015, 10:30:11 AM4/3/15
to linux...@googlegroups.com, Rosimildo DaSilva
Hello Julian,


The kernel features two drivers:

- sun7i_tvd
- sunxi_csi0

I figure sunxi_csi0 would be the TV-in driver on my humming board, which has a TV-IN mini-jack interface that is supposed to receive YPbPr data from an external source.

Still trying to get a signal in there; the lack of documentation and support is indeed hindering things big time.

regards,
Vishal Borker






Henrik Nordström

unread,
Apr 3, 2015, 4:41:45 PM4/3/15
to linux...@googlegroups.com, Rosimildo DaSilva
On Fri, 2015-04-03 at 12:07 +0530, Vishal B wrote:

> Looking into the kernel sources, i find drivers/media/video/sunxi-csi/
> directory which i believe contains the drivers for the TV-IN section.

That is the digital CSI camera interface, for attaching digital
cameras. Drivers exist for this.

The chip also seems to have a TV-in function where it supposedly can
"decode" a PAL/NTSC video signal and encode it in digital form, but very
little is known about this function. Some binary drivers exist in
various CedarX releases.

Additionally, the chip has a digital MPEG transport stream input
interface for digital TV input. Documentation exists for this interface,
but I am not sure whether there is a driver. There is also a lack of
hardware using this interface.

Regards
Henrik



Henrik Nordström

unread,
Apr 3, 2015, 4:58:30 PM4/3/15
to linux...@googlegroups.com, Rosimildo DaSilva
On Fri, 2015-04-03 at 22:41 +0200, Henrik Nordström wrote:

> The chip also seems to have a TV-in function where it supposedly can
> "decode" a PAL/NTSC video signal and encode it in digital form, but very
> little is known about this function. Some binary drivers exist in
> various CedarX releases.

Looking in old archives I found this:
http://dl.linux-sunxi.org/SDK/A20/A20_SDK_20130319/lichee/linux-3.3/drivers/media/video/sun7i_tvd/

There was some discussion about forward-porting this driver some
years ago (2013), but it then died.

There are some v4l references in there, so maybe it does provide a usable
interface, and the binary CedarX blobs are only for encoding to
JPEG/MPEG/whatever after the video has been digitized; not sure. I have
never tried this driver at all.

Regards
Henrik
