kodo-js v2.0.0 build error


braun....@aut.bme.hu

unread,
Jul 31, 2018, 10:32:05 AM
to steinwurf-dev
Hi,

I'm trying to compile the latest kodo-js, but I get the following error:

Traceback (most recent call last):
....
  File "D:\Work\kodo-js-new\waf-1.9.8-cede92d629572d85573d26cdbc2f2b42\waflib\extras\wurf\git_url_parser.py", line 54, in parse
    return GitUrl(protocol=result.group('protocol'),host=result.group('host'),path=result.group('path'))
AttributeError: 'NoneType' object has no attribute 'group'
(see attachment for full error) 


I'm using the following command: 
python2 waf configure --cxx_mkspec=cxx_default_emscripten --emscripten_path="D:\Work\kodo-js\emsdk\emscripten\1.38.10"

Environment:
$ python2 --version
Python 2.7.13 :: Anaconda 4.3.1 (64-bit)
// on Windows 10 64-bit

works with commit 9218964.

Can you point me in a direction to solve this issue?

Thank you!

Best,
Patrik
 

error.txt

Morten V. Pedersen

unread,
Aug 2, 2018, 4:14:36 AM
to steinw...@googlegroups.com
Hi Patrik,
Thanks for your message. There should be a log file called something like build/resolve.resolve.log

Could you attach that one?

All the best,
Morten
--
You received this message because you are subscribed to the Google Groups "steinwurf-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to steinwurf-de...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Patrik J. Braun

unread,
Aug 2, 2018, 10:29:01 AM
to mvidebae...@gmail.com, steinwurf-dev
Hi Morten,

Thanks for pointing me to the log file; it helped me solve the issue.

The problem was that the .git/config file contained this: 
[remote "origin"]
url = https://github.com/steinwurf/kodo-js
instead of this: 
[remote "origin"]
url = https://github.com/steinwurf/kodo-js.git

The likely cause is that I used this URL for cloning:
https://github.com/steinwurf/kodo-js
instead of this:
https://github.com/steinwurf/kodo-js.git 

Adding the .git suffix to the config file solved my issue.
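For anyone hitting the same error, the fix can be sketched with plain git commands (assuming the default remote name origin; adjust if yours differs):

```shell
# Print the URL that the wurf resolver reads (same command as in the log):
git config --get remote.origin.url

# Point the remote at the canonical, .git-suffixed URL:
git remote set-url origin https://github.com/steinwurf/kodo-js.git

# Verify the change:
git config --get remote.origin.url
```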

btw this was the content of the logfile:
wurf: Resolve execute resolve
['git', 'config', '--get', 'remote.origin.url']
out: https://github.com/steinwurf/kodo-js
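This log output also explains the traceback: if the parser's regex requires a .git suffix, re.match returns None and calling .group() raises exactly this AttributeError. The pattern below is illustrative only; the real one in git_url_parser.py may differ:

```python
import re

# Illustrative pattern -- the actual regex in waf's git_url_parser.py may differ.
# It only matches URLs that end in ".git":
pattern = re.compile(r'(?P<protocol>https?)://(?P<host>[^/]+)/(?P<path>.+)\.git$')

# The well-formed URL matches:
assert pattern.match('https://github.com/steinwurf/kodo-js.git') is not None

# The suffix-less URL produces no match, so result.group('protocol') would
# raise AttributeError: 'NoneType' object has no attribute 'group'
assert pattern.match('https://github.com/steinwurf/kodo-js') is None
```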

Best,
Patrik 


Morten V. Pedersen

unread,
Aug 3, 2018, 6:23:16 AM
to Patrik J. Braun, steinwurf-dev
Hi Patrik,
Great you solved it.

All the best,
Morten

Péter Vingelmann

unread,
Aug 8, 2018, 2:31:53 PM
to bra.p...@gmail.com, Steinwurf Developer mailing list
Dear Patrik,

Good thinking with the .git suffix; we had the exact same issue in this thread:

I just deployed a new waf binary in kodo-js that should "tolerate" the incomplete HTTPS URL (no .git suffix), so this won't happen again.
Nevertheless, we recommend using the well-formed URL in all cases: https://github.com/steinwurf/kodo-js.git

Cheers,
Peter

Patrik J. Braun

unread,
Aug 8, 2018, 3:25:07 PM
to Péter Vingelmann, steinwurf-dev
Thanks Péter,

I would also like to note that since Emscripten v1.38.1 (05/17/2018), the default output is WebAssembly (so a *.js and a *.wasm file instead of *.js and *.mem).
Including the lib should work the same way, but one must wait for Module.onRuntimeInitialized to be triggered.
This means that after upgrading to the latest Emscripten, the files in the example folder will fail.
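One way to wait for the runtime is to wrap the callback in a Promise. This is only a sketch: kodoReady is a hypothetical helper, and calledRun is the flag Emscripten sets once the runtime has started, so the branch guards against registering the callback too late.

```javascript
// Hypothetical helper: resolve once the Emscripten runtime is initialized.
function kodoReady(Module) {
    return new Promise((resolve) => {
        if (Module.calledRun) {
            resolve(Module); // runtime already initialized
        } else {
            Module.onRuntimeInitialized = () => resolve(Module);
        }
    });
}

// Usage (module file name assumed):
// kodoReady(require('./kodo.js')).then((kodo) => { /* build coders here */ });
```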

Another change with WebAssembly is that setting Module.TOTAL_MEMORY before loading the lib is no longer possible; it has to be done at compile time.
Could you give me some hints on how to pass flags (like -s ALLOW_MEMORY_GROWTH=1) to Emscripten (not to gcc or the linker) through waf?

I also ran a quick (not very representative) benchmark and found that the setup (coder initialization) time is significantly faster:

old version with asm.js:
Selected benchmark parameters:
  Field: binary8
  Symbols: 64
  Symbol size: 1600
Setup time: 22579 microsec  
Encoding time: 30467.9 microsec
Decoding time: 14249.03 microsec
Encoding rate: 6.721829 MB/s
Decoding rate: 7.186456 MB/s
memory usage: 26.869993 MB

latest with WebAssembly:
Selected benchmark parameters:
  Field: binary8
  Symbols: 64
  Symbol size: 1600
Setup time: 540.86 microsec  
Encoding time: 29697.89 microsec
Decoding time: 14138.95 microsec
Encoding rate: 6.896114 MB/s
Decoding rate: 7.242403 MB/s
memory usage: 28.3834 MB

* For the benchmark, I used the source provided in the example folder. I ran the main code 50 times and calculated an average.
** Tested on Windows 10, node v8.11.2, i7-6700HQ

Best,
Patrik

Péter Vingelmann

unread,
Aug 8, 2018, 4:38:27 PM
to bra.p...@gmail.com, Steinwurf Developer mailing list
We haven't tried the latest Emscripten yet, our current testing version is 1.37.x, but thank you for the info about these updates!

Those benchmark results look pretty good, but I still have low confidence in the usefulness of this test. I ran the benchmark on a few machines and the numbers fluctuated heavily, so we cannot draw any definitive conclusions from the average values. Hopefully, the overall stability of Emscripten programs will improve in future versions. To be honest, a meaningful benchmark would be running this code in a real browser session. We just use node.js because it can run a JS program in a terminal; node.js server-side performance is not important for us (if you run network coding on a server, do it in real C++).

The "-s ALLOW_MEMORY_GROWTH=1" option is problematic, because this is not a single flag like "-O2".
But you can manually add a cxxflag or linkflag at this point in the wscript: https://github.com/steinwurf/kodo-js/blob/master/wscript#L17
Try something like this:
bld.env.append_value('CXXFLAGS', '-s ALLOW_MEMORY_GROWTH=1')
bld.env.append_value('LINKFLAGS', '-s ALLOW_MEMORY_GROWTH=1')

I don't know if this flag is needed for the compile phase or the link phase, so you might need both or just one.
If you run waf with "python waf build -j1 -v", then it will print all the flags that are passed to the compiler/linker.

Cheers,
Peter

Patrik János Braun

unread,
Aug 9, 2018, 11:59:41 AM
to Péter Vingelmann, steinwurf-dev
Hi Péter,

Thanks for the help.

This was the solution in the end (the flag and its value must be appended as separate list elements; otherwise waf passes them to emcc as a single argument):
    bld.env.append_value('CXXFLAGS', '-s')
    bld.env.append_value('CXXFLAGS', 'ALLOW_MEMORY_GROWTH=1')
    bld.env.append_value('LINKFLAGS', '-s')
    bld.env.append_value('LINKFLAGS', 'ALLOW_MEMORY_GROWTH=1')
 
Do you happen to have any public benchmark results for kodo-js?
Also, about node: as far as I understand, it's not super straightforward to call C++ from node. You need to use N-API to build a native module that can call into the C++ lib.
So for small/mock-up projects I found it much easier to just use the Emscripten version of kodo for node too.

Best,
Patrik

Patrik János Braun

unread,
Aug 14, 2018, 10:04:36 AM
to Péter Vingelmann, steinwurf-dev
Hi All,

I've ended up building a little benchmark testbed for kodo-js to run some measurements, here you can find the results:
https://github.com/bpatrik/kodo-js-benchmark/tree/master/results

Best,
Patrik

Péter Vingelmann

unread,
Aug 14, 2018, 1:43:28 PM
to Braun....@aut.bme.hu, Steinwurf Developer mailing list
Dear Patrik,

I just looked at your code modification below:

template<class Coder>
emscripten::val coder_write_payload(Coder& coder)
{
    std::vector<uint8_t> payload(coder.payload_size());
    coder.write_payload(payload.data());
    return emscripten::val(emscripten::typed_memory_view(payload.size(), payload.data()));
}

This is totally unsafe, because the payload std::vector is destroyed when the function returns! Therefore the typed memory view points to an area where the memory was already freed and the data will not be copied!

If you want to return a valid memory view to the payload buffer, then you have to copy the data to a Uint8Array that is *not* allocated on the Emscripten heap, but somewhere else in the regular JS memory space (just like a user-created array). I tried this approach in this branch: https://github.com/steinwurf/kodo-js/blob/test-arrays/src/kodo_js/coder.hpp#L28
Copying also takes time, so this "heavily-optimized" solution was actually slower in my benchmarks. And it is certainly more complex, so we will stick with the current approach until Emscripten introduces a more efficient way to share memory.
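The copy-to-a-JS-array idea can be sketched roughly as below. This is an illustration of the approach, not the exact code in the linked branch; the helper name coder_write_payload_copy is made up, and the embind calls (emscripten::val::global, new_, call) are the standard way to construct and fill a Uint8Array from C++:

```cpp
#include <cstdint>
#include <vector>
#include <emscripten/val.h>

// Sketch: return a copy of the payload that survives the function return.
template<class Coder>
emscripten::val coder_write_payload_copy(Coder& coder)
{
    std::vector<uint8_t> payload(coder.payload_size());
    coder.write_payload(payload.data());

    // This view aliases the Emscripten heap and is only valid while
    // `payload` is alive...
    emscripten::val heap_view = emscripten::val(
        emscripten::typed_memory_view(payload.size(), payload.data()));

    // ...so copy it into a fresh Uint8Array in regular JS memory before
    // the vector is destroyed.
    emscripten::val copy =
        emscripten::val::global("Uint8Array").new_(payload.size());
    copy.call<void>("set", heap_view);
    return copy;
}
```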


Your benchmarks are interesting, and it is funny to see that binary16 is almost as fast as binary8! We have a huge difference between those 2 fields in C++ (binary16 is painfully slow), so the coding operations are not so relevant here, we are benchmarking something else ;) I think Emscripten gives us a huge overhead just for copying data and calling functions.

So the JS engine is about 50-100x slower than C++, and that's why we cannot recommend using node.js on the server side. If you run node.js, then you can serve 10 HD streams, but with C++ you can serve 1000 streams. I think node.js might be useful for prototyping a web application, but nothing beyond that.

Cheers,
Peter

Patrik János Braun

unread,
Aug 26, 2018, 5:04:51 PM
to Péter Vingelmann, steinwurf-dev
Hi Péter,

Sorry for the late reply, I got busy lately. 

Thank you for pointing out the error in my code.
I've fixed it with a similar solution (using slice instead of set), and it seems to work.
I've also used this method for each function that returns an std::string.

I ran my benchmark on it, and I saw some improvements:
- using the memory view helps
- wasm seems to be equal to or faster than asm.js (definitely faster if ALLOW_MEMORY_GROWTH is enabled)
- using Emscripten v1.38 vs v1.37 reduces setup time but decreases decoding throughput (this is mostly an assumption and needs to be double-checked)

Best,
Patrik




Yair Enrique Rivera Julio

unread,
Aug 18, 2020, 12:38:02 AM
to steinwurf-dev
Hello, how did you run this test?
Thank you
