E:\jnmatrix\benchmarking>jruby --profile elements1,000,000.rb
Profiling enabled; ^C shutdown will now dump profile info
#<Java::Nmatrix::Jnmatrix:0x6d7b4f4c>
#<Java::Nmatrix::Jnmatrix:0x3108bc>
Benchmarking for Addition
0.327000 0.000000 0.327000 ( 0.327373)
Benchmarking for Subtraction
0.895000 0.000000 0.895000 ( 0.894524)
Benchmarking for Multiplication
9.604000 0.000000 9.604000 ( 9.612699)
main profile results:
Total time: 18.37
total self children calls method
----------------------------------------------------------------
10.84 0.00 10.84 6 Benchmark.measure
9.61 0.00 9.61 1 Object#multiply
9.47 9.47 0.00 1 Java::Nmatrix::Linear#matrixMultiplicationFunction
5.48 0.04 5.44 53 Kernel.require
4.95 0.00 4.95 8 Kernel.require
1.83 0.00 1.83 148 Class#new
1.64 0.67 0.97 2 Array#initialize
1.29 1.29 0.00 6 Java::Nmatrix::Linear#matrixCreation
0.97 0.87 0.10 2000000 Kernel.rand
0.89 0.00 0.89 1 Object#subtract
0.75 0.08 0.68 27 Kernel.load
0.33 0.00 0.33 1 Object#add
0.23 0.01 0.22 58 Array#each
0.22 0.00 0.22 1 Gem::Specification.load_defaults
0.22 0.00 0.22 1 Gem::Specification.each_spec
0.22 0.00 0.22 1 Gem::Specification.each_gemspec
0.17 0.00 0.17 10 Gem::Specification.load
0.13 0.00 0.13 2 Nmatrix#initialize
0.13 0.00 0.13 5 ConcreteJavaProxy.new
0.13 0.00 0.13 5 ConcreteJavaProxy#initialize
0.13 0.13 0.00 2 Java::Nmatrix::Jnmatrix#__jcreate!
0.10 0.10 0.00 2000000 Fixnum#-
0.08 0.08 0.00 10 IO.read
0.08 0.02 0.06 12 Kernel.eval
0.06 0.00 0.06 10 Gem::Specification#initialize
0.04 0.00 0.04 10 Gem.register_default_spec
0.04 0.04 0.00 1 Java::Nmatrix::Linear#matrixAdditionFunction
0.03 0.01 0.03 3 Object#java_import
0.03 0.03 0.00 1 Java::Nmatrix::Linear#matrixSubtractionFunction
0.03 0.00 0.03 27 Array#map
0.03 0.00 0.03 1 JRuby.runtime
0.02 0.00 0.02 30 Gem::Specification#add_development_dependency
0.02 0.01 0.01 33 Gem::Specification#add_dependency_with_type
0.02 0.02 0.00 64 String#=~
0.02 0.02 0.00 12 JavaUtilities.get_proxy_or_package_under_package
0.02 0.02 0.00 970 String#sub
0.02 0.02 0.00 1 Time.now
0.02 0.00 0.02 9 IO#puts
0.02 0.00 0.02 8 Kernel.puts
0.02 0.02 0.00 3 JavaUtilities.get_proxy_class
0.02 0.00 0.01 1 Gem.win_platform?
0.02 0.01 0.00 1 JRuby.reference0
0.01 0.00 0.01 1 Enumerable.find
0.01 0.00 0.01 5 Java::Java.method_missing
0.01 0.01 0.00 15 IO#write
0.01 0.00 0.01 2 Java::OrgJruby::RubyBasicObject#getRuntime
0.01 0.00 0.01 48 Gem::Requirement#initialize
0.01 0.00 0.01 33 Gem::Dependency#initialize
0.01 0.00 0.01 10 Gem::Specification#files
0.01 0.00 0.01 48 Array#map!
I also tried matrix multiplication directly in Java and got similar results (about 9.6 seconds). What do you suggest?
Initially, when I reached out to Rodrigo about creating an mdarray wrapper for Nmatrix, he suggested that I implement multi-threading by writing "the parallel code as a block (closure) in Java 8. Then use a library such as Aparapi (or any other) that supports GPU to allow matrix operations on the GPU."
Should I try to work on it?
I have the following algorithm in mind (a Java 8 sketch follows below):
Given an M×N matrix A and an N×P matrix B:
start M threads, one per row i ==> Result[i][] = A[i][] (a 1×N row) times B (N×P)
return Result
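A minimal Java 8 sketch of that row-parallel scheme (class and method names are mine, and I use a parallel stream rather than literally starting M raw threads, since the fork/join pool already sizes itself to the available cores):

    import java.util.stream.IntStream;

    public class ParallelMatMul {
        // Row-parallel multiplication: each task computes one full row of C = A * B.
        static double[][] multiply(double[][] a, double[][] b) {
            int m = a.length, n = b.length, p = b[0].length;
            double[][] c = new double[m][p];
            IntStream.range(0, m).parallel().forEach(i -> {
                for (int k = 0; k < n; k++) {       // k before j keeps reads of b sequential in memory
                    double aik = a[i][k];
                    for (int j = 0; j < p; j++) {
                        c[i][j] += aik * b[k][j];
                    }
                }
            });
            return c;
        }
    }

Each lambda writes only its own row of c, so no synchronization is needed.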
--
Hello!
I do not have my machine handy but wanted to offer some suggestions.
I'm not familiar with the algorithms being used here, so I can't comment on that. However, I can suggest some ways to investigate perf.
First, --profile only reports timings for methods called from Ruby. If you pass --sample you can enable the JVM's sampling profiler, which will show in more detail where any bottlenecks are. If that doesn't look conclusive, we can also turn on instrumented (full-timing) profiling with a few different JVM flags.
We may also want to see if there's any allocation happening along the hot paths, since that would be a big hit to perf. I wouldn't expect matrix-library authors to make that mistake, though.
I'm not sure I saw warmed-up results, and that could change many things too. The fact that JRuby's performance is fairly linear makes me suspect the JIT is not kicking in, and I would generally be surprised to see it kick in for subsecond benchmarks. Pass -Xjit.logging=true to show JRuby JIT activity and -J-XX:+PrintCompilation to see JVM JIT activity. We would want to see the hottest methods (e.g. the top items in a sampled profile) getting jitted by JRuby (if Ruby) or the JVM (if Java or JRuby-jitted Ruby).
Check GC with -J-XX:+PrintGCDetails.
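For example, combining those (probably one flag at a time in practice, so the output stays readable):

    jruby --sample -Xjit.logging=true -J-XX:+PrintCompilation -J-XX:+PrintGCDetails elements1,000,000.rb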
- Charlie (mobile)
On Mar 22, 2016 06:40, "Prasun Anand" <prasunan...@gmail.com> wrote:
>
> I have implemented jblas-backed addition, subtraction, and matrix multiplication, and have benchmarked the results.
> NMatrix-JBLAS comes close to NMatrix-ATLAS, but NMatrix-ATLAS is still the clear winner.
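> For reference, the three operations through jblas look roughly like this (matrix sizes here are placeholders, not the benchmarked ones):
>
>     import org.jblas.DoubleMatrix;
>
>     public class JblasOps {
>         public static void main(String[] args) {
>             DoubleMatrix a = DoubleMatrix.rand(1000, 1000);  // random test inputs
>             DoubleMatrix b = DoubleMatrix.rand(1000, 1000);
>             DoubleMatrix sum  = a.add(b);    // element-wise addition
>             DoubleMatrix diff = a.sub(b);    // element-wise subtraction
>             DoubleMatrix prod = a.mmul(b);   // matrix multiplication via native BLAS
>         }
>     }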
> The charts are: [benchmark charts comparing NMatrix-JBLAS and NMatrix-ATLAS; images omitted]