Calling all users of ParallelAccelerator.


Todd Anderson

Jul 12, 2016, 4:23:05 PM7/12/16
to julia-users
Hello,

  I'm one of the developers of the Intel ParallelAccelerator package for Julia.  https://github.com/IntelLabs/ParallelAccelerator.jl

  Now that the package has been out for a while, I'd like to poll the user community.

1) Who has used the package to accelerate some real application that they are working on?  If you fall into this category, please drop us a note.
2) If you tried the package but it didn't work for some reason, or if you need support for some feature, please also let us know.  Soon after Julia 0.5 is released, we will release an updated version of ParallelAccelerator that supports parallelization via threading through regular Julia codegen.  Going through Julia codegen greatly improves code coverage: our current path through C++ with OpenMP places several restrictions on which Julia features can be converted to C, and most of those restrictions are lifted by going through native Julia codegen.
3) If you haven't heard about ParallelAccelerator before and you have an application that is array or stencil oriented and you would like to see if it can be automatically parallelized then please check out our package.
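For readers in category 3, basic usage is to annotate an array-oriented function with the package's @acc macro. A minimal sketch (the function and its body are illustrative examples, not taken from the package's own demos):

```julia
using ParallelAccelerator

# @acc compiles the annotated function through ParallelAccelerator,
# parallelizing data-parallel patterns such as element-wise array
# expressions and reductions.
@acc function axpy_sum(a::Float64, x::Vector{Float64}, y::Vector{Float64})
    # the element-wise map and the sum reduction are both candidates
    # for fusion into a single parallel loop
    return sum(a .* x .+ y)
end

axpy_sum(2.0, ones(1000), ones(1000))
```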

thanks,

Todd


Jeffrey Sarnoff

Jul 12, 2016, 8:07:07 PM7/12/16
to julia-users
Thank you Todd, thank you Intel. The Black-Scholes example helped me understand this capability.
I am pleased that it continues to be developed, and that it will become better still when released for v0.5.

Yaakov Borstein

Jul 16, 2016, 6:18:20 AM7/16/16
to julia-users
Getting rid of the conversion restrictions would be great; they seemed to be a barrier when I looked at using your package some months ago.  Looking forward to the new releases under 0.5, keep up the great work.  I have no doubt that, over time, the impressive performance boosts will lead many to look into applying ParallelAccelerator.

Chris Rackauckas

Jul 16, 2016, 11:25:27 AM7/16/16
to julia-users
Thank you for this work! I am particularly interested in using it with the Xeon Phi. I haven't yet done extensive tests of the work from https://github.com/IntelLabs/CompilerTools.jl/issues/1; I will be doing that over the summer.

I am trying to incorporate it into DifferentialEquations.jl to speed up some routines, and I will probably use it in VectorizedRoutines.jl as well. One issue I am having is dealing with ParallelAccelerator as a conditional dependency: I want to apply the @acc macro only when the user has the package installed (and working?). This is crucial since the package doesn't work on Windows. Conditionally applying macros and packages is difficult.
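One workaround sketch for the conditional-dependency problem, assuming the Julia 0.4/0.5-era `Pkg.installed` API (the `@maybe_acc` name is hypothetical, not part of either package): define a pass-through macro when ParallelAccelerator is absent, so downstream code always writes the same annotation.

```julia
# If ParallelAccelerator is installed, @maybe_acc expands to @acc;
# otherwise it leaves the annotated definition untouched.
if Pkg.installed("ParallelAccelerator") != nothing
    using ParallelAccelerator
    macro maybe_acc(ex)
        esc(:(ParallelAccelerator.@acc $ex))
    end
else
    macro maybe_acc(ex)
        esc(ex)
    end
end

# Usage is identical whether or not the accelerator is present.
@maybe_acc function square_all(x)
    return x .* x
end
```

`Pkg.installed("Foo")` returns a version number when the package is installed and `nothing` otherwise, which is why the `!= nothing` test works in that era of the Pkg API.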

André Lage

Jul 21, 2016, 5:30:45 PM7/21/16
to julia-users, Naelson Douglas, Raphael Ribeiro
Hi Todd,

First, congratulations to the @acc team for the great job!

We are implementing a new version of CloudArray (https://github.com/gsd-ufal/CloudArray.jl) using ParallelAccelerator.jl. We are building a cloud service for processing fully polarimetric SAR (PolSAR) images, real PolSAR images from the NASA UAVSAR project (http://uavsar.jpl.nasa.gov); we have ~4 TB of fully PolSAR images on Azure SSD disks. We forked JuliaBox and adapted it to Azure, running Julia on top of Docker and Azure.

Naelson (Cc'ed) ran into some trouble after an update; he'll write here if he still hasn't solved the problem.

We're glad to hear that ParallelAccelerator.jl will use Julia threads; this will probably save us time in investigating how to take advantage of both @acc and threads.
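For reference, the experimental native-threading API in Julia 0.5 that the new backend targets looks like this (a sketch independent of ParallelAccelerator; Julia must be started with the JULIA_NUM_THREADS environment variable set):

```julia
# Experimental Base.Threads in Julia 0.5: iterations of the annotated
# loop are distributed across whatever threads JULIA_NUM_THREADS
# made available at startup.
function threaded_add!(out, a, b)
    Threads.@threads for i in eachindex(a)
        @inbounds out[i] = a[i] + b[i]
    end
    return out
end

threaded_add!(zeros(4), [1.0, 2.0, 3.0, 4.0], ones(4))
```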

Best,


André Lage.

pev...@gmail.com

Jul 22, 2016, 11:07:19 PM7/22/16
to julia-users
Hi Todd,
I have tried several times to use ParallelAccelerator to speed up my toy neural-network library, but I never saw any significant performance boost. I like the idea of the project a lot; sadly, I was never able to fully utilise it.

Best wishes,
Tomas 

DrTo...@comcast.net

Jul 22, 2016, 11:42:57 PM7/22/16
to julia...@googlegroups.com
You may also want to look at another IntelLabs project on GitHub called Latte.  It provides a DSL for deep neural networks in Julia.

Todd

pev...@gmail.com

Jul 25, 2016, 5:59:03 AM7/25/16
to julia-users
Hi Todd,
I have been looking at Latte, and it does not seem to be useful for me, since I need some special constructs that are just not available.
Nevertheless, I would like to ask whether Latte uses parallelization. In my own implementation, I am struggling to exploit multi-core hardware.
Thank you very much.
Best wishes,
Tomas

Todd Anderson

Oct 25, 2016, 12:01:38 PM10/25/16
to julia-users
The next version of ParallelAccelerator for Julia 0.5 has just been released, with experimental support for Julia native threading.  Please see the new julia-users post for details of the release.
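As with any use of Julia native threading, the thread count must be set before Julia starts (the script name below is a placeholder):

```shell
# Start Julia 0.5 with four threads so the experimental
# threading backend has workers to schedule onto.
JULIA_NUM_THREADS=4 julia myscript.jl
```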