On Jan 13, 2021, at 6:57 PM, Peter Wilson <peter....@bsc.es> wrote:
Folks
--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/golang-nuts/5a1d1ccb-26f4-4da0-94fb-679c201782dan%40googlegroups.com.
On Jan 13, 2021, at 7:25 PM, Nikolay Dubina <nikolay.d...@gmail.com> wrote:
On Jan 13, 2021, at 9:58 PM, Robert Engels <ren...@ix.netcom.com> wrote:
Why limit yourself to two? Use N routines and have each process every Nth element in the list.
So the sketch of the Go implementation is that I would have three threads: main, t0, and t1. (More for a real system, but two suffice for explanation.)
On Jan 13, 2021, at 10:04 PM, Kurtis Rader <kra...@skepticism.us> wrote:
I forgot, but it's important to mention that different synchronization methods perform differently under contention, so what works well for 2 might be really poor with 64.
On Jan 13, 2021, at 7:16 PM, Robert Engels <ren...@ix.netcom.com> wrote:
This https://github.com/robaho/go-concurrency-test might help. It has some relative performance metrics of some sync primitives. In general, though, Go uses “fibers”, which don’t match up well with busy loops - and the scheduling overhead can be minimal.
On Jan 16, 2021, at 11:14 PM, Bakul Shah <ba...@iitbombay.org> wrote:
You may be better off maintaining two state *arrays*: one that has the current values as input and one for writing the computed outputs. At the "negative" clock edge you swap the arrays. Since the input array is never modified during the half clock cycle when you compute, you can divide it up into N concurrent computations, provided a given output cell is written by just one thread. So then the synchronization point is when all the threads are done with their computing. That is when you swap the I/O state arrays, advance the clock, and unblock all the threads to compute again. You can do this with N+1 channels. The tricky part may be in partitioning the computation into more or less equal N parts.
On Jan 17, 2021, at 9:11 AM, Pete Wilson <peter....@bsc.es> wrote:
The problem is that the N or so channel communications twice per simulated clock seem to take much longer than the time spent doing useful work.
On Jan 17, 2021, at 9:21 AM, Robert Engels <ren...@ix.netcom.com> wrote:
If there is very little work to be done, then you have N “threads” do M partitions of work. If M is 10x N, you’ve decimated the synchronization cost.
On Jan 17, 2021, at 9:50 AM, Pete Wilson <peter....@bsc.es> wrote:
That’s exactly the plan.
On Jan 17, 2021, at 10:07 AM, Robert Engels <ren...@ix.netcom.com> wrote:
If you use GOMAXPROCS as a subset of physical cores and have the same number of routines, you can busy-wait in a similar fashion.
On Jan 17, 2021, at 10:46 AM, Bakul Shah <ba...@iitbombay.org> wrote:
I’d be tempted to just use C for this. That is, generate C code from a register-level description of your N simulation cores and run that. That is more or less what “cycle based” Verilog simulators (used to) do. The code gen can also split it M ways to run on M physical cores. You can also generate optimal synchronization code.
With linked lists you’re wasting half of the memory bandwidth and potentially the cache. Your number of elements is not going to change, so a linked list doesn’t buy you anything. An array is ideal from a performance point of view.