--
You received this message because you are subscribed to the Google Groups "mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mechanical-symp...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
What I do is have a Bytes interface with implementations that wrap heap ByteBuffer, direct ByteBuffer, Unsafe.allocateMemory and memory-mapped files. It can be 63-bit sized. Note: it unwraps ByteBuffer, thus bypassing some of its protections. It also supports ObjectOutput, ObjectInput, Appendable (for writing text), ByteStringParser (for parsing text), compressed types, object pooling for deserialization, and thread-safe constructs such as volatile, ordered and atomic operations, and locking. Using an Externalizable to/from Bytes is significantly faster than using ObjectInput/OutputStream.

This sort of abstraction is needed to hide the limitations or implementation details of where you get your memory from. It can even have less overhead than the thing you wrap. ;)
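A minimal sketch of what such a Bytes abstraction might look like, assuming only a heap-ByteBuffer backing; the interface and class names here are illustrative, not the actual library:

```java
import java.nio.ByteBuffer;

// Hypothetical minimal version of the Bytes idea: one interface, one backing.
// Real implementations would also wrap direct buffers, Unsafe memory and
// memory-mapped files behind the same interface.
interface Bytes {
    void writeLong(long offset, long value);
    long readLong(long offset);
}

final class HeapBytes implements Bytes {
    private final ByteBuffer buffer;

    HeapBytes(int capacity) {
        this.buffer = ByteBuffer.allocate(capacity);
    }

    @Override
    public void writeLong(long offset, long value) {
        // Heap backing is still int-indexed; a 63-bit implementation would
        // dispatch on the offset instead of narrowing it.
        buffer.putLong((int) offset, value);
    }

    @Override
    public long readLong(long offset) {
        return buffer.getLong((int) offset);
    }
}

public class BytesDemo {
    public static void main(String[] args) {
        Bytes bytes = new HeapBytes(64);
        bytes.writeLong(8, 42L);
        System.out.println(bytes.readLong(8)); // prints 42
    }
}
```

Callers only see `Bytes`, so swapping the heap buffer for Unsafe-allocated or memory-mapped storage does not ripple through the rest of the code.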
On 5 February 2014 15:51, ymo <ymol...@gmail.com> wrote:
Thank you Peter for your detailed answer! Now I am wondering if using Unsafe.allocateMemory was the Right (TM) thing to do? I imagine it would require quite a change though :-(
On Wednesday, February 5, 2014 10:35:27 AM UTC-5, Peter Lawrey wrote:
1) ByteBuffers were designed and implemented in 2002, when machines were 32-bit or had less than 2 GB. Some problems with using ByteBuffers:
- the size is limited to Integer.MAX_VALUE, i.e. 2 GB - 1 !!
- the data is zeroed out, which has an overhead in clearing, and you end up touching every page, i.e. you cannot allocate virtual memory and have it turn into real memory lazily;
- every access, e.g. every byte access, has a bounds check which is not optimised away by the JVM. This makes it quite a bit slower for byte accesses. For long/double accesses it is only about 5% slower.

2) This is what ByteBuffer does already, so "converting" wouldn't help. There is no way to sub-class ByteBuffer to address 2+ GB as all the values are int (except for the address).
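Both limitations above are easy to see directly: the capacity parameter is an int, and every absolute access is checked against the limit. A small illustration (class name is just for the example):

```java
import java.nio.ByteBuffer;

public class BoundsCheckDemo {
    public static void main(String[] args) {
        // allocate(int) means capacity tops out at Integer.MAX_VALUE (2 GB - 1)
        ByteBuffer bb = ByteBuffer.allocate(16);
        try {
            // every absolute get/put is bounds-checked against the limit
            bb.get(16);
        } catch (IndexOutOfBoundsException e) {
            System.out.println("bounds check fired");
        }
    }
}
```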
On 5 February 2014 14:15, ymo <ymol...@gmail.com> wrote:
Hi all. I am playing with https://github.com/real-logic/simple-binary-encoding and I notice that it uses ByteBuffer.allocateDirect instead of Unsafe.allocateMemory(). Unfortunately allocateDirect only takes an int, so if someone wanted to allocate a huge amount of memory they are just out of luck.
1) Is it by design?
2) Can you convert memory allocated by Unsafe.allocateMemory to a ByteBuffer so that it can be used by uk.co.real_logic.sbe.codec.java.DirectBuffer?
P.S. Kudos to Martin et al. for this library!
The reality is you can have a List<ByteBuffer> if you need 2 GB or more.
If you are building your own abstractions, then using a list of ByteBuffers with wrapper methods can help. This allows you to still use NIO to fill in the ByteBuffers, then put them in a class that provides access to the underlying content. If you want contiguous chunks of memory greater than 2 GB then you can possibly allocate using Unsafe, but you are on your own as to how you do IO with the chunk you received. There are JNI wrappers around epoll available that can help you here if network IO is what you need. It might even be faster than the epoll SelectorImpl in Java. But again, you are on your own when it comes to processing these chunks, since there are no established conventions on how to represent, access and modify chunks greater than 2 GB. Most libraries don't take a (pointer, size) pair of longs.
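A sketch of the list-of-ByteBuffers approach: a wrapper that exposes a long-addressable view over fixed-size chunks. The class and method names are illustrative, not from any library mentioned in the thread, and for simplicity it only handles single bytes (multi-byte values spanning a chunk boundary would need extra care):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical long-indexed view over a list of int-sized ByteBuffers.
final class ChunkedBuffer {
    private final int chunkSize;
    private final List<ByteBuffer> chunks = new ArrayList<>();

    ChunkedBuffer(long totalSize, int chunkSize) {
        this.chunkSize = chunkSize;
        long remaining = totalSize;
        while (remaining > 0) {
            int size = (int) Math.min(remaining, chunkSize);
            chunks.add(ByteBuffer.allocate(size)); // could be allocateDirect
            remaining -= size;
        }
    }

    void putByte(long index, byte b) {
        // split the 64-bit index into (chunk, offset-within-chunk)
        chunks.get((int) (index / chunkSize)).put((int) (index % chunkSize), b);
    }

    byte getByte(long index) {
        return chunks.get((int) (index / chunkSize)).get((int) (index % chunkSize));
    }

    public static void main(String[] args) {
        ChunkedBuffer cb = new ChunkedBuffer(32, 8); // 4 chunks of 8 bytes
        cb.putByte(10, (byte) 7);                    // lands in chunk 1, offset 2
        System.out.println(cb.getByte(10));          // prints 7
    }
}
```

Because each chunk is an ordinary ByteBuffer, NIO channels can still fill them one at a time; only the addressing layer needs to know about the long index.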
JNI is the best option, but you can also create an empty direct ByteBuffer and set its address and capacity via reflection.
From my tests, not zeroing out helps on Linux but not on Windows.
You can use reflection to get the raw map() and unmap() native methods on FileChannelImpl. The low-level native methods all support 64-bit sizes.
On the Ubuntu system I have, allocations larger than 128 KB use mmap, which also has a problem: there appears to be a limit on how many mmaps your program can have, which is pretty easy to reach. Needs more research. ...
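Assuming a Linux system, the per-process mapping limit being hit here is likely vm.max_map_count (65530 by default on most distributions); it can be inspected, and raised if needed:

```shell
# Read the current per-process limit on memory mappings (Linux only)
cat /proc/sys/vm/max_map_count

# Raise it system-wide (requires root); 262144 is just an example value
# sudo sysctl -w vm.max_map_count=262144
```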
Martin you are da maaaaan ))) One quick question however: the allocation is still limited to an int as far as capacity is concerned, so what is the new advantage? I would assume that the only reason someone would want to use Unsafe.allocateMemory is to circumvent the capacity limit as well as the bounds checks on the buffer, which cannot otherwise be avoided.