G1NewSizePercent: These are the important ones. You can now specify percentages of an overall desired range for the new generation. With these settings, we tell G1 not to use its default 5% for the new generation, and instead give it 40%! Minecraft has an extremely high memory allocation rate, reaching at least 800 megabytes per second on a 30-player server, and this is mostly short-lived objects (Block Position).
G1ReservePercent=20: MC's memory allocation rate in up-to-date versions is really insane. We run the risk of the dreaded "to-space exhaustion" from not having enough memory free to move data around. This ensures more memory is kept waiting for that operation. The default is 10, so we are giving it another 10.
MaxTenuringThreshold=1: Minecraft has a really high memory allocation rate. Of that memory, most is reclaimed in the eden generation. However, transient data will overflow into survivor. I initially played with completely removing Survivor and had decent results, but it does result in transient data making its way to Old, which is not good. MaxTenuringThreshold=1 ensures that we do not promote transient data to the old generation, but anything that survives two passes of garbage collection is just assumed to be longer-lived.
G1HeapRegionSize=8M+: The default is auto-calculated. SUPER important for Minecraft, especially 1.15, as in low-memory situations the default calculation will usually be too low. Any memory allocation over half of this size (4MB) will be treated as "Humongous", promoted straight to the old generation, and is harder to free. If you let Java use the default, you will be destroyed by a significant chunk of your memory being treated as Humongous.
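Putting the options above together, a launch line might look like the following sketch. The heap size (`-Xms`/`-Xmx`), the `G1MaxNewSizePercent=50` upper bound, and the jar name are illustrative assumptions, not part of the explanations above; note that the `G1NewSizePercent` family is gated behind `-XX:+UnlockExperimentalVMOptions`.

```shell
# Hypothetical server launch line combining the G1 options discussed above.
# Heap size and server.jar are placeholders for your own setup.
java -Xms10G -Xmx10G \
  -XX:+UseG1GC \
  -XX:+UnlockExperimentalVMOptions \
  -XX:G1NewSizePercent=40 \
  -XX:G1MaxNewSizePercent=50 \
  -XX:G1ReservePercent=20 \
  -XX:MaxTenuringThreshold=1 \
  -XX:G1HeapRegionSize=8M \
  -jar server.jar nogui
```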
Some of the things mentioned have already been implemented. 1.13 multithreaded chunk generation, though it is somehow still slower than 1.12.2, and around 10 times slower than my own version's world generator despite that not being multithreaded at all (I presume that by "multithreading" 1.13 simply shoved chunk generation onto another single thread, so it still bottlenecks as if one thread were being used; it just reduces any lag spikes on the main server thread). 1.8 likewise multithreaded mob AI and 1.14 multithreaded lighting, but again, the poor optimization of the underlying code has more than offset any improvements made by multithreading. For example, compare the time it takes AMIDST to map the same seed on 1.12.2 and 1.13+: 1.13's overly complicated code is far slower, up to 35 times in fact (I doubt they have improved it much since then, certainly not back to where it was), and that is just the biome generator. These quotes by the creator of Optifine sum things up:
Minecraft 1.8 has so many performance problems that I just don't know where to start.
The general trend is that the developers do not care that much about memory allocation and use "best industry practices" without understanding the consequences. The standard reasoning being "immutables are good", "allocating new memory is faster than caching", "the garbage collector is so good these days" and so on.
Rendering system [and world generation, etc since then]
The old Notch code was straightforward and relatively easy to follow. The new rendering system is an over-engineered monster full of factories, builders, bakeries, baked items, managers, dispatchers, states, enums and layers. Object allocation is rampant, small objects are allocated like there is no tomorrow. No wonder that the garbage collector has to work so hard.
The multithreaded chunk loading is crude and it will need a lot of optimizations in order to behave properly. Currently it works best with multi-core systems, quad-core is optimal, dual-core suffers a bit and single-core CPUs are practically doomed with vanilla. Lag spikes are present with all types of CPU.
Doubt they're gonna improve it any time soon; if anything they're making Minecraft worse in 1.13. sp614x showed us how much worse their object allocations are in 1.13 compared to 1.12 :/
Note that the "chunk loading" mentioned here, and in general, actually refers to chunk rendering on the client, where chunks are loaded far faster than they can be rendered. IMO, actual chunk loading from disk (which has been multithreaded for a very long time) is negligible, worlds load in 1.6.4 so fast the internal server doesn't even output the progress, just "preparing start region for level 0", which is followed by "preparing spawn area %" once per second, so it takes less than a second to load an existing world - no need for a fancy loading screen like 1.14 added; likewise, the game only takes a couple seconds to launch - much faster than 1.13 took to load an existing world - which is all because of how awful Mojang's code has become since then (save files in 1.13 are only slightly larger than before so this can't explain the increase in load time.
In fact, some of the issues may even be due to badly implemented multithreading. For example, if a thread is forced to wait for another thread to finish what it is doing, it can significantly degrade performance; if it is not made to wait, the result can be various glitches. For example, when you press "Save and Quit to Title" in singleplayer, the internal server is told to shut down, but it doesn't necessarily do so right away; in fact, the client can even try doing things like deleting the world while it is still open, which will obviously fail, and/or the server will recreate deleted region files with chunks that were saved since the last save. Hence bugs like MC-315, MC-84800, and MC-150202 (the last two occur because the client kills the server before it can finish saving; a very simple fix is to force the client to wait for the server to fully shut down before it can do anything else, as I did years ago, with none of these issues ever occurring from anything short of a crash. This is also a case where you do want to make one thread (the client) wait for another thread (the server) to terminate, to avoid data corruption).
Enables Java heap optimization. This sets various parameters to be optimal for long-running jobs with intensive memory allocation, based on the configuration of the computer (RAM and CPU). By default, the option is disabled and the heap is not optimized.
Epsilon is a do-nothing (no-op) garbage collector that was released as part of JDK 11. It handles memory allocation but does not implement any actual memory reclamation mechanism. Once the available Java heap is exhausted, the JVM shuts down.
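As an illustration, Epsilon can be enabled with the following flags (it is gated behind the experimental-options switch; the 2 GB heap size and `app.jar` name are placeholders, not recommendations):

```shell
# Run with the no-op Epsilon collector: memory is allocated but never
# reclaimed, so once the 2 GB heap is exhausted the JVM shuts down.
java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx2G -jar app.jar
```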
Maybe the biggest and the ugliest problem is the memory allocation. Currently the game allocates (and throws away immediately) 50 MB/sec when standing still and up to 200 MB/sec when moving. That is just crazy.
Allocating new memory is really faster than caching (Java is even faster than C++ when it comes to dynamic memory), but getting rid of the allocated memory is not faster and it is not predictable at all. Minecraft is a "real-time" application and to get a stable framerate it needs either minimal runtime memory allocation (pre 1.3) or controllable garbage collecting, which is just not possible with the current Java VM.
tldr; When 1.8 is lagging and stuttering the garbage collector is working like crazy and is doing work which has nothing to do with the game itself (rendering, running the internal server, loading chunks, etc). Instead it is constantly cleaning the mess behind the code which thinks that memory allocation is "cheap".
GC is useless for gamedev because the problem it solves (automatically freeing memory of unused objects to make room for new objects) doesn't happen in gamedev (where almost no heap allocations happen after initial setup). Unless you're a shitty developer like Notch and don't care about performance at all - then Java, with its undisableable garbage collector, is a perfect match for you.
My maps are 8192 and heights of 16384. Like I said before, it doesn't always run out of memory; it is random when your version does it. Sometimes it will happen on the erosion process, sometimes it's the drop-dirt process, and other times it's random in the biome generator. I've yet to be able to get to the ore generator without this happening. In budda's and Warlander's versions this hasn't happened to me, not even when I do a 16384 x 16384 map. I have 16 GB of DDR3 in my machine and every stick is in 100% working order and flawless. 6 GB was devoted to Java programs and the rest left to Windows.
LCGs are fast and require minimal memory (one modulo-m number, often 32 or 64 bits) to retain state. This makes them valuable for simulating multiple independent streams. LCGs are not intended, and must not be used, for cryptographic applications; use a cryptographically secure pseudorandom number generator for such applications.
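As a sketch of how little state an LCG needs, here is a minimal one in shell arithmetic. The constants (a=1664525, c=1013904223, m=2^32) are the classic Numerical Recipes values, chosen here as an assumption for illustration; java.util.Random uses a different 48-bit variant.

```shell
# Minimal 32-bit LCG: state' = (a*state + c) mod m.
# The entire retained state is the single variable "state".
lcg_next() {
  state=$(( (1664525 * state + 1013904223) % 4294967296 ))
  echo "$state"
}

state=42      # seed
lcg_next      # prints 1083814273
lcg_next      # prints 378494188
```

Running several independent streams is then just a matter of keeping one such word of state per stream; and as the quoted text warns, output like this is trivially predictable, so it must never be used for cryptographic purposes.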