Re: Fluidsim 45 BETTER Crack


Joao Charlesbois

Jul 16, 2024, 7:14:28 AM
to consaunelev

Looking around the web, I found an article that compares the performance of fluid simulations on CPU and GPU. While the article itself is a bit dated, I assume that although CPUs and GPUs today are faster, the performance ratio between the two has stayed similar enough to keep it relevant. Actually, comparing their hardware (Core 2 Quad Q6600 and GeForce 8800 GT) to today's hardware (e.g. i5 6600K and GTX 1070), the CPU has a 230% performance gain, while the GPU has a staggering 1670% performance gain. That would make the article even more relevant today than when it was written (by a factor of more than six).

Now, using the biggest grid they tested (GS, 128^3; page 7, table 2), the GPU simulation was over 3000 times faster than the CPU simulation. Even assuming perfect scaling across 4 cores (they used only a single thread), that is still an over 700-fold gain on the era's hardware; scaled to today's hardware, it would approach 5000-fold. Given the results for the smaller grids, the gap would only widen with grids larger than 128^3.
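To make the arithmetic above explicit, here is a back-of-the-envelope sketch. It assumes perfect 4-core scaling and reads the quoted percentage gains as plain multipliers (which is how the ~5000-fold figure appears to be derived); the numbers are rough, not measurements:

```python
# Back-of-the-envelope check of the speedup figures quoted above.
gpu_vs_1core = 3000                  # GPU vs single-threaded CPU (GS, 128^3)
gpu_vs_4core = gpu_vs_1core / 4      # -> 750x, assuming ideal 4-core scaling

cpu_gain = 2.3                       # Q6600  -> i5 6600K  ("230% gain")
gpu_gain = 16.7                      # 8800 GT -> GTX 1070 ("1670% gain")
ratio_shift = gpu_gain / cpu_gain    # ~7.3x further in the GPU's favor

gpu_vs_4core_today = gpu_vs_4core * ratio_shift   # ~5400x
```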

What's the use of rendering the graphics at 60FPS if fluid simulation runs at 10 steps per second, effectively making the game look like it's running at 10FPS? Wouldn't it be better if I rendered graphics at 30FPS, but could do the simulation at 30 steps per second as well?

Let's go through the paper you mentioned and try to list the factors that will affect your decision on whether to use the GPU or the CPU for simulation. I will also add some comments with my opinion on each factor.

A GPU program runs in a SIMT (single instruction, multiple threads) model. In this model, if some threads stall due to memory access or divergence, the thread scheduler will schedule another set of threads to run to hide the memory latency.

A warp (a group of 32 threads) in a GPU program can execute in parallel as long as all its threads follow the same code path. Otherwise the program diverges and the execution of each branch is serialized. Divergence will hurt performance.

In practice, I will always implement a CPU version because it is easier to debug and it will be useful as a baseline. Then I will profile it to locate the bottleneck and decide whether to have a GPU version.

The solution we chose was to build differently depending on the result of import mpi4py and import pythran with try/except statements. It is far from optimal, but it works. Of course, it forbids the use of isolated builds and therefore pyproject.toml. Note that it is similar to how conda works (the package installed by a command depends on what is already installed in the environment).
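A minimal sketch of that try/except detection, as it might appear in a setup.py (the helper name and the variant tags here are made up for illustration):

```python
# Hypothetical helper for a setup.py that selects a build variant based
# on which optional build dependencies are importable, as described above.

def detect_build_variant():
    """Return a tag naming the build variant to use."""
    try:
        import pythran  # noqa: F401  # Pythran available: build native extensions
        has_pythran = True
    except ImportError:
        has_pythran = False

    try:
        import mpi4py  # noqa: F401  # MPI bindings available: enable MPI support
        has_mpi = True
    except ImportError:
        has_mpi = False

    if has_pythran and has_mpi:
        return "pythran-mpi"
    if has_pythran:
        return "pythran"
    return "purepy"
```

The drawback mentioned above follows directly: the result depends on the environment at build time, so the same `pip install` command can produce different packages on different machines.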

If you need different wheel filenames, you need to argue for a revision to the wheel spec. If you can explain how the package index would store these variant wheels without a filename spec change, then that would probably clarify how installers like pip could be told which variety to choose.

For example, one could publish both projects MyProjectPure (pure Python) and MyProjectC (built with C extensions). To link them somehow at the metadata level, they could both announce Provides-Dist: MyProject. This way, tools (installers) could react appropriately and, for example, make sure only one or the other is installed.
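A hypothetical sketch of what the core metadata of the two variants might look like under this scheme (Provides-Dist exists in the core-metadata spec, but note that installers do not currently act on it):

```
# MyProjectPure METADATA (pure-Python variant)
Metadata-Version: 2.1
Name: MyProjectPure
Version: 1.0
Provides-Dist: MyProject

# MyProjectC METADATA (variant with C extensions)
Metadata-Version: 2.1
Name: MyProjectC
Version: 1.0
Provides-Dist: MyProject
```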

It is strange (and unfortunate) to have to do this by hand, but it could work. For example, fluidsim-purepy could depend on fluidfft-purepy, and fluidsim[mpi] (here just an extras_require, since there is no extension using MPI in fluidsim) could depend on fluidfft-mpi.

However, there is only one pyproject.toml in the root directory of a repository, so only one variant of the package can be installed directly from the repository (with something like pip install git+https://github.com/serge-sans-paille/pythran#egg=pythran).

The real problems here are not about providing a mechanism to do this (as you can see, such a mechanism already exists), but more about recording metadata about it. How do you tag the wheels? How do you tell pip which version you want? How do you declare dependencies on this?

I understood that pyproject.toml and isolated builds were introduced in particular to avoid setup_requires and having to specify build dependencies in the file setup.py. I also read that there are potential problems with setup_requires. I would prefer something cleaner than using both [build-system] requires plus setup_requires.

You mention get_requires_for_bdist_wheel and PEP 517. It would be good to provide, somewhere, simple but realistic examples of how to use these things. I can only find the PEP itself on the web (maybe I was just not able to find the right resource), and its "Build backend interface" part is not simple. For example, I guess it should not be very difficult to reproduce your example (a build dependency depending on an environment variable) with get_requires_for_bdist_wheel, but how?

Yes, an environment variable is not sufficient. Something like pip install my-super-package[a-string-about-the-variant] would be nice. Of course the name of the wheel should somehow contain a-string-about-the-variant.


Then you would declare that fluidsim depends on fluidsim-purepy-backend (or just have fluidsim-purepy-backend ship with fluidsim unconditionally), and have pythran and pythran-with-mpi extras that add dependencies on the other two.

If we had Recommends-Dist and a system for saying what groups of packages can satisfy a given dependency, you could get the default values you want with the correct fallbacks. For example, you could add a dummy package fluidsim.backend, which can be satisfied by either fluidsim-pythran-backend or fluidsim-purepy, and a Recommends-Dist on fluidsim-pythran-backend, so that if fluidsim-pythran-backend is unavailable or hard to install for whatever reason, you fall back to fluidsim-purepy. The extras pythran and pythran-with-mpi would add hard dependencies.

As you can see in the video and in the screenshot, these strings are generated and they remain almost intact until they fall together.
I would expect them to separate or act in a more "random" way...

In my experience FLIP has always had an element of stringiness to it, but there is some stuff you can try to help minimize it. I don't know what your machine constraints are, but just lowering the particle separation will help; it's hard to tell from your clip, but it looks a bit lowres to me. You can also play with the droplet detection settings on the particle tab, which can help break up the splashes a bit more. And lastly, if that sub is smashing through the water quickly, you could add an extra substep so the solver handles the collisions a bit better.

You can see the speed of the submarine in the video I attached in my previous post.
The sim is in fact low res 0.08 because I am still in the set-up phase.

What do you mean by playing with the droplet detection? Right now my droplet settings are the default ones (min 0.5, max 1, blend with fluid, velocity blend 0.2).

OK, well then you can expect it to get better and less stringy as you upres after your set-up phase. On a lowres sim there is very little detail in the underlying velocity grid, so it won't advect the particles with much variation.

Well, particle separation is relative to the size of your domain, so you can't just compare 0.065 with 0.15. Igor's domain is a lot bigger, so there will be more voxels and particles even with 0.15.
You can test it for yourself: scale your domain up to double the size but keep the separation the same, and see what happens.

Had a look at your scene, and I think you can get a better result by not using the splash tank setup you currently have there. I made an example setup for you with a flat tank and an optional velocity volume if you need waves. Then I made some changes to your collider, like adding timeblends so you get the correct sub-frame positions for your collisions. I only ran a very lowres test on a few frames to make sure everything works, so you will have to run the sim yourself to see if the results are what you are looking for.

As my game has a stationary camera, it would be easy for me to add pre-rendered full-screen fluid simulations on top of the image, and it should match perfectly. I decided to use the Maglev train sequence as a prototype scenario.

I was going to use a piece of software called EmberGen. It is realtime, but in order to get the fluid simulations into your game engine, you need to export them as pre-rendered flip-book animations. A flip-book animation is simply one image containing all of the animation frames.
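As a rough sketch of how such a flip-book is sampled at runtime (the row-major, top-left-first grid layout here is an assumption; tools and engines may lay frames out differently):

```python
def flipbook_uv(frame, columns, rows):
    """Return the (u, v) offset and (u, v) scale of one frame in a
    flip-book texture laid out row-major, top-left first.
    Uses the convention where v=0 is the top of the image."""
    col = frame % columns
    row = frame // columns
    scale = (1.0 / columns, 1.0 / rows)
    offset = (col * scale[0], row * scale[1])
    return offset, scale

# e.g. frame 5 of an 8x8 flip-book sits in row 0, column 5
offset, scale = flipbook_uv(5, 8, 8)
```

A shader would apply this as uv * scale + offset, stepping the frame index each tick; the limited number of cells in the sheet is exactly why the durations are so short.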

But at this point I remembered that I had seen a realtime, AI fluid simulation smoke and fire plugin for Unity, Zibra Smoke & Fire. Surely the fluid simulations would be better rendered in realtime, not baked into choppy flip-books with very limited duration! Maybe the realtime tool was worth a try? Being an AI tool, it would also be a great fit for this project!

Luckily, I had a direct connection to the Zibra guys, and they showed me some demos of their AI fluid simulation running on the iPhone. It ran amazingly well! The quality was great, the frame rate butter smooth. It is amazing what you can do with AI! I have never seen 3D fluid simulations as complex as this on a phone. Ever.

When I loaded up Zibra for the first time (after enabling it in the URP renderer), I was greeted with a simple demo scene that has some fire and a teapot. The fire in this scene looks amazing and runs blazing fast. I asked Zibra what evil AI magic they were doing to make this so fast: how was the AI used to run their simulations?

This element is then linked to the solver component. I also created a ground collision shape and a force field that adds random forces & twirls to the smoke to make it look more interesting and natural.

It really did not take me all that long to set this up and have it going in the scene. Most of the time was actually spent on matching the smoke color to the scene. It was pretty straightforward with the tools provided. Zibra Smoke & Fire has a couple of dozen sliders for controlling the look of the smoke, and I did not get to thoroughly study all of them yet in this quick test.
