The bottom line is that all displays are limited. They have three specific lights they can project, and a limited range of emission for each of those lights. ACES simply applies a brute-force 3x3 matrix and a transfer function, and all values are clipped to the display's gamut.
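To make the "3x3 plus transfer function plus clip" claim concrete, here is a minimal sketch of that kind of pipeline. The matrix and the 2.2 gamma are illustrative stand-ins, not an actual ACES Output Transform, which is considerably more involved:

```python
import numpy as np

# Illustrative wide-gamut -> display-primaries matrix (made up for this
# sketch; NOT an official ACES matrix).
M = np.array([
    [ 1.70, -0.62, -0.08],
    [-0.13,  1.14, -0.01],
    [-0.02, -0.13,  1.15],
])

def encode_for_display(rgb):
    """Scene-referred RGB -> display-referred values, the brute-force way."""
    rgb = M @ np.asarray(rgb, dtype=float)  # 3x3 primary conversion
    rgb = np.clip(rgb, 0.0, 1.0)            # hard clip to the display gamut
    return rgb ** (1.0 / 2.2)               # simple gamma as the transfer function

# A saturated, intense scene value lands outside the display gamut and
# simply clips to the display's red primary, losing all ratio information:
out = encode_for_display([2.0, 0.0, 0.0])
```

Any scene value beyond the display's range ends up pinned to the gamut boundary, which is exactly the clipping problem discussed below.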
Way back when, we tried wider gamut working spaces such as BT.2020 for Filmic. It sucked without comprehensive gamut mapping. So much so that I was flatly against releasing anything that generates ass looking imagery by default.
Think about a series of lasers displaying colours on a wall; they are the most saturated and intense versions of pure spectra. Now imagine trying to paint a watercolour image of those lasers. Given the limitations of the output medium (watercolours and paper), we need to figure out how to form an image of something that is completely inexpressible on a 1:1 basis.
This is a similar problem to the gamut mapping conundrum. A display may have a more limited range of intensities (SDR vs EDR for example) and a different or more limited range of saturation and hues that can be expressed. This is a nontrivial problem to solve.
The only correct approach today, given that all displays have limitations, is to ensure that the scene-referred and display-referred primaries are the same; e.g., use P3 primaries for the scene if you are going to display on P3 displays.
There are a plethora of approaches to gamut mapping out there. But even when we have the same primaries, we still have a huge problem: the limited dynamic range of a display cannot fully express the ratios in the colour rendering volume of the rendering space. Filmic is one half-assed attempt to resolve this, and it includes a gamut compression of the higher values down into the smaller gamut of the display.
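The "compress the higher values down" idea can be sketched with a toy rolloff curve. This is not Filmic's actual curve, just a hedged illustration of why a rolloff preserves information that a hard clip destroys:

```python
# Toy highlight compression: values below `shoulder` pass through; values
# above are rolled off asymptotically toward 1.0 instead of hard clipping.
# The shoulder position is an arbitrary choice for this sketch.
def compress_highlights(x, shoulder=0.8):
    if x <= shoulder:
        return x
    headroom = 1.0 - shoulder
    t = x - shoulder
    # Rational rolloff: continuous at `shoulder`, approaches 1.0 as x grows.
    return shoulder + headroom * t / (t + headroom)

# A hard clip collapses all over-range ratios to the same value; the
# rolloff keeps them ordered and distinct:
clipped = [min(v, 1.0) for v in (0.5, 1.5, 3.0)]
rolled  = [compress_highlights(v) for v in (0.5, 1.5, 3.0)]
```

With the clip, 1.5 and 3.0 become indistinguishable; with the rolloff, the ratio between them survives, just compressed into the display's range.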
These are the sorts of lower-level image formation questions that few imagers are exposed to, but should be asking. They are the fundamentals of image formation that previous generations never had to confront in the way imagers do today.
Ask yourself what your expectation is first. If your display is sRGB and you rendered using laser-like primaries such as BT.2020, do you expect the image to be fully formed and well crafted, or to have disruptions of saturation and hue, as well as quantisation and posterisation problems? Would you expect an sRGB image to be a fully formed image in its own right?
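The quantisation/posterisation problem mentioned above is easy to demonstrate: when a wide range of scene values is squeezed into a small slice of the display's encoding range, very few distinct code values remain, which shows up as banding. A minimal sketch, with the 10% squeeze factor chosen purely for illustration:

```python
# Quantise a [0, 1] value to an n-bit code value and back.
def quantise(x, bits=8):
    levels = (1 << bits) - 1
    return round(x * levels) / levels

# A smooth ramp of 256 distinct scene values...
ramp = [i / 255 for i in range(256)]

# ...that only occupies 10% of the display range after a (hypothetical)
# range reduction. Most of the 256 steps collapse onto shared code values:
squeezed = [quantise(v * 0.1, bits=8) for v in ramp]
distinct = len(set(squeezed))
```

The 256 original steps survive as only a couple of dozen distinct output values, i.e. visible posterisation steps.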
Remember that studios have the upside of a colourist. That is, not just one, but often a colourist and an assistant, whose sole job is to hammer the colours into shape based on the guidance of the director of photography.
However, in doing so, both effects ignore the actual mixtures and obfuscate the idea that the working space values mean something. Worse, that Digital RGB effect actually makes it nearly impossible to mix some colours at the display.
Well, for me this version does seem to solve some of my problems, including what you said about the clipped, over-saturated color. This is what I got with the original version on GitHub, this disgusting over-saturated color:
My concern is only with folks who can see the issues and, if they encounter them, try to explain what they are and why they happen. For folks looking to move to different systems, it is valuable to be well informed of potential issues, as sometimes it requires investments of time and, often, money.
The person that released this config is a famous tutorial maker in the Chinese Blender community; almost all Chinese Blender users would know about him. He is like the Blender Guru of China. So I think it is fine.
Fluid A is a thick, viscous, slow-moving fluid. Fluid B is a free-flowing, golden-ish liquid.
1. Fluid B flows in with a bit of turbulence.
2. It hits Fluid A with force/impact.
3. Fluid A solidifies and forms cracks.
4. Fluid B flows inside the now-fractured Fluid A.
5. The increased pressure of Fluid B flowing inside the fractured Fluid A disintegrates it.
6. Fluid A converts to particles and flies away.
You could use the Cell Fracture add-on included with Blender:
Here I used Cell Fracture to break a cube, then used Cell Fracture again on the results of the first fracture.
I'm using Freestyle because that is what they want for this type of render. I'll try to explain what I'm doing so that my problem is clear: an 8-part arch is formed by the translation of 4 arches in a concentric direction. The part obtained by this translation is a concentric arch (above) and a faceted vault (below). I have to animate, as shown below, the 4 arches translating, showing their intersection in blue, and then separate them apart.
The animation itself is not really the problem; the problem is that I don't know how to find a "clean" way to do it. Right now I'm using moving cubes as booleans to make the arches appear. But as you can see, Blender booleans quickly become limited and janky: artifacts tend to appear where they should not, and even with the fix proposed by Blender (self-intersection, etc.), it does not work and they appear multiple times in the animation.
Add a Mask modifier to hide those verts; that way we can still keep the solid mesh for Boolean operations, but it can look hollow. You might have to invert the Mask modifier with that arrow button, like I did.
Duplicate this mesh twice and rotate the copies 45 and -45 degrees on the Z axis. Use two Boolean modifiers to subtract them from the main one. Make sure you take the vertex group off of the bool objects' Mask modifiers, otherwise it will un-mask the "carved out" areas:
- Early on, we identified that Intersect 3D was likely the tool we needed to generate the output we wanted. However, Intersect 3D requires both inputs (In our case, 1. the 3D polygon rock formation, 2. the 2D line) to be closed multipatches.
- Trying to generate a closed multipatch from the 3D polygon shapefile was not possible through a combination of "Layer 3D To Feature Class" and "Enclose 3D". It kept giving me an open multipatch (Is Closed 3D returns false).
- From the 2D polyline, generating a closed multipatch that meets my requirements was challenging, as it was not clear which tool produces a closed multipatch. Only Buffer 3D explicitly states that it produces one.
Can I achieve what I want purely in ArcGIS Pro? Currently, I have to pre-process the rock formation model in Blender (using a custom add-on) before importing back into ArcGIS Pro. If Blender or the custom add-on no longer works, my current solution is no longer valid.
Buffer 3D: gives a closed multipatch around the user-drawn line, which I managed to use in Intersect 3D. This results in a cross-sectional tunnel (since the minimum buffer quality is 6, which gives a 6-sided buffer). This is useful in another use case, just not this one.
I suggest that you check out a mining/geology/mapping software system such as Maptek's Vulcan Software. It's designed for this sort of work and has been around for 40+ years. ArcGIS is beginning to catch up on some of the 3-D stuff but still has a ways to go relative to systems such as Vulcan.
Cartesian Caramel, a renowned 3D Artist and Animator whose spectacular creations always brighten my day and make me forget about earthly concerns for a few minutes, has recently shipped a new incredible effect designed to be used for educational purposes.
Leveraging his favorite 3D software, Blender, and a Venom model from Fortnite, the artist has set up a neat Venom formation FX that makes it look like Marvel's iconic baddie gradually appears out of nowhere. According to Cartesian, both Shader Nodes and Geometry Nodes were employed in Blender to achieve the final appealing result. You can download the effect free of charge by visiting the author's Gumroad page; please note that you'll need to upgrade the software to version 4.1 or higher in order for the setup to work.
Earlier, the artist also demonstrated a Geometry Nodes-based rigging system, a method for deforming fractured objects in Blender, an amazing procedural spider robot animation, a delicious baking bread animation, a cool-looking robotic cable arm, and more. You can check out all of these projects by visiting the author's Twitter page.
With the Formations panel of the Formations tab you can import, generate, create, update, or remove formations; select, deselect, and sort drones in formations; and add formations to your storyboard.
The + button creates a new formation. Remember that formations are essentially sub-collections in your Formations collection, consisting of (stationary or animated) markers that define the desired positions of the drones in a particular scene of your drone show.
Creates a formation that contains one empty mesh for each drone, placed exactly at the current position of the drone. You can use this option to create a "snapshot" of the drone swarm at a given frame and use it again as a formation later on in the show.
(Only in Object mode) Creates a formation that contains the currently selected objects. If the locations of the objects were animated, the formation will be animated as well. Removing any of these objects from the scene will also remove them from the formation.
The Select button on the Formations panel adds the selected formation to the current selection in Blender. Similarly, the Deselect button removes the markers of the selected formation from the current selection.
Since formations may contain meshes as well as vertices of meshes as markers, you may not necessarily see the result of the selection immediately. If you are in Edit mode and you attempt to select a formation that contains meshes, you need to switch back to Object mode. Similarly, if you are in Object mode and you attempt to select a formation that contains vertices as markers, you need to switch to Edit mode to be able to interact with the selected vertices.