This option creates a new scene with the same settings and contents as the active scene. However, instead of copying the objects, the new scene contains links to the collections in the old scene. Therefore, changes to objects in the new scene will result in the same changes in the original scene, because the objects used are literally the same. The reverse is also true.
To choose between these options, it is useful to understand the difference between Object and Object Data. The choices for adding a scene, therefore, determine just how much of this information will be copied from the active scene to the new one, and how much will be shared (linked).
Yes, it definitely depends on system-specific parameters, but even more it depends on which concrete data structures have been filled within Blender: whether objects make use of instancing or not, how many mesh attributes are maintained, and so on. So there is definitely no fixed number that describes how many triangles Blender can handle.
As already said, there is no general hard limit, so it is as @Renzatic says: think rather in terms of tweaking your scene to be more efficient, if you are already in need of it. There should be plenty of YouTube videos on instancing, linked duplicates, collection instances, etc. if you haven't used these so far. Your railing-pole merging, for example, is a mixed beast. While merging objects can help minimize the overhead of low-level API context switches, I think instancing should definitely outperform it.
I guess my biggest asset is 17 million triangles. It works without any problems, and the tri count could really be bigger without any issue.
Simply having a large number of polygons in the viewport is not, by itself, something that will cause poor performance in Blender.
However, if you make an instance copy, Blender only copies the Object data, so you still end up with Tree.001, but it keeps a reference to the same geometry data, which would still be "box". You could keep doing that and have Tree.002, Tree.003, etc., and be able to position/scale/rotate each Tree as you please, but it is all based on the same geometry data, i.e. "box".
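The way linked duplicates share one mesh datablock can be sketched in plain Python. The `Mesh` and `Object` classes below are hypothetical stand-ins for illustration, not Blender's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    """Stand-in for a mesh datablock (the Object Data)."""
    name: str
    vertices: list = field(default_factory=list)

@dataclass
class Object:
    """Stand-in for an Object: its own name and transform, shared data."""
    name: str
    data: Mesh                         # reference to shared mesh data
    location: tuple = (0.0, 0.0, 0.0)

box = Mesh("box", vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0)])

# Each Object carries its own transform, but both point at the SAME mesh.
tree_001 = Object("Tree.001", data=box, location=(0.0, 0.0, 0.0))
tree_002 = Object("Tree.002", data=box, location=(5.0, 0.0, 0.0))

# Editing the geometry through one object is visible through the other,
# because the mesh datablock is literally the same.
tree_001.data.vertices.append((0, 1, 0))
print(len(tree_002.data.vertices))  # -> 4
```

The memory win follows directly: a thousand trees cost a thousand small Object records but only one copy of the heavy vertex data.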
The most complex mesh I use in the viewport is over 16 million polygons. It is a single object; I bake it to a normal map and reduce the poly count to something like 262k, or I split a large 4096x4096 heightfield into 1024x1024 or 512x512 pieces and simplify the distant pieces.
When I put meshes into a scene, I limit the total polygons in individual meshes to 8M per layer. That is about 1 GB of RAM for geometry. Sure, I can instance some meshes, but I limit the geometry. The exception is fluid simulation, where I can put in a lot of geometry, up to 512x512x512, as much as there is RAM left for the layer.
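The 8M-triangles-per-gigabyte budget above implies a per-triangle cost of roughly 130 bytes. That figure is inferred from the poster's own numbers, not an official Blender value, but it is a quick sanity check:

```python
# Memory budget implied above: 8 million triangles ~ 1 GiB per layer.
triangles = 8_000_000
budget_bytes = 1 * 1024**3          # 1 GiB

bytes_per_triangle = budget_bytes / triangles
print(round(bytes_per_triangle))    # -> 134 bytes of geometry data per triangle
```

Actual per-triangle cost in Blender varies with the number of custom attributes, UV layers, and modifiers on the mesh, so treat this as an order-of-magnitude estimate.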
Shift + right-click to place the 3d cursor somewhere and then menu View > Align View > Center View to Cursor.
I have assigned Shift+C to this command.
The original command that used to be on Shift+C (Center Cursor and Frame All) is of no use in very large scenes.
I would also increase the viewport end clipping distance. Unfortunately this has to be done manually repeatedly for each 3d viewport.
Also, you will probably have to set the end clipping distance in your render camera settings.
When using the mouse wheel to zoom in, there is often a point where navigation starts to feel broken.
There is an invisible point in space around which orbiting and zooming operates.
When you get too close to that point is when the problems start.
You can make that point in space visible by going to the main settings > Input > NDOF > Show Navigation Guide.
A cyan colored dot will appear (eventually) in the 3d viewport to indicate the point around which you are orbiting and zooming.
If you zoom in close enough to make that point disappear, I can guarantee that panning the view with Shift+Middle Mouse will feel extremely broken.
ALT+MiddleClick on a surface to move the point to the clicked location and center your viewpoint there.
When you have an object whose origin point is miles away from the majority of its polygons, moving it will feel either 10x slower or 10x faster than the speed you move your mouse. A workaround is to place the 3d cursor in the vicinity of the majority of the polygons and temporarily change the Transform Pivot to the 3d cursor.
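The pivot trick above relies on the standard transform-about-a-point construction: translate so the pivot sits at the origin, apply the transform, then translate back. A minimal 2D sketch in plain Python (not Blender's API) shows why the choice of pivot changes how far points move:

```python
import math

def rotate_about_pivot(point, pivot, angle):
    """Rotate a 2D point around an arbitrary pivot by `angle` radians."""
    px, py = point[0] - pivot[0], point[1] - pivot[1]  # move pivot to origin
    c, s = math.cos(angle), math.sin(angle)
    rx, ry = px * c - py * s, px * s + py * c          # rotate about origin
    return (rx + pivot[0], ry + pivot[1])              # move pivot back

# Rotating (2, 0) by 90 degrees around the nearby pivot (1, 0) lands at (1, 1);
# the same rotation around a distant pivot would sweep the point much farther.
print(rotate_about_pivot((2, 0), (1, 0), math.pi / 2))
```

The farther the pivot (object origin) is from the geometry, the larger the arc every vertex travels for the same input, which is exactly the "10x faster/slower" feel described above.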
First of all, I am really new to all of this. This is my first time, so it's really confusing me: is it better to create a scene and then import it into Unity, or is it better to just create objects like characters and chairs in Blender and build the scenes in Unity? I just want to recreate a scene of my school and include it in my final project as a Unity game, where it will have no function other than walking around, some jumping, and fighting, because my project is not based on the game; it will just support it.
There's nothing wrong with creating your scene in Blender and then importing it into Unity; in fact, you can directly import the .blend file and get the lighting just as you had it in Blender. However, Unity and Blender render the scene differently, so none of the post-processing effects will transfer to Unity, and you'll have to tweak a lot of things in Unity to get the look you want. I suggest you make your assets in Blender if you feel it's faster to work with, then export them to Unity and set up the rest as you prefer. It can be a bit confusing at first, but it all comes down to your personal preferences.
Example: You are working on a simple scene that has a room with a table, chair, and character. In Blender, create and save a file with the complete room model (without the furnishings). Next, make and save a table file with the table model and all its details (perhaps a couple of knives, forks, and plates). Make and save a chair .blend and then a separate character .blend... This continues until you are done with all the objects needed for your scene. Of course, all of these would be saved in a "room scene" folder (outside of your Unity project). Export all the models as .fbx files. Finally, import these into Unity with their (named) materials, import your textures, put the objects together in your scene, and set up lighting, cameras, particle FX...
Example 2: You are working on a game that programmatically generates levels (e.g. dungeons, mazes). The best approach for this type of game is to create modular components that get assembled by scripting. These components would be something like a hallway piece, a room piece, a turning hall, etc. To make these, model a base for the wall and floor, maybe with structural details (windows, doors...), then save that as a file and, of course, export it to .fbx for Unity. Make some detail objects like those mentioned in the first example, then import the models into Unity. Create prefabs for hallway, room, etc. by parenting the details to the base, then instantiate the prefabs from a script.
However, it immediately made me think of a YouTube video: in the Boundary Break episode about Telltale Games's The Walking Dead Season 1, one of the original developers talks a bit about their experience of creating whole scenes for the game in a 3D editor rather than in their game engine.
If this is a project that other people will eventually work on, there's a good reason to create individual assets in Blender but assemble the scene in Unity: this is likely to make the project more accessible for other developers who have Unity experience but not Blender experience. The Unity Editor is targeted at game designers, level designers, and developers. Blender is targeted at 3D artists. Many Unity developers who are comfortable assembling scenes, setting up materials, etc in the Unity Editor may not have any idea how to use Blender and may struggle with learning its more complex user interface and material settings. Team members who are learning Blender for the first time may get frustrated with online tutorials, as the Blender team has completely redesigned the entire UI several times and it's extremely difficult to follow older tutorials in the newest interface.
The instant visual feedback of a real-time renderer is just so valuable when it comes to crafting a shot and scene. As a cinematographer myself, it drives me crazy trying to compose a shot with all of the render intensive elements turned off.
When it comes to scene layout, like the overall blocking and placement of objects in the scene (even including lighting and camera placement), I definitely prefer to do all of this in Unreal as opposed to Blender. This is as a result of one major limitation.
When you import your geometry via .fbx (or any other file type, for that matter), your object will come in with its origin point set to the world origin it had in Blender, not one assigned by Unreal. This means that you could theoretically set up your whole scene in Blender and import one giant .fbx file.
However, this would mean that every single object no matter the placement would share the same origin point. So moving any one specific object could be a giant headache, as it might be a large distance away from the world origin point.
I think it makes way more sense to make one object at a time in Blender, and just send it over to Unreal when the time is right. This way you can set each object up in Blender with an appropriate origin point.
There are few things as creatively invigorating as simply painting plants onto geometry as though you were using a paintbrush, covering hills and plains with pure wavy photoreal grass, and then still seeing 60 fps up in the corner for the render speed. Magic.
I am working in Blender and trying to follow some YouTube video tutorials. My progress comes to a halt when I am unable to find one particular node that is supposed to be in Geometry Nodes: "Scene Time". I have gotten pretty good at slowing down and pausing the tutorial so I can see what is being clicked and typed, but I am unsuccessful when attempting to locate the elusive "Scene Time" node that is supposed to be in Geometry Nodes. If anyone can explain why I am unable to locate this node and/or guide me toward finding it, it would be greatly appreciated. Thanks