2D Collision Gizmos


Jenn Smotherman

Aug 3, 2024, 4:12:34 PM8/3/24
to cirlatingsand

If you mean Gizmos.DrawSphere, you can't detect it with a ray (and it has no collider); gizmos are only for visual debugging and cannot interact with physics.
They also won't appear at all once you build your game. Again: gizmos are visual debugging aids for use in the editor only.
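To make the distinction concrete, here is a minimal hypothetical sketch (class name and field are made up): the gizmo call only draws in the editor, while an actual SphereCollider component on the GameObject is what a raycast can hit.

```csharp
using UnityEngine;

// Hypothetical illustration: a gizmo sphere is editor-only drawing,
// while a real collider is what raycasts actually detect.
public class GizmoVsCollider : MonoBehaviour
{
    public float radius = 0.5f;

    // Editor-only: drawn in the Scene view, invisible in builds,
    // and never seen by the physics system.
    void OnDrawGizmos()
    {
        Gizmos.color = Color.yellow;
        Gizmos.DrawWireSphere(transform.position, radius);
    }

    void Update()
    {
        // To make the same region detectable by a ray, the object needs
        // an actual SphereCollider component; the gizmo alone is not enough.
        Ray ray = new Ray(transform.position + Vector3.up * 5f, Vector3.down);
        if (Physics.Raycast(ray, out RaycastHit hit, 10f))
        {
            Debug.Log($"Ray hit collider: {hit.collider.name}");
        }
    }
}
```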

Does anyone know how I can make a custom gizmo visible in debug mode?
The gizmo is an EditorNode3DGizmo made of line segments. I would like it to be visible when I run the game in debug, similar to collision shapes/raycasts with 'Visible Collision Shapes' enabled.

GreatOdds
Did some digging through the source. It seems the collision shapes seen in debug mode are not the same as the editor gizmos. Instead, the collision object gets the debug mesh of the collision shape and adds it to the scene if 'Visible Collision Shapes' is set and the game is not running in the editor.

The issue is that there seem to be large collision radii around objects that I never set up. For example, a 2x2 sprite actually seems to have an 8x8 collision radius, which stops the player from walking near it.

Just go to the Inspector window after you have selected the player object. Make sure you have clicked on the GameObject that actually has the collider on it. Then in the Inspector, go to the Collider2D component and edit its size. If you hover your mouse over some of the collider's properties, a tooltip will explain how the size is edited.
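If you'd rather correct it from a script instead of the Inspector, a minimal sketch might look like this (class name and the 2x2 target size are assumptions based on the sprite described above):

```csharp
using UnityEngine;

// Hypothetical sketch: shrinking an oversized BoxCollider2D from code,
// equivalent to editing the Size field in the Inspector.
public class FixColliderSize : MonoBehaviour
{
    void Awake()
    {
        var box = GetComponent<BoxCollider2D>();
        if (box != null)
        {
            // Match the collider to the visible 2x2 sprite instead of
            // the inflated 8x8 bounds picked up from transparent padding.
            box.size = new Vector2(2f, 2f);
            box.offset = Vector2.zero;
        }
    }
}
```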

Sounds like you know where to start to fix the issue. As a note going forward: when you bring in sprites via the Sprite Editor, Unity tries its darndest to find the size of each sprite, whether it is a single sprite or you're slicing multiples. Once it determines that size and you hit Apply, box and circle (probably all) colliders will default to the size of the sprite. So if you forget to remove some transparent background, or have little floaty bits left in, Unity may think the sprite is larger than you intended and thus make any colliders larger than you want. But it is all easily fixable.

This module is part of the Particle System component, which simulates fluid entities such as liquids, clouds and flames by generating and animating large numbers of small 2D images in the scene. When you create a new Particle System GameObject, or add a Particle System component to an existing GameObject, Unity adds the Collision module to the Particle System. By default, Unity disables this module; to use it, enable the module's checkbox in the Inspector.

Since this module is part of the Particle System component, you access it through the ParticleSystem class. For information on how to access it and change values at runtime, see the Collision module API documentation.
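As a hedged sketch of that runtime access (class name is made up; the module properties shown are the common ones, not an exhaustive list):

```csharp
using UnityEngine;

// Sketch of runtime access to the Collision module via ParticleSystem.collision.
public class EnableParticleCollision : MonoBehaviour
{
    void Start()
    {
        var ps = GetComponent<ParticleSystem>();

        // Module accessors return lightweight struct handles; assigning
        // to their properties writes straight back to the Particle System.
        var collision = ps.collision;
        collision.enabled = true;
        collision.type = ParticleSystemCollisionType.World;
        collision.sendCollisionMessages = true; // needed for OnParticleCollision
    }
}
```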

When other objects surround a Particle System, the effect is often more convincing when the particles interact with those objects. For example, water or debris should be obstructed by a solid wall rather than simply passing through it. With the Collision module enabled, particles can collide with objects in the Scene.

You can also detect particle collisions from a script if Send Collision Messages is enabled. The script can be attached to the object with the particle system, or the one with the Collider, or both. By detecting collisions, you can use particles as active objects in gameplay, for example as projectiles, magic spells and power-ups. See the script reference page for MonoBehaviour.OnParticleCollision for further details and examples.
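A minimal receiver script might look like the following sketch (class name is hypothetical; the pattern of calling GetCollisionEvents inside OnParticleCollision follows the MonoBehaviour.OnParticleCollision reference):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hedged example of receiving particle collision messages, e.g. to treat
// particles as projectiles. Requires Send Collision Messages on the emitter.
public class ParticleHitReceiver : MonoBehaviour
{
    ParticleSystem ps;
    readonly List<ParticleCollisionEvent> events = new List<ParticleCollisionEvent>();

    void Awake()
    {
        ps = GetComponent<ParticleSystem>();
    }

    void OnParticleCollision(GameObject other)
    {
        // Fills the reusable list and returns the number of events this frame.
        int count = ps.GetCollisionEvents(other, events);
        for (int i = 0; i < count; i++)
        {
            Debug.Log($"Particle hit {other.name} at {events[i].intersection}");
        }
    }
}
```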

This cache consists of a plane in each voxel, where the plane represents the collision surface at that location. On each frame, Unity checks the cache for a plane at the position of the particle and, if there is one, uses it for collision detection. Otherwise, it asks the physics system. If a collision is returned, it is added to the cache for fast querying on subsequent frames.

The only difference between Medium and Low is how many times per frame the system is allowed to query the physics system. Low makes fewer queries per frame than Medium. Once the per-frame budget has been exceeded, only the cache is used for any remaining particles. This can lead to an increase in missed collisions, until the cache has been more comprehensively populated.

All gizmo drawing has to be done in either the MonoBehaviour.OnDrawGizmos or MonoBehaviour.OnDrawGizmosSelected functions of the script. MonoBehaviour.OnDrawGizmos is called when the Scene view or Game view is repainted. All gizmos that render in MonoBehaviour.OnDrawGizmos are pickable. MonoBehaviour.OnDrawGizmosSelected is called only if the object the script is attached to is selected.
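The two callbacks above can be sketched side by side; class name and the waypoint field are made-up examples:

```csharp
using UnityEngine;

// Minimal sketch: always-on vs selection-only gizmo drawing.
public class PatrolGizmos : MonoBehaviour
{
    public Vector3[] waypoints; // hypothetical patrol path

    // Called whenever the Scene or Game view repaints;
    // gizmos drawn here are pickable in the Scene view.
    void OnDrawGizmos()
    {
        if (waypoints == null) return;
        Gizmos.color = Color.cyan;
        for (int i = 0; i + 1 < waypoints.Length; i++)
        {
            Gizmos.DrawLine(waypoints[i], waypoints[i + 1]);
        }
    }

    // Called only while this object is selected.
    void OnDrawGizmosSelected()
    {
        Gizmos.color = Color.red;
        Gizmos.DrawWireSphere(transform.position, 1f);
    }
}
```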

I just don't understand why I can't seem to turn on the nav, pos, or even the anti-collision lights after my engines are fully up and running. Is there something that I'm missing? Would appreciate any help or advice on this.

I created a MoveIt config with the Setup Wizard from my URDF. When trying out demo.launch, I can plan to random goal positions, but I cannot specify goal positions manually by moving the gizmos. With earlier versions of the URDF, I was only able to rotate the gizmos about the wrist axis. How do I need to specify my arm so that I can select goals to plan to freely?

The interactive marker (IM) may not work, as a SCARA is typically a 4-DoF robot. The default KDL IK solver does not work very well with such kinematic configurations. As the IM relies on the IK solver working, you cannot drag it around like you would with a 6-DoF robot.

I did not say anything about MoveIt itself. MoveIt doesn't care; it works with robots from 1 DoF to N DoF. As long as you have a working IK solver, it can use it (or actually: if you want to be able to specify Pose goals, you'll need an IK solver; for joint-space goals you would not need one).

Found the solution myself: I had to change the wrist joint to revolute with rotation axis (0, 1, 0). I guess without it the arm does not have enough DoF to move the end-effector within the x-y plane with arbitrary rotations, and this is what is required to move the goal position freely in x-y.
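For readers hitting the same issue, the fix described above would look roughly like this as a URDF fragment; joint and link names here are made up, and the limit values are placeholders:

```xml
<!-- Hedged sketch: wrist joint declared revolute about the Y axis (0 1 0),
     as described above. Names and limits are illustrative, not from the post. -->
<joint name="wrist_joint" type="revolute">
  <parent link="forearm_link"/>
  <child link="wrist_link"/>
  <axis xyz="0 1 0"/>
  <limit lower="-3.14" upper="3.14" effort="10.0" velocity="1.0"/>
</joint>
```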

Notice in the first image above I also have a circle collider attached to my Player object. In the Scene view, that collider is not displayed either. And just like the gizmos, Unity hides the collider when the component is collapsed in the Inspector panel. When I expand the panel, it shows up as well.

3D shapes created for simulation serve the purpose of bringing realism to a scene. They also make everything in the scene easier to identify. If, for example, all of the objects in a scene were simple shapes like cubes and spheres, it could be difficult to distinguish objects in the simulation.

Visual shapes are detailed meshes. These shapes are made with the goal of bringing realism to a simulated scene. These shapes are more dense in polygons and usually have textures attached to them. These shapes are what camera sensors pick up and are also what the viewer sees when viewing simulations.

Blender has many tools to create simple/complex models. A great way to create complex objects is to start with something simple. In the example below we see a complex model created from a simple primitive. Although this is a great strategy for modeling, going from a simple mesh to a complex one usually requires a lot of tools to be used.

Because we used the example of the wheel above, I will be using that shape to go over my modeling workflow. I will also explain some of the tools and modifiers that I use. A basic understanding of the movement keys should be known prior to starting this tutorial: Recommended beginner guide

Oftentimes, problems after importing models into a program occur because of simple mistakes related to these gizmos. These problems are fairly common, but aren't hard to fix. Changing one of those attributes accordingly and exporting it out with the updates fixes most issues.

Shapes are made out of polygons. The more polygons a shape has, the more detailed it is. A shape with too many polygons can overcomplicate mesh editing or make simulations run very slowly. This is why it is important to optimize a shape so that it keeps most of its detail without the poly count getting too high.

Creating a complex model requires a good understanding of many tools in a 3d modeling program. They all serve their specific purposes and are extremely powerful. They have to be used on top of each other in different ways to get to an end result. The more you add/edit your shape the better it will look in the long run.

After you hit export in whatever file format you decide to go with, the "Blender File View" will appear. This is where you choose the location of your file. You can also choose how you want to export your model.

Drones are getting smarter all the time, with all kinds of sensors to avoid collisions and operate safely. Many of them are still not collision resistant, though. HiPeR Lab researchers have come up with a collision-resilient aerial vehicle with an icosahedron tensegrity structure. It can withstand high-speed impacts and resume operation after collisions.

The researchers developed an autonomous re-orientation controller to help these vehicles resume flight after an accident. These tensegrity drones are capable of operating in cluttered environments and can survive collisions at speeds of up to 6.7 m/s.
