mesh quality during deformations using MappingQEulerian


thomas stephens

Nov 14, 2016, 11:52:43 PM
to deal.II User Group
I'm working on a closed codimension-1 surface (think of an ellipsoid) and my goal is to iteratively evolve this surface to form tubulations and other localized significant deformations.  My current strategy is to compute an euler_vector at each iteration and create a new mapping at each time step through MappingQEulerian. 

So far this is going well, but I am only able to achieve relatively mild deformations before my mesh becomes twisted and deformed, leading to a catastrophic breakdown of my simulation. 

My question for the community is vague, but hopefully there is some discussion that can be had here:

What are some strategies for updating the euler_vector for MappingQEulerian that would help maintain mesh quality?  Currently my quadrilaterals are becoming very skewed as the surface evolves. 

Tom

Wolfgang Bangerth

Nov 15, 2016, 12:20:36 AM
to dea...@googlegroups.com
I can't say whether that's the same problem that is commonly reported in the
literature, but this sort of mesh entanglement is the motivation for people to
look at Arbitrary Lagrangian Eulerian (ALE) methods, instead of just Eulerian
methods.

Can you show a picture of your mesh? Is it just the mesh that is bad, or the
whole geometry?

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Jean-Paul Pelteret

Nov 15, 2016, 1:27:52 AM
to deal.II User Group
Hi Thomas,

Trilinos' Mesquite package provides mesh optimisation tools. I'd be happy to see this integrated into deal.II, so if you decide that it would help you solve your problem then I'd be willing to help you do so. The challenge here is the fact that it's a codim-1 problem, so you of course need to adjust the positions of vertices without changing the final topology (too much). I'm not immediately sure how one could manage this without some underlying parametric description of the surface, but perhaps it's possible.

Regards,
J-P

thomas stephens

Nov 15, 2016, 10:34:56 PM
to deal.II User Group



> I can't say whether that's the same problem that is commonly reported in the
> literature, but this sort of mesh entanglement is the motivation for people to
> look at Arbitrary Lagrangian Eulerian (ALE) methods, instead of just Eulerian
> methods.
>
> Can you show a picture of your mesh? Is it just the mesh that is bad, or the
> whole geometry?
>
> Best
>   W.

My initial mesh is not great - here's where I begin:

GridGenerator::hyper_sphere(triangulation, Point<spacedim>(0,0,0), 1.0);

triangulation.set_all_manifold_ids(0);

triangulation.refine_global(initial_global_refinement);

/* interpolate the map from the triangulation to the desired geometry for the
   first time; a, b, c are the axes of an ellipsoid */
VectorTools::interpolate(bending_dof_handler,
                         Initial_map_sphere_to_ellipsoid<spacedim>(a,b,c),
                         global_euler_vector);

const MappingQEulerian<dim,Vector<double>,spacedim> mapping(2, dof_handler, euler_vector);

template <int spacedim>
double Initial_map_sphere_to_ellipsoid<spacedim>::value(const Point<spacedim> &p,
                                                        const unsigned int component) const
{
  const double norm_p = p.distance(Point<spacedim>(0,0,0));
  // return the difference between the point on the ellipsoid and the point on
  // the original geometry
  return abc_coeffs[component]*p(component)/norm_p - p(component);
}
 


This gets me an ellipsoid, but the corners of the original hyper_sphere are just translated onto the ellipsoid by euler_vector.  (The color map shows the mean curvature of the surface.)
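[Editorial note: as a plain-C++ sanity check of that displacement formula - the names below are illustrative, not deal.II - adding the returned difference back to the original point should land exactly on the ellipsoid (x/a)^2 + (y/b)^2 + (z/c)^2 = 1, regardless of the radius of the starting point:]

```cpp
#include <array>
#include <cassert>
#include <cmath>

// The Euler displacement used above: map a point p (radially) onto the
// ellipsoid with semi-axes (a,b,c), then subtract the original point.
std::array<double,3> euler_displacement(const std::array<double,3> &p,
                                        const std::array<double,3> &abc)
{
  const double norm_p = std::sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
  std::array<double,3> d;
  for (int k = 0; k < 3; ++k)
    d[k] = abc[k]*p[k]/norm_p - p[k];  // ellipsoid point minus original point
  return d;
}

// Residual of the ellipsoid equation at the displaced point; should be ~0.
double ellipsoid_residual(const std::array<double,3> &p,
                          const std::array<double,3> &abc)
{
  const std::array<double,3> d = euler_displacement(p, abc);
  double r = -1.0;
  for (int k = 0; k < 3; ++k)
    {
      const double x = p[k] + d[k];
      r += (x/abc[k])*(x/abc[k]);
    }
  return r;
}
```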




I want to deform this shape in response to a scalar function (illustrated by the new colormap) on the surface, so I adaptively refine and then let the evolution go forward (the deformation will reduce a global free energy on the surface) - here's how that looks:




A few timesteps later the surface (and mesh) evolve to look like this: (the color map shows the mean curvature again)




This eventually crashes as the quadrilaterals become distorted.

 J-P, thank you for your generous response here.  This community is so quick to reply to questions that I feel guilty for not getting back to you both sooner than this evening!

I am going to take a closer look at Mesquite, and I am also going to start working toward a strategy published by Andrea Bonito, Ricardo Nochetto, and Miguel Sebastian Pauletti [pdf attached] (first developed in Pauletti's PhD thesis in the present context) that involves manually moving vertices around in a way that apparently preserves the surface geometry.

I still would love to know what you think based on the images above.

Tom
BonitoNochettoPauletti_GeometricallyConsistentMeshModification_2010.pdf

Wolfgang Bangerth

Nov 15, 2016, 10:42:45 PM
to dea...@googlegroups.com

Thomas,

> My initial mesh is not great - here's where I begin:
>
> GridGenerator::hyper_sphere(triangulation,Point<spacedim>(0,0,0),1.0);
>
> triangulation.set_all_manifold_ids(0);
>
> triangulation.refine_global(initial_global_refinement);
>
> /* interpolate the map from triangulation to desired geometry for the first time
>    a,b,c are the axes of an ellipsoid */
> VectorTools::interpolate(bending_dof_handler,
>                          Initial_map_sphere_to_ellipsoid<spacedim>(a,b,c),
>                          global_euler_vector);
>
> const MappingQEulerian<dim,Vector<double>,spacedim> mapping (2, dof_handler,
> euler_vector);
>
> double Initial_map_sphere_to_ellipsoid<spacedim>::value(const Point<spacedim> &p, const unsigned int component) const
> {
>   double norm_p = p.distance(Point<spacedim>(0,0,0));
>   // return the difference between the point on the ellipse and the point on
>   // the original geometry
>   return abc_coeffs[component]*p(component)/norm_p - p(component);
> }
>
> This gets me an ellipsoid, but the corners of the original hyper_sphere are
> just translated onto the ellipsoid by euler_vector. (The color map shows the
> mean curvature of the surface.)
>
> <https://lh3.googleusercontent.com/-tO37r5w5pMg/WCvK3kE48oI/AAAAAAAAFq4/kMmbQ6kBMq0GBVGAk4w6g29HP9TOK-q_gCLcB/s1600/Screen%2BShot%2B2016-11-15%2Bat%2B9.55.44%2BPM.png>
>
> I want to deform this shape in response to a scalar function (illustrated by
> the new colormap) on the surface, so I adaptively refine and then let the
> evolution go forward (the deformation will reduce a global free energy on the
> surface) - here's how that looks:
>
> <https://lh3.googleusercontent.com/-kJCXHwbjHUA/WCvMEWPD7gI/AAAAAAAAFrA/mdvLijljl7cITiw7lGTR6lOcnqbN-CBIACLcB/s1600/Screen%2BShot%2B2016-11-15%2Bat%2B9.59.21%2BPM.png>
>
> A few timesteps later the surface (and mesh) evolve to look like this: (the
> color map shows the mean curvature again)
>
> <https://lh3.googleusercontent.com/-dunjJsO_ka8/WCvMd_nofbI/AAAAAAAAFrE/9SQV3Kpp0FcFBgGA6Tu-wA-zjVRNdWodACLcB/s1600/Screen%2BShot%2B2016-11-15%2Bat%2B10.02.30%2BPM.png>
>
> This eventually crashes as the quadrilaterals become distorted.

It's not my research field, but there should be literature on these sorts of
phenomena. I think that ultimately, you need to find a way to relax mesh nodes
in tangential directions.

thomas stephens

Nov 16, 2016, 11:42:13 AM
to deal.II User Group
J-P,

I should not have given Mesquite such short shrift in my reply yesterday.  With regard to the MappingQEulerian approach, I would imagine Mesquite would take the mapped points as the mesh points (basically, what my figures show as the mesh), then smooth those out a bit, and return new vertex locations.  Then I would need to update the euler_vector to reflect the new vertex positions - this seems to me like a difficult step since the euler_vector describes displacements of degrees of freedom rather than vertices.  My difficulty comes from not truly understanding how to think about degrees of freedom in that euler_vector in relation to my naive view of the descriptors of a mesh - i.e. vertices. 

This of course raises the question of how on earth I am going to fix my mesh if I don't understand the relation between dofs and the vertices I am not happy with! I'll be working on this at my desk; any further discussion would be appreciated!

Also, thank you again for offering help with adding Mesquite to dealii.  My reluctance to immediately take you up on this offer is that I am afraid of going down a rabbit hole, although I acknowledge that the solution to my mesh problems most likely lies deep in a tunnel somewhere.

Tom

On Tuesday, November 15, 2016 at 1:27:52 AM UTC-5, Jean-Paul Pelteret wrote:

Wolfgang Bangerth

Nov 16, 2016, 1:35:43 PM
to dea...@googlegroups.com

Thomas,

> I should not have given Mesquite such short shrift in my reply
> yesterday. With regard to the MappingQEulerian approach, I would
> imagine Mesquite would take the mapped points as the mesh points
> (basically, what my figures show as the mesh), then smooth those out a
> bit, and return new vertex locations. Then I would need to update the
> euler_vector to reflect the new vertex positions - this seems to me like
> a difficult step since the euler_vector describes displacements of
> degrees of freedom rather than vertices. My difficulty comes from not
> truly understanding how to think about degrees of freedom in that
> euler_vector in relation to my naive view of the descriptors of a mesh -
> i.e. vertices.

The way you ought to see Eulerian mappings is that the geometry is
simply described by the graph of a piecewise polynomial function. It
happens that you represent this piecewise polynomial function by a
vector-valued finite element field, but this field need not have
anything to do with the solution of the PDE you're trying to solve and
may in fact use completely different elements. At least in principle,
the two also don't need to be defined on the same mesh, though that is
convenient in practice.

What mesh relaxation would do is replace the Eulerian (geometric) field
described by one solution vector by another solution vector that leads
to less distortion of cells. That the solution vector may correspond to
an interpolating finite element that also has degrees of freedom located
at vertices is really not all that important. Think simply of the
Eulerian field as a vector field with arrows from every point of the
reference domain to a corresponding point in the domain you're trying to
describe, without thinking of the reference domain being subdivided into
cells.

I hope this helps a bit.

Jean-Paul Pelteret

Nov 16, 2016, 4:28:36 PM
to deal.II User Group
Hi Tom,

Yes, you're essentially correct. With Mesquite I believe one would essentially redefine a triangulation in whatever format it wants (I think you provide nodal coordinates and connectivities), tell it which nodes remain fixed (i.e. provide some constraints) and then simply specify some algorithm with which it must update the nodal coordinate positions. So you would effectively give it a set of mapped coordinates of the triangulation and it returns a new set for you. The difference between these new mapped points and the reference grid nodal coordinates is your Euler vector.
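[Editorial note: that last step can be sketched in a few lines of plain C++ - names are illustrative, spacedim fixed to 3 here - the Euler vector is just the component-wise difference between the smoothed mapped coordinates and the reference-grid coordinates:]

```cpp
#include <array>
#include <cassert>
#include <vector>

// One Euler-vector entry per vertex and space component: smoothed mapped
// position minus the reference-grid position.
std::vector<double> euler_from_positions(
    const std::vector<std::array<double,3> > &reference,
    const std::vector<std::array<double,3> > &smoothed)
{
  std::vector<double> euler;
  euler.reserve(3 * reference.size());
  for (std::size_t v = 0; v < reference.size(); ++v)
    for (int k = 0; k < 3; ++k)
      euler.push_back(smoothed[v][k] - reference[v][k]);
  return euler;
}
```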

As I see it, there are only three real complexities with this approach:
1. Defining the triangulation in its desired format (I think this is documented within the Mesquite documentation)
2. Translating this Euler vector into something that's relevant to the FESystem for your problem (this is what you're struggling to conceptualise, but it's really not difficult at all).
3. Dealing with hanging nodes (which, if I recall correctly, Mesquite does support).

For the third point, I think that one must disconnect the mesh update from the problem that you're solving and treat it as a completely separate problem; this ties in with what Wolfgang has also said below. This is advantageous because one can then reuse this algorithm for various multiphysics problems (e.g. fluid-structure interaction), or just to increase the mesh quality before starting a simulation!

I would create an auxiliary system of dim x linear FE_Q's, which have support points that coincide with the vertices. Then the issue of relating DoFs and vertex locations is trivial because of these three functions, which are valid when using Q1 elements. Once one has solved the mesh update problem, one populates the Euler solution (displacement) vector corresponding to this auxiliary DoFHandler. Then you can just do an interpolation / projection of this vector onto your current FE space, or update the triangulation vertices if desired. This also eliminates any issues that arise if, say, your primary problem solves for a displacement with quadratic or discontinuous elements, or if the displacement is only one component of the global solution.

My thinking is that one could add the following functionality to the library:
1. A function that takes in a triangulation and a (possibly empty) Euler vector defining the initial mapped coordinates of the triangulation vertices, and some boolean constraints vector. This would then return an optimal Q1 Euler vector as computed by Mesquite.
2. A function that actually moves the triangulation vertices for you based on this vector.
3. A function that would interpolate the optimal Q1 Euler vector onto a given FE space, which would presumably represent the displacement solution to some other problem.

Does this make sense? I hope I haven't rambled on too much.
Cheers,
J-P

Jean-Paul Pelteret

Nov 16, 2016, 4:40:05 PM
to deal.II User Group
Hi Tom,

An alternative to implementing a complex mesh update strategy is to define an initial mesh that, when mapped via your Euler vector, produces a higher quality mesh. For example, you could remove the "top" element of the coarse mesh and substitute it with a set of elements with an o-grid topology. Of course, you would have to have some rough idea as to what your final grid topology will look like in order to optimise this.

By the way, in the images you've shown, can you indicate where the low quality elements are? In ParaView there's a filter that can help with this. If one considers the mesh on this ellipsoid as something mapped from a cubic grid, then I presume you're referring to the elements at the cube corners.

Regards,
J-P

Wolfgang Bangerth

Nov 16, 2016, 5:09:45 PM
to dea...@googlegroups.com
On 11/16/2016 02:28 PM, Jean-Paul Pelteret wrote:
> 1. Defining the triangulation in its desired format (I think this is
> documented within the Mesquite documentation)
> 2. Translating this Euler vector into something thats relevant to the
> FESystem for your problem (this is what you're struggling to
> conceptualise, but its really not difficult at all).
> 3. Dealing with hanging nodes (which if I recall correctly Mesquite does
> support).
>

In the case that the mesh lives in a higher dimensional space, you also
have to enforce -- either as part of the problem formulation, or as a
postprocess step for the output of Mesquite -- that the new nodes still
need to lie on the geometry as described before. In other words, nodal
points can move *within* the surface, but not perpendicular to it.

I have no idea whether that is possible within Mesquite, but in the
worst case you can always project back to the previous surface.

thomas stephens

Nov 16, 2016, 5:30:58 PM
to deal.II User Group, bang...@colostate.edu

In the case that the mesh lives in a higher dimensional space, you also
have to enforce -- either as part of the problem formulation, or as a
postprocess step for the output of Mesquite -- that the new nodes still
need to lie on the geometry as described before. In other words, nodal
points can move *within* the surface, but not perpendicular to it.

It is my understanding that this is precisely what Bonito et al.'s approach enforces - at least in the setting that I will be working in. Each solution I obtain contains a new displacement (velocity field) and a new mean curvature of the surface. The approach I am referring to, called geometric consistency, uses the identity h = -\laplaceBeltrami x to obtain points that lie on the new surface (at least, up to supporting the same mean curvature h). Since I have the mean curvature of the new surface at each time step, I can place points x_{n+1} on the (n+1)-st surface according to h_{n+1} = -\laplaceBeltrami x_{n+1}, where h_{n+1} is an interpolation of h_n onto some new mesh, and the Laplace-Beltrami operator is discretized using a mapping that is interpolated onto the new mesh. This is described in the paper I attached higher up in this thread. I'm working on implementing this right now, but I'm not sure it will redistribute mesh points in tangential directions. I think there's a second step to this that moves vertices around.


  
I have no idea whether that is possible within Mesquite, but in the
worst case you can always project back to the previous surface. 

One of the bullet points on the Mesquite page is "Improve surface meshes, adapt to surface curvature", which at least sounds promising - I'm looking into that this evening.

thomas stephens

Nov 16, 2016, 5:37:32 PM
to deal.II User Group
 With Mesquite I believe one would essentially redefine a triangulation in whatever format it wants (I think you provide nodal coordinate and connectivities), tell it which nodes remain fixed (i.e. provide some constraints) and then simply specify some algorithm with which it must update the nodal coordinate positions. So you would effectively give it a set of mapped coordinates of the triangulation and it returns a new set for you. The difference between these new mapped points and the reference grid nodal coordinates is your euler vector. 

As I see it, there are only three real complexities with this approach:
1. ...
2. Translating this Euler vector into something that's relevant to the FESystem for your problem (this is what you're struggling to conceptualise, but it's really not difficult at all).
3. ... 
 
Yes, this is what I am struggling to understand; you have put it very simply. Along that direction:
I would create an auxiliary system of dim x linear FE_Q's, which have support points that coincide with the vertices. Then the issue of relating DoFs and vertex locations is trivial because of these three functions, which are valid when using Q1 elements.
I can see how this is trivial for linear FE_Q's, and these three links should make this concrete. 
 
I think that one must disconnect the problem that you're solving and solve the mesh update problem as a completely separate problem; this ties in with what Wolfgang has also said below. This is advantageous because one can then reuse this algorithm for various multiphysics problems (e.g. fluid-structure interaction), or just to increase the mesh quality before starting a simulation!


I do agree with this, which really makes this particular rabbit hole (Mesquite) very attractive. 


I would create an auxiliary system of dim x linear FE_Q's, which have support points that coincide with the vertices. Then the issue of relating DoFs and vertex locations is trivial because of these three functions, which are valid when using Q1 elements. Once one has solved the mesh update problem, one populates the Euler solution (displacement) vector corresponding to this auxiliary DoFHandler. Then you can just do an interpolation / projection of this vector onto your current FE space, or update the triangulation vertices if desired. This also eliminates any issues that arise if, say, your primary problem solves for a displacement with quadratic or discontinuous elements, or if the displacement is only one component of the global solution.

My thinking is that one could add the following functionality to the library:
1. A function that takes in a triangulation and a (possibly empty) Euler vector defining the initial mapped coordinates of the triangulation vertices, and some boolean constraints vector. This would then return an optimal Q1 Euler vector as computed by Mesquite.
2. A function that actually moves the triangulation vertices for you based on this vector.
3. A function that would interpolate the optimal Q1 Euler vector onto a given FE space, which would presumably represent the displacement solution to some other problem.

Does this make sense? 

Yes, this makes sense.  To be clear, the functionality in step 2 ("A function that actually moves the triangulation vertices for you based on this vector") is not strictly necessary in the present context, is that right?

thomas stephens

Nov 16, 2016, 5:43:57 PM
to deal.II User Group
On Wednesday, November 16, 2016 at 4:40:05 PM UTC-5, Jean-Paul Pelteret wrote:
Hi Tom,

An alternative to implementing a complex mesh update strategy is to define an initial mesh that, when mapped via your Euler vector, produces a higher quality mesh. For example, you could remove the "top" element of the coarse mesh and substitute it with a set of elements with an o-grid topology. Of course, you would have to have some rough idea as to what your final grid topology will look like in order to optimise this.
 
This is interesting. 
 
By the way, in the images you've shown can you indicate where the low quality elements are? In Paraview there's a filter that can help with this. If one considers the mesh on this ellipse as something mapped from a cubic grid, then I presume you're referring to the elements at the cube corners.
 
I made those images using VisIt, but maybe it's time to move to ParaView - I'll look for such a filter. The grid elements shown in my last figure above may not be pretty, but they may also not be distorted enough to affect the accuracy of my solution. The problem I am having is that very soon after those snapshots the solution breaks down completely. I agree with Wolfgang that the tangential motions are killing me. I am looking for very large deformations over time, such as thin tubulations on the scale of the length of the ellipsoid itself.

Thank you for continuing this conversation with me, I appreciate it.

Tom

Wolfgang Bangerth

Nov 16, 2016, 6:00:26 PM
to thomas stephens, deal.II User Group
On 11/16/2016 03:30 PM, thomas stephens wrote:
> but I'm not sure it will redistribute mesh points in tangential
> directions. I think there's a second step to this that moves vertices
> around.

The important realization is that there are infinitely many ways of
parametrizing the same surface. That's easy to see if you just think
about a curve C that you want to describe as a function (x(s),y(s))
mapped from a reference domain s \in [0,1]. There are many such
functions x(s), y(s) that lead to the same curve. Some move along C
slower in the beginning and faster at the end, some do it the other way
around. One particular one moves at constant speed -- that's the one
that uses the arc length (times a constant) for the parameter s.

The same is true with Eulerian mappings: If you move some of the nodes
tangentially along the surface, it's still the same surface. (At least
up to the discretization accuracy.)
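[Editorial note: the curve example above can be made concrete in plain C++ (not deal.II). Two parametrizations with different speeds trace the same quarter circle; this reparametrization freedom is exactly what mesh relaxation exploits:]

```cpp
#include <cassert>
#include <cmath>

// Two parametrizations of the same quarter circle, s in [0,1].
// uniform_param moves at constant speed (proportional to arc length);
// skewed_param traces the same image but bunches points near s = 0.
void uniform_param(double s, double &x, double &y)
{
  const double pi = std::acos(-1.0);
  x = std::cos(s * pi / 2.0);
  y = std::sin(s * pi / 2.0);
}

void skewed_param(double s, double &x, double &y)
{
  uniform_param(s * s, x, y);  // same curve, non-uniform speed
}
```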

Jean-Paul Pelteret

Nov 17, 2016, 10:02:53 AM
to deal.II User Group
Yes, this is what I am struggling to understand, you have put it very simply. 

Great! I'm glad that gave you a bit more insight. I only hope you didn't take that as a patronising comment :-)
 
I can see how this is trivial for linear FE_Q's, and these three links should make this concrete. 

I forgot to say that it makes sense to represent the mesh-motion problem with linear elements, since we consider the underlying geometry to have only straight lines between vertices anyway.

Yes, this makes sense.  To be clear, the functionality in step 2 ("A function that actually moves the triangulation vertices for you based on this vector") is not strictly necessary in the present context, is that right?

For your purposes, no, I don't think that this is necessary. It would just be useful for testing and for demonstration in a tutorial. We could showcase the functionality in step-49, e.g. by verifying that it undoes the distortion of a Cartesian mesh.

I must admit that I lost sight of the fact that you're working in codim-1. If you choose to go down this road (as opposed to the algorithm you've discussed previously) then it would be good to first check what facilities Mesquite has to deal with this. Maybe internally it can project the optimised node positions back onto the original manifold? 

thomas stephens

Nov 21, 2016, 11:50:06 AM
to deal.II User Group
J-P (and hopefully Oded is still reading this thread, too),

I was looking into Mesquite over the weekend and noticed that the project seems to have stalled, and also that the doxygen documentation was a little weak.  I also reached out to Oded (from this thread: https://groups.google.com/forum/?utm_source=digest&utm_medium=email#!topic/dealii/FeXy6aK2zSs) and he basically confirmed what I am afraid of.  

I have been burned by orphaned third party libraries in my past, do you have a different experience with Mesquite that makes it attractive?  

Oded recommended CUBIT, which does require a license, though I happen to have one.

I'm being pulled away from my mesh problem until the end of the week, but then I am going to start working on the algorithm that I mentioned here: https://groups.google.com/d/msg/dealii/tZK3R79j9d4/vXZIdhnFCQAJ  


In hopes of providing at least some relief for future visitors looking to solve similar problems: this will be implemented for a codimension-1 surface, and it requires a description of the mean curvature and normal vectors on the surface to be meshed. That may not always be relevant, but it should always be computable in deal.II using the identity Hn = -laplace_beltrami Id, where Hn is the mean curvature times the outward normal vector and Id is the identity on the surface. This vector mean curvature can be computed for a parametric surface using the code attached below.
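[Editorial note: a quick consistency check of that identity, using standard differential geometry rather than the attached code: on a sphere of radius R the coordinate functions x are eigenfunctions of the Laplace-Beltrami operator, and the two sides agree term by term:]

\Delta_\Gamma \, x = -\frac{2}{R^2}\, x,
\qquad
H\,\mathbf{n} = \frac{2}{R} \cdot \frac{x}{R} = \frac{2}{R^2}\, x
\quad\Longrightarrow\quad
H\,\mathbf{n} = -\Delta_\Gamma\, x,

with H = 2/R the (additive) mean curvature of the sphere and n = x/R its outward unit normal.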



vector_mean_curvature_identity.cc

Jean-Paul Pelteret

Nov 22, 2016, 2:27:23 AM
to deal.II User Group
Hi Thomas,

Yes, I agree that the online documentation (like that of many other Trilinos packages) is not up to scratch. Have you had a look at the user guide? I think that chapter 6 is particularly relevant to your application. This is much better than the online docs, and gives some worked-through examples. However, I think we can both agree that we're spoiled by deal.II's level of detail in documentation, so it's still a little harder to digest, and perhaps not all of the driver routines are well demonstrated or discussed in depth.

In truth, I've never used Mesquite programmatically before, but I've found myself using it many times indirectly through Cubit. This, and the fact that the primary author is clearly a very big name in mesh optimisation, was (and still is) my driver for wanting to include it in deal.II. It also has a lot of functionality that would be very tedious to reproduce individually. If it's usable in the context of mesh optimisation for CAD applications (i.e. through Cubit), I reckon it should be good for most applications. I don't have a new version of Cubit/Trellis, but if a more recent version still ships with Mesquite then, in my opinion, that speaks to the quality of the toolbox, regardless of whether it has stagnated or not. But maybe some other folk would disagree with this assessment. It's certainly a risk that one might hit an issue that is unresolvable; but then again, Mesquite is open-source along with the Trilinos suite, so one could always write a patch for it. As a side point, this also gives you access to some worked examples.

Ultimately I can't tell you which path is best for your particular application because (a) I have no real familiarity with the problem you're solving and (b) I'm certainly no expert in mesh optimisation (I've implemented one suite of routines plus extensions to perform such an operation; see Jasak, "Automatic mesh motion for the unstructured Finite Volume Method"). So I can't say for certain that this will be able to solve your particular problem. I also can't weigh the effort it takes to implement a set of wrappers for it against solving the problem some other way. I went the "other way" route, solved my mesh optimisation problem, and was left with the impression that (for my application) it would have been better to bite the bullet and use Mesquite or some other focussed mesh optimisation toolkit.

If you decide against using it, then one day I'll get around to writing the wrappers myself, because I am of the opinion that the power of the toolkit for traditional applications outweighs its stagnation. However, I just won't be doing this in the immediate future.

J-P

thomas stephens

Nov 30, 2016, 11:43:22 AM
to dea...@googlegroups.com
J-P,

I have taken some good advice from Oded (it turns out we work one building over from each other!), and I started using Cubit to smooth my meshes.  Cubit requires a license, but it does seem to have some usage within the deal.II community, so I hope whatever comes out of this is useful for somebody.  (I have noted your Python scripts interfacing with Cubit on the GitHub wiki.)

I'm trying hard to maintain the MappingQEulerian approach in my current model setup, and I wanted to return to the outline you put forward to see if 'Mesquite' can simply be replaced with 'Cubit' in your writeup: 

My thinking is that one could add the following functionality to the library:
1. A function that takes in a triangulation and a (possibly empty) Euler vector defining the initial mapped coordinates of the triangulation vertices, and some boolean constraints vector. This would then return an optimal Q1 Euler vector as computed by Mesquite.

At the moment I have a function that takes an euler_vector defining initial mapped coordinates, along with a dof_handler, for an existing surface, and returns a new Triangulation object containing the smoothed mapped vertices from Cubit.  The euler_vector is of size dof_handler.n_dofs(), and I would like this to remain arbitrary so I can use higher-order elements (ultimately I'm computing the mean curvature of the surface).  The Triangulation holding the smoothed vertices arises because I am using GridIn::read_ucd(std::ifstream in) to get data back from Cubit.  The strategy here is to write my original surface to an abaqus file (code to accomplish this is attached), manipulate it and export it back to deal.II using the Python API for Cubit during runtime, and read it back using GridIn.  (Exporting from Cubit is actually a bit of a problem - I have submitted a service email to them - but if I do this through the GUI it all works.)
 
2. A function that actually moves the triangulation vertices for you based on this vector.

At this point the Triangulation attached to my GridIn object is the new triangulation, so this may not be a problem.
 
3. A function that would interpolate the optimal Q1 Euler vector onto a given FE space, which would presumably represent the displacement solution to some other problem.


Here's where my question for you comes in: at this stage I have a new Triangulation (call it temp_triangulation) which holds my optimal vertices.  The vertices in temp_triangulation live on some surface in real space.  I have my original Triangulation (call it orig_triangulation) which satisfies orig_triangulation.n_vertices() == temp_triangulation.n_vertices().  How do I obtain a new euler_vector of size dof_handler.n_dofs() at this stage?

I have done the following so far:

  /* update euler_vector to describe smoothed mesh */
  FESystem<dim,spacedim> vertex_fe(FE_Q<dim,spacedim>(1), spacedim);
  DoFHandler<dim,spacedim> vertex_dof_handler(temp_triangulation);
  vertex_dof_handler.distribute_dofs(vertex_fe);
  Vector<double> vertex_euler_vector(vertex_dof_handler.n_dofs());

  std::vector<Point<spacedim> > original_vertices = triangulation.get_vertices();
  std::vector<bool> vertex_touched(temp_triangulation.n_vertices(), false);

  typename DoFHandler<dim,spacedim>::active_cell_iterator
    cell = vertex_dof_handler.begin_active(),
    endc = vertex_dof_handler.end();
  for (; cell!=endc; ++cell)
    for (unsigned int v=0; v < 4; ++v)
      if (!vertex_touched[cell->vertex_index(v)])
      {
        for (unsigned int k=0; k<spacedim; ++k)
        {
          vertex_euler_vector(cell->vertex_dof_index(v,k))
            = cell->vertex(v)(k) - original_vertices[cell->vertex_index(v)](k);
        }
        vertex_touched[cell->vertex_index(v)] = true;
      }

Basically, I have built vertex_euler_vector to map the vertices of my original reference domain to new points which are more or less on my original surface.  This holds essentially Q1 information - only information at the nodes of the original reference domain.  I think that what I need to do now is interpolate vertex_euler_vector onto my original finite element space (which was Q2, but could be anything).  If that is what is needed, I would appreciate some advice on how to accomplish it!

Does any of this sound reasonable?



  •  Create an FESystem<dim,spacedim> vertex_fe and initialize it: vertex_fe(FE_Q<dim,spacedim>(1),spacedim); create a DoFHandler vertex_dof_handler initialized with temp_triangulation: vertex_dof_handler(temp_triangulation); and call vertex_dof_handler.distribute_dofs(vertex_fe)
  • Build a Vector called vertex_euler_vector and reinit it with vertex_dof_handler.n_dofs(): vertex_euler_vector.reinit(vertex_dof_handler.n_dofs())
  • Loop through the cells of temp_triangulation and set vertex_euler_vector(vertex_dof_index(v,k)) to the difference between each vertex of temp_triangulation and the corresponding vertex of orig_triangulation



     



Jean-Paul Pelteret

unread,
Dec 1, 2016, 1:52:08 AM
to deal.II User Group
Hi Tom,

It's great that you're getting somewhere with this!

I have taken some good advice from Oded (it turns out we work one building over from each other!),

What a happy coincidence :-)
 
I'm trying hard to maintain the MappingQEulerian approach in my current model setup, and I wanted to return to the outline you put forward to see if 'Mesquite' can simply be replaced with 'Cubit' in your writeup: 

In theory this shouldn't be a problem. This is sort of a "black-box" step.
 
  (exporting from cubit is actually a bit of a problem, I have submitted a service email to them - but if I do this through the gui it all works)

Here's a snippet of one of my cubit scripts that may be of use:

# === Export geometry ===
# - Cubit format
cubit.cmd('save as "' + out_mesh.output_cub_file + '" overwrite')
# - Abaqus format
cubit.cmd('set Abaqus precision 6')
cubit.cmd('export Abaqus "' + out_mesh.output_abaqus_file + '" dimension ' + str(in_geom.dim) + ' overwrite')

2. A function that actually moves the triangulation vertices for you based on this vector.

At this point the Triangulation attached to my GridIn object is the new triangulation, so this may not be a problem.

Ok, so this might be a point of difficulty. See the answer to your next question...
 
 
3. A function that would interpolate the optimal Q1 Euler vector onto a given FE space, which would presumably represent the displacement solution to some other problem.


Here's where my question for you comes in:  At this stage I have a new Triangulation (call it temp_triangulation) which holds my optimal vertices.  The vertices in temp_triangulation live on some surface in real space.  I have my original Triangulation (call it orig_triangulation) which satisfies orig_triangulation.n_vertices() == temp_triangulation.n_vertices().  How to I obtain a new euler_vector of size dof_handler.n_dofs() at this stage? 

I have done the following so far:

  /* update euler_vector to describe smoothed mesh */
  FESystem<dim,spacedim> vertex_fe(FE_Q<dim,spacedim>(1), spacedim);
  DoFHandler<dim,spacedim> vertex_dof_handler(temp_triangulation);
  vertex_dof_handler.distribute_dofs(vertex_fe);
  Vector<double> vertex_euler_vector(vertex_dof_handler.n_dofs());

  std::vector<Point<spacedim> > original_vertices = triangulation.get_vertices();
  std::vector<bool> vertex_touched(temp_triangulation.n_vertices(), false);

  typename DoFHandler<dim,spacedim>::active_cell_iterator
    cell = vertex_dof_handler.begin_active(),
    endc = vertex_dof_handler.end();
  for (; cell!=endc; ++cell)
    for (unsigned int v=0; v < 4; ++v)

By "4", you mean GeometryInfo<dim>::vertices_per_cell, right?
 
      if (!vertex_touched[cell->vertex_index(v)])
      {
        for (unsigned int k=0; k<spacedim; ++k)
        {
          vertex_euler_vector(cell->vertex_dof_index(v,k))
            = cell->vertex(v)(k) - original_vertices[cell->vertex_index(v)](k);
        }
        vertex_touched[cell->vertex_index(v)] = true;
      }


At first glance this looks ok. My biggest concern is that you've assumed that your old and new triangulations have the same vertex ordering. I'm inclined to think that this is ok because I don't think that Cubit renumbers mesh entities unless you ask it to, and nor does deal.II. But you should probably hand-check this on a small geometry.
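One way to make that hand-check mechanical rather than visual (a toy sketch, not deal.II code: it assumes the smoothing moved each vertex only slightly, so the nearest new vertex to each old vertex should carry the same index):

```cpp
#include <array>
#include <cassert>
#include <limits>
#include <vector>

using Point3 = std::array<double, 3>;

// Returns true if, for every old vertex i, the nearest new vertex is
// also vertex i -- i.e. the two meshes agree on vertex ordering.
bool same_vertex_ordering(const std::vector<Point3> &old_vertices,
                          const std::vector<Point3> &new_vertices)
{
  if (old_vertices.size() != new_vertices.size())
    return false;

  for (std::size_t i = 0; i < old_vertices.size(); ++i)
    {
      std::size_t nearest = 0;
      double best_d2 = std::numeric_limits<double>::max();
      for (std::size_t j = 0; j < new_vertices.size(); ++j)
        {
          double d2 = 0.;
          for (unsigned int k = 0; k < 3; ++k)
            {
              const double dk = old_vertices[i][k] - new_vertices[j][k];
              d2 += dk * dk;
            }
          if (d2 < best_d2)
            {
              best_d2 = d2;
              nearest = j;
            }
        }
      if (nearest != i)
        return false;
    }
  return true;
}
```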

Basically, I have built vertex_euler_vector to map the vertices of my original reference domain to new points which are more or less on my original surface.  This holds essentially Q1 information - only information at the nodes of the original reference domain.  I think that what I need to do now is interpolate vertex_euler_vector onto my original finite element space (which was Q2, but could be anything).  If that is what is needed, I would appreciate some advice on how to accomplish it!

I think that VectorTools::interpolate_to_different_mesh could do the job for you; it needs the same FE discretisation, which I think means that both DoFHandlers (both based on the original triangulation) must be structured like FESystem<dim>(FE_Q(n),dim). If I'm wrong on this then you could wrap your Euler vector in an FEFieldFunction and just use one of the VectorTools::interpolate functions - this may be slow if the triangulation is huge, but it would be a good first step to take.
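For what it's worth, the operation itself is simple to picture in 1D: interpolating a Q1 (vertex-nodal) field onto Q2 support points just adds a midpoint node per cell whose value is the average of the two vertex values. The toy below only illustrates that idea; it is not how deal.II implements it:

```cpp
#include <cassert>
#include <vector>

// vertex_values: Q1 nodal values at the n vertices of a 1D mesh with
// n-1 cells.  Returns the values at the Q2 support points, ordered
// vertex, midpoint, vertex, midpoint, ..., vertex.
std::vector<double> q1_to_q2_nodes(const std::vector<double> &vertex_values)
{
  std::vector<double> q2_values;
  for (std::size_t cell = 0; cell + 1 < vertex_values.size(); ++cell)
    {
      q2_values.push_back(vertex_values[cell]);             // left vertex
      q2_values.push_back(0.5 * (vertex_values[cell] +
                                 vertex_values[cell + 1])); // new midpoint
    }
  q2_values.push_back(vertex_values.back());                // last vertex
  return q2_values;
}
```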
 
Does any of this sound reasonable?

Well, none of it sounds crazy :-) 

J-P

thomas stephens

unread,
Dec 1, 2016, 7:45:23 AM
to dea...@googlegroups.com
It's great that you're getting somewhere with this!

J-P, you (and the entire dealii community) are a paragon of encouragement, and I appreciate that very much. 

After I wrote you yesterday I was able to figure out how to 'play a journal file' from an embedded Python session within dealii, so my problem

  (exporting from cubit is actually a bit of a problem, I have submitted a service email to them - but if I do this through the gui it all works)

is no longer a problem.  Thank you for your code snippet, and here's what I came up with yesterday: (this is from within my dealii code)


  // taken from: http://stackoverflow.com/questions/12142174/run-a-python-script-with-arguments

  char export_script[50];
  char export_filename[150];
  snprintf(export_script,   sizeof(export_script),   "export_codim_1_to_ucd.py");
  snprintf(export_filename, sizeof(export_filename), "/home/tds3/com/biomembranes/scratch/mesh_modification/cubit_strategy/data/smoothed_from_cubit.ucd");

  FILE *file = fopen(export_script, "r");
  AssertThrow(file != nullptr, ExcMessage("could not open export script"));

  const size_t argc = 2;
  char *argv[2];   // fixed size: variable-length arrays are not standard C++
  argv[0] = export_script;
  argv[1] = export_filename;

  // all this junk requires Python version < 3.0
  // because PySys_SetArgv() does not like the char* to wchar_t* conversion of argv
  Py_SetProgramName(argv[0]);
  Py_Initialize();

  PyRun_SimpleString("import cubit, sys\n"
                     "cubit.init('')\n"
                     "print('cubit imported')\n"
                     "cubit.cmd(\"import abaqus mesh geometry './data/abaqus_test.inp'\")\n"
                     "cubit.cmd(\"smooth surface all laplace\")\n"
                    );
  PySys_SetArgv(argc, argv);
  PyRun_SimpleFile(file, export_script);
  fclose(file);   // was missing: close the script file before shutting down

  Py_Finalize();
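For anyone wondering what the "smooth surface all laplace" command above actually does: a Laplacian smoothing sweep moves each non-fixed vertex toward the average of its neighbours. A 1D caricature follows (cubit of course does this on the surface mesh; the chain of x-coordinates here is invented):

```cpp
#include <cassert>
#include <vector>

// One-dimensional Laplacian smoothing: each interior point is replaced
// by the average of its two neighbours; the endpoints stay fixed.
std::vector<double> laplace_smooth_1d(std::vector<double> x,
                                      const unsigned int n_sweeps)
{
  for (unsigned int s = 0; s < n_sweeps; ++s)
    {
      std::vector<double> next = x;
      for (std::size_t i = 1; i + 1 < x.size(); ++i)
        next[i] = 0.5 * (x[i - 1] + x[i + 1]);
      x = next;
    }
  return x;
}
```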


(As a quick aside - I call this export script because I can't get cubit to export codimension 1 surfaces using its export commands.  The output is always flattened in the sense that the z-coordinates of vertices are stripped.  Perhaps your code snippet will help me by setting the dimension to 3.  For now this works nicely, and cubit is slowly responding to my email.
As well, I chose ucd because GridIn::read_abaqus() doesn't like to find the z-vertex in standard abaqus .inp files for codimension 1 surfaces, but read_ucd() works as expected.)


2. A function that actually moves the triangulation vertices for you based on this vector.

At this point the Triangulation attached to my GridIn object is the new triangulation, so this may not be a problem.
 
Ok, so this might be a point of difficulty. See the answer to your next question...
 
3. A function that would interpolate the optimal Q1 Euler vector onto a given FE space, which would presumably represent the displacement solution to some other problem.


Here's where my question for you comes in:  At this stage I have a new Triangulation (call it temp_triangulation) which holds my optimal vertices.  The vertices in temp_triangulation live on some surface in real space.  I have my original Triangulation (call it orig_triangulation) which satisfies orig_triangulation.n_vertices() == temp_triangulation.n_vertices().  How do I obtain a new euler_vector of size dof_handler.n_dofs() at this stage?
I have done the following so far:
  /* update euler_vector to describe smoothed mesh */
  FESystem<dim,spacedim> vertex_fe(FE_Q<dim,spacedim>(1), spacedim);
  DoFHandler<dim,spacedim> vertex_dof_handler(temp_triangulation);
  vertex_dof_handler.distribute_dofs(vertex_fe);
  Vector<double> vertex_euler_vector(vertex_dof_handler.n_dofs());

  std::vector<Point<spacedim> > original_vertices = triangulation.get_vertices();
  std::vector<bool> vertex_touched(temp_triangulation.n_vertices(), false);

  typename DoFHandler<dim,spacedim>::active_cell_iterator
    cell = vertex_dof_handler.begin_active(),
    endc = vertex_dof_handler.end();
  for (; cell!=endc; ++cell)
    for (unsigned int v=0; v < 4; ++v)

By "4", you mean GeometryInfo<dim>::vertices_per_cell, right?

YES! Sloppy coding on my part
 
      if (!vertex_touched[cell->vertex_index(v)])
      {
        for (unsigned int k=0; k<spacedim; ++k)
        {
          vertex_euler_vector(cell->vertex_dof_index(v,k))
            = cell->vertex(v)(k) - original_vertices[cell->vertex_index(v)](k);
        }
        vertex_touched[cell->vertex_index(v)] = true;
      }


At first glance this looks ok. My biggest concern is that you've assumed that your old and new triangulations have the same vertex ordering. I'm inclined to think that this is ok because I don't think that Cubit renumbers mesh entities unless you ask it to, and nor does deal.II. But you should probably hand-check this on a small geometry.

This works on a small geometry, thankfully.  Even after mesh modifications from within cubit.  This would not work for remeshing from within cubit, unfortunately.  The complete remeshing case is interesting to me, but that's not what I'm aiming at right now.

Basically, I have built vertex_euler_vector to map the vertices of my original reference domain to new points which are more or less on my original surface.  This holds essentially Q1 information - only information at the nodes of the original reference domain.  I think that what I need to do now is interpolate vertex_euler_vector onto my original finite element space (which was Q2, but could be anything).  If that is what is needed, I would appreciate some advice on how to accomplish it!

I think that VectorTools::interpolate_to_different_mesh could do the job for you; it needs the same FE discretisation, which I think means that both DoFHandlers (both based on the original triangulation) must be structured like FESystem<dim>(FE_Q(n),dim). If I'm wrong on this then you could wrap your Euler vector in an FEFieldFunction and just use one of the VectorTools::interpolate functions - this may be slow if the triangulation is huge, but it would be a good first step to take.
 
I will try VectorTools::interpolate_to_different_mesh when I get to my desk this morning.  I tried the FEFieldFunction yesterday, but

FEFieldFunction<spacedim, DoFHandler<dim,spacedim> > fe_field_function(vector_dof_handler, euler_vector)


is very unhappy - it is not implemented for DoFHandlerType = DoFHandler<dim,spacedim> (and Mapping<dim,spacedim>), rather it is implemented for dim=spacedim problems only.  It's not immediately clear to me why it is only implemented for dim=spacedim, but it is also not immediately clear to me how to make that implementation change - perhaps I will just try to make the obvious changes to that function and see what happens.


Does any of this sound reasonable?

Well, none of it sounds crazy :-) 

Well, it feels a little crazy!!

Thanks so much for your help J-P, and Oded, and Wolfgang (if any of you are still reading this anymore).  We're almost there, and all of this code is yours, so hopefully someone else will find it useful.

Tom
export_codim_1_to_ucd.py

Jean-Paul Pelteret

unread,
Dec 1, 2016, 5:27:35 PM
to deal.II User Group
Hi Tom,
 
J-P, you (and the entire dealii community) are a paragon of encouragement, and I appreciate that very much.  

Thank you, your kind words do mean a lot!
 
here's what I came up with yesterday

That's a handy piece of code to keep around. Thanks for sharing it!

This works on a small geometry, thankfully.  Even after mesh modifications from within cubit.  This would not work for remeshing from within cubit, unfortunately.  The complete remeshing case is interesting to me, but that's not what I'm aimed at right now.

It's great! Yeah, a complete remesh is an entirely different story, but if you have a look in the documentation of FEFieldFunction, these two lines in particular, you will be happy to find that this *might* not be too difficult to implement either.
 
get cubit to export codimension 1 surfaces using its export commands.  The output is always flattened in the sense that the z-coordinates of vertices are stripped.  Perhaps your code snippet will help me by setting the dimension to 3.  For now this works nicely, and cubit is slowly responding to my email.
As well, I choose ucd because GridIn::read_abaqus() doesn't like to find that z-vertex from standard abaqus .inp files for codimension 1 surfaces, but read_ucd() works as expected.)

I keep on forgetting that you're working in codim 1. Yes, GridIn::read_abaqus will definitely not cater for that case as it doesn't distinguish between dim and spacedim, but it might be an easy fix.
 
I will try VectorTools::interpolate_to_different_mesh when I get to my desk this morning.  I tried the FEFieldFunction yesterday, but

FEFieldFunction<spacedim, DoFHandler<dim,spacedim> > fe_field_function(vector_dof_handler, euler_vector)


is very unhappy - it is not implemented for DoFHandlerType = DoFHandler<dim,spacedim> (and Mapping<dim,spacedim>), rather it is implemented for dim=spacedim problems only. 

I'm not sure if there's any specific reason as to why it's only templatised and instantiated for the codim 0 case, but it's an enhancement that we could look into making. This class makes heavy use of the GridTools::find_active_cell_around_point function, and *perhaps* (pure speculation) it may not be sufficiently robust for codim 1. Perhaps someone more knowledgeable on the inner functioning / use of this class would like to comment on this point?

Cheers,
J-P 