


Dear Ogzur,
Introductory comments:
In an LS-DYNA bumper crash analysis, "mass scaling" refers to a technique in which the software artificially adds mass to certain elements of the model to achieve a larger explicit time step. This allows faster simulation runtimes, especially when very small elements are present in the mesh, as is common in complex crash scenarios such as a bumper impact. However, it is crucial to monitor the added mass carefully to ensure the simulation results remain accurate and are not significantly altered by the artificial mass increase.
Key points about mass scaling in bumper crash analysis with LS-DYNA:
Purpose:
To increase the critical time step by adding mass to specific elements, usually where the mesh is very fine, enabling the analysis to run with larger time steps (and hence faster) without compromising stability.
Controlling Mass Scaling:
This is done through the *CONTROL_TIMESTEP card in LS-DYNA, particularly the DT2MS parameter, which determines how much mass is added to elements based on their computed time step.
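As a sketch (the value -1.0e-6 is purely illustrative, not a recommendation), conventional mass scaling is typically requested by setting DT2MS to a negative value whose magnitude is the desired minimum time step; mass is then added only to elements whose natural time step falls below |DT2MS|. Check the *CONTROL_TIMESTEP page of your version's keyword manual for the exact sign conventions, since a positive DT2MS behaves differently:

```
*CONTROL_TIMESTEP
$ DT2MS < 0: add mass only to elements whose time step is below |DT2MS|
$#  dtinit    tssfac      isdo    tslimt     dt2ms      lctm     erode     ms1st
       0.0       0.9         0       0.0  -1.0e-6         0         0         0
```

With this setting, the added mass reported in glstat and d3hsp should be monitored as described below.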
Potential Issues:
Accuracy Concerns: Adding too much mass can significantly affect the dynamic response of the bumper, potentially compromising the accuracy of the crash simulation.
Energy Conservation: Mass scaling can introduce artificial energy into the system, which needs to be carefully monitored.
Best Practices:
Monitor Added Mass: Always check the "GLSTAT" output file to monitor the total added mass throughout the simulation.
Sensitivity Analysis: Run multiple simulations with different mass scaling levels to assess the impact on the results and ensure the added mass is not significantly affecting the crash behavior.
Selective Mass Scaling: Use techniques that add mass only to specific areas of the model where it is most needed, minimizing the overall mass increase.
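As a sketch of limiting mass scaling to selected parts (the part ID 100 and set ID 5 are hypothetical, and the second-card field layout should be verified against your version's *CONTROL_TIMESTEP manual page), recent LS-DYNA versions provide the IMSCL flag on the optional second card; a negative value is typically interpreted as a part set ID so that only those parts undergo selective mass scaling:

```
*CONTROL_TIMESTEP
$#  dtinit    tssfac      isdo    tslimt     dt2ms      lctm     erode     ms1st
       0.0       0.9         0       0.0  -1.0e-6         0         0         0
$ IMSCL = -5: apply selective mass scaling only to parts in part set 5 (assumed syntax)
$#  dt2msf   dt2mslc     imscl
       0.0         0        -5
*SET_PART_LIST
$#     sid
         5
$#    pid1
       100
```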
-------------------------------------------
Several other mass-scaling presentations:
https://skill-lync.com/student-projects/week-8-mass-scaling-88
https://skill-lync.com/student-projects/week-8-mass-scaling-74
https://www.dynasupport.com/howtos/general/mass-scaling
https://blog.d3view.com/overview-of-mass-scaling/
-------------------------------------------
comment
If possible, it is always wise to run your model once without mass scaling and compare critical results to quantify the effects of mass scaling.
note
Please use good engineering judgment when using mass scaling. Adding mass can change the physics of the problem. You need to evaluate how much of an increase you can allow without altering the intent of your simulation. Remember that increasing the mass will add kinetic energy to the simulation; usually, you would like to keep that to a minimum.
To monitor the changes in mass, LS-DYNA prints out the added mass and the percentage increase, as shown here:
problem cycle = 49960
time = 2.9999E-02
added mass = 8.1457E-03
percentage increase = 2.6333E-01
The percentage increase provides a useful metric for judging the quality of your simulation (in the example above, about 0.26%). As initially indicated, the allowable increase is a matter of good engineering judgment: allow only as much added mass as will not significantly alter the solution results. In addition, you can use the change in kinetic energy from the glstat or matsum output files as supporting guidance.
You can validate your results by running selected study problems with and without mass scaling. The allowable percentage increase is usually problem dependent, following the guidelines given above.
note
Anders Jernberg of ERAB provided this very nice presentation regarding termination time and the possible role of mass scaling:
The first rule is that you wish to set the termination time as low as possible to reduce simulation time. There are basically two reasons why the termination time takes on a certain value.
(1) You might want to simulate some short (in time) physical phenomenon where dynamic effects are essential for the behavior, like a car crash, hitting a golf ball, or whatever. Here you have to estimate how long the simulation must run to capture the behavior you wish to analyze. That estimate sets the termination time.
(2) You wish to perform an analysis where you do not want inertia effects to affect the results, like applying a static load to a structure. If you run the explicit solver and apply the load too fast, you will get inertia effects that change the structural response; apply it fast enough and it effectively becomes an impact simulation. I believe the best way to apply a load for (explicit) static analysis is to ramp it up to the final load using a half-sine shape for the load curve. The ramp-up duration needed to keep the kinetic energy reasonably small compared to the internal energy correlates with the eigenfrequency of the system. A minimum reasonable ramp-up time for quasi-static analysis is 1.5-2.0 times the natural period (1/frequency) of the system. That sets the termination time in this case.
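As an illustration of the half-sine ramp described above (the peak load of 1000.0, the ramp time of 0.02, the hold time of 0.1, and the curve ID 200 are all hypothetical placeholders), the load curve can be defined by sampling F*sin(pi*t/(2*t_ramp)) at a few points up to t_ramp and holding the peak thereafter:

```
*DEFINE_CURVE
$ Half-sine ramp to peak load over t_ramp = 0.02, then hold (illustrative values)
$#    lcid      sidr       sfa       sfo      offa      offo
       200         0       1.0       1.0       0.0       0.0
$#                 a1                  o1
                  0.0                 0.0
                0.005               382.7
                0.010               707.1
                0.015               923.9
                0.020              1000.0
                0.100              1000.0
```

In practice you would sample the half sine more finely, or use a function-based curve definition if your LS-DYNA version supports it.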
You can reduce the CPU time by specifying mass scaling. For quasi-static analysis, I believe selective mass scaling is superior. It will not let you set a shorter termination time, but the time step can be larger.
simple examples
A rod of steel is forged between two dies. The billet upset problem is a measure of friction under forming conditions. Download is available in the download section of this document.
http://www.dynaexamples.com/examples-manual/control/timestep
This problem includes three tools (a punch, a binder, and a die) and a blank to be formed. The blank is deep drawn by the punch while the binder and die hold the blank edges and help prevent wrinkling. During the process, adaptivity is employed to refine the mesh of the blank to improve accuracy.
http://www.dynaexamples.com/examples-manual/control/adaptive-1
-------------------------------------------
Sincerely,
James M. Kennedy
KBS2 Inc.
March 3, 2025
--
You received this message because you are subscribed to the Google Groups "LS-DYNA2" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
ls-dyna2+u...@googlegroups.com.
To view this discussion visit
https://groups.google.com/d/msgid/ls-dyna2/CAAyTUifu_QPeGqOUELX02s%3DAj%2BF%2BF-be%2B%2B4hbA3c_UL1XfTDjw%40mail.gmail.com.
My guess is that when you omit mass scaling, your initial time steps are too large.
I suggest using a time step scale factor (TSSFAC) via a *DEFINE_CURVE, with a much reduced TSSFAC initially that increases later in the simulation; see *CONTROL_TIMESTEP.
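As a sketch of this suggestion (the curve ID 101 and all times and scale factors are illustrative), recent LS-DYNA versions interpret a negative TSSFAC as a load curve ID defining the scale factor as a function of time; verify this convention on the *CONTROL_TIMESTEP page of your version's manual:

```
*CONTROL_TIMESTEP
$ TSSFAC = -101: scale factor taken from load curve 101 vs. time (assumed syntax)
$#  dtinit    tssfac      isdo    tslimt     dt2ms      lctm     erode     ms1st
       0.0    -101.0         0       0.0       0.0         0         0         0
*DEFINE_CURVE
$ Small scale factor early (to limit initial contact penetration), larger later
$#    lcid      sidr       sfa       sfo      offa      offo
       101         0       1.0       1.0       0.0       0.0
$#                 a1                  o1
                  0.0                0.10
               1.0e-4                0.10
               5.0e-4                0.90
                  1.0                0.90
```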
BTW – anytime a negative Jacobian occurs, you should end the simulation. In the old days, LS-DYNA would terminate when a negative Jacobian occurred, but "users" did not like their runs terminating, so LSTC implemented erosion of the element – two wrongs do not make a right.
--len
1\ TSSFAC – It does not make sense to me to use TSSFAC with mass scaling – two ways of “controlling” the time step is no way.
2\ In general, mass scaling should be avoided – there are a few cases where it makes sense, but typically “users” want to decrease run time, which is NOT a good reason to use mass scaling.
3\ My guess as to what is happening in your simulation is that the initial time step is too large relative to the projectile speed, and significant penetration occurs very early in the simulation. Once this penetration occurs, LS-DYNA struggles to recover – apparently not successfully.
To avoid this significant early penetration, a smaller time step is required so that LS-DYNA can deal with a much smaller initial penetration. Subsequently, a larger time step may be possible; thus my suggestion to use TSSFAC with a *DEFINE_CURVE of increasing values.
4\ The optimum/accurate time step is set by the so-called Courant condition, i.e., the time it takes an elastic wave to transit the element. For example, for steel (E of roughly 210 GPa, density of roughly 7850 kg/m^3) the elastic wave speed is sqrt(E/density), about 5200 m/s, so a 5 mm element has a Courant time step of roughly 1 microsecond. Time steps smaller than the Courant value, e.g., those achieved nominally with the default TSSFAC = 0.9, induce some time integration error: the smaller the time step relative to the Courant condition, the larger the numerical integration error.
Stability and convergence depend on more than just time step, e.g. material models and mesh sizes.
5\ Much like mass scaling, element erosion should, in general, be avoided. In the case where one or a few elements are causing a problem, it may be justified, in an engineering sense (not a mathematical sense), to erode those elements.