Though these are math people, I think that relating the model to physical behaviors will increase their understanding by showing how these behaviors reduce to properties of a mathematical model. One option is to follow the historical path, starting from the first LEM formulation and showing how each modification adds physics, step by step.
There can be a lesson on LEM, one or two on ODT (maybe DPE the first day, see below), and one (or part of the last lesson) on coupling of either to 3D solvers (RANS or LES).
LEM:
In Part 1, the map is just a flip, with no compression, creating discontinuities at each end. Spectrally, this is equivalent to transferring fluctuations directly to wavenumber infinity, i.e., the opposite of a local cascade. There is only one map size, the integral scale, and only one parameter, Pe. Yet the distinction between diffusion and advection leads to a useful representation of plume dispersion regimes.
In Part 2, the eddy-size distribution is introduced, so transport now scales properly with eddy size, but the cascade is still nonlocal because the simple flip is still used. However, the model now has Re as well as Sc and is able to capture the Sc dependence (air vs. water) of planar shear-layer mixing and features of round-jet mixing (Part 3), notably differential diffusion effects. Showing Part 3 figs. 14 and 15 can illustrate how a simple mechanism captured by this formulation can lead to subtle features of measured properties.
The triplet map is not introduced until Part 4. Its benefits are best illustrated in Part 6, where the scalar inertial and viscous-convective spectral scalings and cutoffs (the Kolmogorov, Batchelor, and Obukhov-Corrsin scales) are captured for the various Sc ranges (somewhat inaccurately for low Sc, which illustrates a limitation of representing an eddy as an instantaneous map). As time permits, you can explain how this formulation captures non-trivial behaviors such as the Re dependence of differential diffusion [A. R. Kerstein, M. A. Cremer, and P. A. McMurtry, "Scaling Properties of Differential Molecular Diffusion Effects in Turbulence," Phys. Fluids 7, 1999 (1995)] and leads to physics discovery, such as the weird spectral properties of far-field mixing in pipe flow [predictions: A. R. Kerstein and P. A. McMurtry, "Low-Wave-Number Statistics of Randomly Advected Passive Scalars," Phys. Rev. E 50, 2057 (1994); experimental confirmation and exploration: J. E. Guilkey, A. R. Kerstein, P. A. McMurtry, and J. C. Klewicki, "Mixing Mechanisms in Turbulent Pipe Flow," Phys. Fluids 9, 717 (1997); J. E. Guilkey, A. R. Kerstein, P. A. McMurtry, and J. C. Klewicki, "Long-Tailed Probability Distributions in Turbulent-Pipe-Flow Mixing," Phys. Rev. E 56, 1753 (1997)].
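To make the Part 1 flip map concrete for the students, here is a minimal numerical sketch (the grid size, segment location, and linear test profile are my own illustrative choices, not anything from the book): reversing a segment of a smooth scalar profile conserves the scalar exactly but creates discontinuities at both segment ends, which is the spectrally nonlocal transfer described above.

```python
import numpy as np

def flip_map(field, i0, n):
    """Apply the simple block-inversion ('flip') map to field[i0:i0+n].

    The segment is reversed, which conserves the scalar but introduces
    discontinuities at both segment ends -- the opposite of a local cascade."""
    out = field.copy()
    out[i0:i0 + n] = out[i0:i0 + n][::-1]
    return out

# A smooth linear scalar profile; the flip creates jumps at the segment edges.
x = np.linspace(0.0, 1.0, 100)
phi = x.copy()
phi2 = flip_map(phi, 30, 40)   # flip the middle 40 cells
print(phi2[29], phi2[30])      # large jump at the left edge of the flipped segment
```

The jump at the segment boundary is where the fluctuation energy goes straight to high wavenumbers.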
The simplest justification of the triplet map is that it appears to be the most local map possible in 1D, in the sense of minimizing the maximum compression factor, subject to the conservation and continuity requirements. I don't have a proof that no 1D map satisfying these requirements has a compression factor everywhere below 3. In fact, I don't have a proof that the triplet map is the unique 1D map whose maximum compression factor is 3, though I believe it based on trial and error. Perhaps the math students can come up with a proof (it would be publishable).
We don't necessarily need the most local map. The triplet map with unequal-size daughter images might capture some properties, such as intermittency, better, but this introduces at least one new parameter in the map definition, with no compelling benefit as far as we presently know.
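For illustration, a standard discrete realization of the triplet map (the grid size and linear test profile here are arbitrary choices of mine) replaces a segment of 3k cells by three compressed images of itself, each formed from every third cell, with the middle image reversed. Because this is a permutation of cells, all scalar moments are conserved, and the maximum compression factor is 3:

```python
import numpy as np

def triplet_map(field, i0, n):
    """Discrete triplet map on field[i0:i0+n] (n must be divisible by 3).

    The segment is replaced by three copies of itself, each compressed by a
    factor of 3 (every third cell), with the middle copy reversed.  This is
    a permutation of the cells, so it conserves all moments of the scalar,
    and unlike the flip map it introduces no discontinuities."""
    assert n % 3 == 0
    out = field.copy()
    seg = out[i0:i0 + n]
    out[i0:i0 + n] = np.concatenate((seg[0::3], seg[1::3][::-1], seg[2::3]))
    return out

x = np.linspace(0.0, 1.0, 99)
phi = x.copy()                      # smooth linear profile
phi2 = triplet_map(phi, 0, 99)      # mapped profile remains continuous
```

Plotting `phi2` against `x` shows the characteristic three-segment sawtooth that replaces a linear ramp, with the local gradient tripled everywhere.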
ODT:
The simplest formulation is density profile evolution (DPE), applied in the 1999 JFM paper to Rayleigh and penetrative convection, and in A. R. Kerstein, "One-Dimensional Turbulence - Part 2. Staircases in Double-Diffusive Convection," Dyn. Atmos. Oceans 30, 25 (1999) to the thermohaline staircase. The latter and Rayleigh convection are good examples of getting significant results from a simple formulation. Also, DPE highlights the energy basis of map selection, which sets the stage for the role of the velocity profiles. Your smoke cloud results, especially the temperature vs. dextrose results, also illustrate this. (You could have used DPE to get basically the same results.)
DPE does not conserve energy. It is the analog of zeroth-order (production = dissipation) RANS modeling in that any TKE resulting from the convectively driven maps is assumed to be dissipated instantly. ODT, with velocity profiles and kernels to enforce energy conservation, is analogous to first-order RANS if there is one velocity component (production, transport, dissipation) or, in one respect, to second-order RANS if there are three (because the kernels capture return to isotropy due to pressure scrambling, which is also represented in second-order RANS).
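The energy basis of map selection can be sketched numerically (a toy discretization; the linear density profiles, the potential-energy bookkeeping, and the whole-domain segment are my own illustrative assumptions, not the formulation in the papers): a candidate triplet map is evaluated by the potential-energy change it would produce on the density profile. For an unstably stratified profile the map releases energy, which is the convective driving in DPE; in DPE that released energy is treated as instantly dissipated.

```python
import numpy as np

G = 9.81  # gravitational acceleration

def triplet_map_segment(seg):
    # three compressed copies of the segment, middle one reversed
    return np.concatenate((seg[0::3], seg[1::3][::-1], seg[2::3]))

def pe_change(rho, z, i0, n):
    """Potential-energy change (per unit area) of a candidate triplet map
    applied to density profile rho(z) over cells i0..i0+n-1.

    Negative means the map releases potential energy (convectively
    unstable stratification); positive means it would cost energy."""
    dz = z[1] - z[0]
    seg_z = z[i0:i0 + n]
    before = G * np.sum(rho[i0:i0 + n] * seg_z) * dz
    after = G * np.sum(triplet_map_segment(rho[i0:i0 + n]) * seg_z) * dz
    return after - before

z = np.linspace(0.0, 1.0, 99)
rho_unstable = 1.0 + z        # heavy fluid on top
rho_stable = 2.0 - z          # light fluid on top
print(pe_change(rho_unstable, z, 0, 99))   # negative: energy released
print(pe_change(rho_stable, z, 0, 99))     # positive: energy required
```

The signs follow from the rearrangement inequality: any non-identity permutation of a monotone profile moves it toward the mixed state, lowering the potential energy of the unstable profile and raising that of the stable one.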
I suggest explaining some of the details of the sampling-and-acceptance eddy-selection algorithm so people have an idea of how the code works.
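As a rough sketch of what such an algorithm can look like (the candidate distributions, the toy rate function, and the constant majorant rate below are placeholder assumptions of mine, not the actual ODT eddy-rate expression): candidate eddies are drawn cheaply from an oversampling process and each is accepted with probability equal to its physical rate divided by the oversampling rate, i.e., thinning of a Poisson process.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_eddies(rate_fn, domain_len, l_min, l_max, t_end, oversample=10.0):
    """Sampling-and-acceptance (thinned-Poisson) eddy selection sketch.

    Candidates (time, location, size) are drawn from a cheap majorant
    process with total rate `oversample`; each is accepted with probability
    rate_fn(x0, l) / oversample (rate_fn is assumed bounded by oversample).
    Accepted events reproduce the target inhomogeneous Poisson process."""
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / oversample)        # next candidate time
        if t > t_end:
            break
        l = rng.uniform(l_min, l_max)                 # candidate eddy size
        x0 = rng.uniform(0.0, domain_len - l)         # candidate location
        if rng.uniform() < rate_fn(x0, l) / oversample:
            events.append((t, x0, l))                 # accept: apply map here
    return events

# toy rate favoring small eddies, bounded above by the oversampling rate
eddies = sample_eddies(lambda x0, l: 0.2 / l, 1.0, 0.05, 0.5, t_end=100.0)
```

In the real code the acceptance probability comes from the physically determined eddy rate (energy-based in ODT), and the oversampling rate is adapted so acceptance stays efficient without exceeding 1.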
Coupling to 3D solvers:
Things you can discuss include the constructed-PDF approach (chemistry closure using LEM tabulation), subgrid modeling for the level-set approach (your Ph.D. work and the Sc project), LES/LEM, LEM3D, which Sigurd and Torleif are coupling to RANS (it will also work for LES mixing closure), ODT near-wall closure of LES, ODTLES, and AME.
I hope these suggestions help; we can discuss this further if you wish.
Alan
-----Original Message-----
From: Heiko Schmidt [mailto:heis...@math.fu-berlin.de]
Sent: Friday, November 21, 2008 9:06 AM
To: Kerstein, Alan
Subject: (no subject)
Hi again,
one student wants to know more about the mapping and the sensitivity of results to changes in the type of the kernel.
In my first 5 lessons I focused on heterogeneous multiscale models in combustion and clouds (Metstroem). They have some ideas on LEM/ODT now, but want to go deeper.
I could spend 3-4 lessons on it.
Do you have suggestions for how to split it up? Should I use the chapters of your book as a guiding thread?
In which sequence?
Any hints are welcome. The guys are from math. Since they have seen some very nice results, the main interest is now in the stochastic process and why you use a triplet map and not some other rule.
Have a nice week-end
Heiko