Ok, this is unprompted, but with all the discussions floating around in the cryo-EM community recently, I thought it was worth posting EMAN2's 'official' opinion on the topic here. These concepts will be included in the paper we're preparing now on EMAN2.1's refinement strategy.
There have been a lot of new buzzwords flying around the cryo-EM community recently about resolution measures and making sure you don't over-refine your maps. First there was "gold standard" refinement (or FSC), which simply means computing the FSC as it was originally intended to be computed: with two 'completely' independent maps. That is, you split the data set, refine the two halves completely independently of each other using independently generated starting models, then measure the resolution from the FSC between the resulting maps. With this more robust refinement/FSC procedure, you can use Henderson's 0.143 resolution cutoff (which I maintain should actually be 0.2, due to a math error in the 0.143 paper, but it has so little effect that I'm not pushing the issue). This is the normal refinement procedure in EMAN2.1, and it has additional benefits, including automatic Wiener filtration of the final maps at their 'true' resolution.
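For anyone who hasn't computed one by hand, here is a minimal numpy sketch of what an FSC curve actually is: the normalized cross-correlation between two half-maps, computed shell by shell in Fourier space, with the resolution read off where the curve first drops below the threshold. This is illustrative code, not EMAN2's implementation (in EMAN2 you would use the built-in tools); all function names here are mine, and it assumes cubic maps.

```python
import numpy as np

def fsc(half1, half2, voxel_size=1.0):
    """Fourier Shell Correlation between two equally sized cubic maps.
    Returns (spatial_frequency_axis, fsc_curve)."""
    f1 = np.fft.fftn(half1)
    f2 = np.fft.fftn(half2)
    n = half1.shape[0]
    freqs = np.fft.fftfreq(n)
    gx, gy, gz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    r = np.sqrt(gx**2 + gy**2 + gz**2)
    shells = np.round(r * n).astype(int)          # integer shell index per voxel
    nshells = n // 2
    num = np.zeros(nshells)
    d1 = np.zeros(nshells)
    d2 = np.zeros(nshells)
    for s in range(nshells):
        m = shells == s
        num[s] = np.real(np.sum(f1[m] * np.conj(f2[m])))  # cross term
        d1[s] = np.sum(np.abs(f1[m]) ** 2)                # normalizations
        d2[s] = np.sum(np.abs(f2[m]) ** 2)
    s_axis = np.arange(nshells) / (n * voxel_size)        # spatial freq of each shell
    return s_axis, num / np.sqrt(d1 * d2)

def resolution_at(s_axis, curve, thresh=0.143):
    """Resolution where the curve first falls below the threshold (or None)."""
    below = np.where(curve < thresh)[0]
    return 1.0 / s_axis[below[0]] if len(below) else None
```

Identical maps give FSC = 1 in every shell; two independent noise maps hover around zero, which is the behavior the gold-standard procedure is designed to guarantee at high resolution.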
However, this doesn't completely satisfy everyone. There is still some question about what 'completely' really means above. RELION, for example, typically uses starting maps which aren't completely independent, but which have been low-pass filtered to zero amplitude at something like 60 Å resolution; i.e., the starting map gives the refinement a general shape, but contributes no high-resolution information, which must be derived entirely from the data. EMAN2 generally uses a starting map whose phases have been randomized beyond 1.5 - 2x the target resolution; i.e., if you are targeting 10 Å resolution, you would randomize the phases of the starting model beyond, say, 20 Å. Not only does this mean there are no correlations at high resolution, it actually biases the two starting maps away from each other, forcing the refinement to get rid of the bad phases before it can fill in the good ones. In practice, both methods seem to work quite well.
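To make "phase randomization beyond a cutoff" concrete, here is a numpy sketch: keep every Fourier amplitude, but replace the phases of all coefficients past the cutoff frequency with uniform random values. Again, this is illustrative only (EMAN2 has its own processors for this), the function name is mine, and it assumes a cubic map with the resolution cutoff given in the same length units as the voxel size.

```python
import numpy as np

def randomize_phases(vol, cutoff_res, voxel_size=1.0, seed=0):
    """Randomize Fourier phases beyond cutoff_res, keeping amplitudes intact."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftn(vol)                          # half-spectrum of a real map
    n = vol.shape[0]
    fx = np.fft.fftfreq(n, d=voxel_size)
    fz = np.fft.rfftfreq(n, d=voxel_size)
    gx, gy, gz = np.meshgrid(fx, fx, fz, indexing="ij")
    r = np.sqrt(gx**2 + gy**2 + gz**2)
    beyond = r > 1.0 / cutoff_res                  # spatial freqs past the cutoff
    phases = rng.uniform(0.0, 2.0 * np.pi, size=f.shape)
    f = np.where(beyond, np.abs(f) * np.exp(1j * phases), f)
    return np.fft.irfftn(f, s=vol.shape)           # back to a real-space map
```

Using the real-to-complex transforms (rfftn/irfftn) keeps the output real; the low-resolution coefficients pass through untouched, which is exactly why the two starting models still share a general shape while disagreeing completely at high resolution.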
However, there are other issues. One almost always imposes a mask of some sort on the reconstruction before computing an FSC. If these masks are not also independent (and low resolution), they can lead to false correlations between the maps. EMAN2.1 uses an independently and automatically generated mask for each of the two output maps, which is additionally very 'soft' (Gaussian edge), so I maintain that this should be fine, and should not exaggerate resolution.
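The 'soft' part is the key point, so here is a toy numpy version of the idea: derive a binary support from a smoothed copy of the map, then blur that binary mask with a Gaussian so the edge rolls off gradually instead of cutting hard. This only mimics the concept of an automask with a Gaussian edge; EMAN2.1's actual automasking is more sophisticated, and all names and parameter values below are mine.

```python
import numpy as np

def gaussian_blur(vol, sigma):
    """Gaussian smoothing of a cubic map via multiplication in Fourier space."""
    n = vol.shape[0]
    freqs = np.fft.fftfreq(n)
    gx, gy, gz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    kernel = np.exp(-2.0 * (np.pi * sigma) ** 2 * (gx**2 + gy**2 + gz**2))
    return np.fft.ifftn(np.fft.fftn(vol) * kernel).real

def soft_mask(vol, threshold, edge_sigma=3.0):
    """Binary support from a smoothed map, softened with a Gaussian edge."""
    hard = (gaussian_blur(vol, 2.0) > threshold).astype(float)  # hard support
    return np.clip(gaussian_blur(hard, edge_sigma), 0.0, 1.0)   # soft edge in [0,1]
```

A hard (binary) mask has sharp edges whose Fourier ripples extend to all resolutions and correlate between the two half-maps; the Gaussian edge confines the mask's own contribution to low resolution, which is why it doesn't inflate the FSC.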
Step forward in time a bit, and you'll find that Richard Henderson published an additional paper suggesting a method for proving that your refinements really are independent, and for adjusting your resolution if it turns out they were not. The idea is straightforward: take your raw data, do a normal (in EMAN2.1, gold-standard) refinement, then phase-randomize the particle data beyond some target resolution. This resolution should be worse than the resolution your normal refinement claimed to achieve. Since the particle data no longer contains any actual signal past the cutoff, if you re-refine it in exactly the same way, the FSC should fall rapidly at exactly the resolution where the phase randomization begins. If it does not, your refinement procedure has some hidden correlation between the even/odd maps, and your resolution was over-estimated.
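The resolution adjustment can be written compactly. One common form of the correction from this noise-substitution approach estimates the true FSC from the normal refinement's curve (fsc_t) and the phase-randomized re-refinement's curve (fsc_n), applied beyond the randomization cutoff: any correlation the noise-substituted data still shows is treated as spurious and divided out. A sketch, with variable names of my own choosing:

```python
import numpy as np

def corrected_fsc(fsc_t, fsc_n, cutoff_shell):
    """Beyond the randomization cutoff: FSC_true = (FSC_t - FSC_n) / (1 - FSC_n)."""
    fsc_t = np.asarray(fsc_t, dtype=float)
    fsc_n = np.asarray(fsc_n, dtype=float)
    out = fsc_t.copy()
    hi = np.arange(len(fsc_t)) >= cutoff_shell    # shells past the randomization point
    out[hi] = (fsc_t[hi] - fsc_n[hi]) / (1.0 - fsc_n[hi])
    return out
```

If fsc_n is zero beyond the cutoff (the 'perfect' independent case), the correction leaves the curve untouched; any residual correlation in fsc_n pulls the corrected curve, and hence the reported resolution, down.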
Of course, simple interpolation effects mean the curve will never fall off perfectly to zero at the exact pixel where you filtered, but hopefully it is possible to get close. Attached you will see a plot of what happens when you do this in EMAN2.1. As you can see, the gold-standard refinement approaches the 'perfect' result of no correlation beyond the phase-randomized resolution, so I'm quite confident that the refinement is robust.
That doesn't mean you couldn't override the automatic options and impose a mask which causes resolutions to be over-estimated again, but the default refinements should generally be quite reliable. If you have any concerns about a particular case, it never hurts to try Richard's little test. It is straightforward to perform, and may give you more confidence in your map.