This is correct. The only way to mutate an environment is to use $define! while evaluating an expression in that environment itself (not in a child of the environment), or to use $set! passing it a direct reference to the environment. The latter is semantically the same as evaluating a $define! in the passed environment; in fact the two are equipowerful (see the report for the definitions of $set! in terms of $define! and vice versa).
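For reference, the report's derivation of $set! from $define! runs roughly along these lines (a sketch from memory, not the exact text of the report):

($define! $set!
  ($vau (exp1 formals exp2) env
    (eval (list $define! formals
                (list (unwrap eval) exp2 env))
          (eval exp1 env))))

The trick is building a $define! expression by hand and eval'ing it in the target environment, with (unwrap eval) used so that exp2 is evaluated in the dynamic environment of the $set! call rather than in the target.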
The thing here (and something that trips up people coming from Scheme and almost any other language) is that environment mutation in Kernel is restricted to expressions holding a direct reference to the environment to be mutated (direct as in not through child environments). That's a design restriction made by John, and there are a number of reasons behind it, but ultimately it boils down to the stability of environments in the presence of the eval applicative.
If you let any expression mutate any ancestor of its dynamic environment, you have no way to reason about the bindings of an environment once an unknown or unanalyzable expression has been evaluated in one of its descendants. In the particular case of the standard environment, you would have no way of knowing whether any of the standard bindings is going to be changed (say, in a REPL), and so there would be no way to reason about programs, or even to think about compilation, in this scenario.
By restricting environment mutation to expressions that have an explicit reference to the environment to be mutated, you keep environments mutable (although with some inconvenience in some cases) while at the same time guaranteeing the stability of environments that aren't referenced from arbitrary code. In Kernel, a standard environment is a descendant of the core environment, to which no reference can exist (in a conforming Kernel implementation), so no mutation of the core environment can be performed. This means that this environment is stable, and references to it could in principle be compiled with no problem, regardless of any other code that may be evaluated in this or any other descendant of the core environment.
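To illustrate the restriction: evaluating a $define! in a child environment only creates or shadows a binding there, and the parent is untouched. A sketch, using make-environment and $remote-eval from the report:

($define! x 1)
($define! child (make-environment (get-current-environment)))
($remote-eval ($define! x 2) child)   ; binds x in child only
($remote-eval x child)                ; => 2
x                                     ; => 1, the parent binding is untouched

Without an explicit reference to the parent, no amount of code evaluated in child can change the parent's binding of x.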
As for the use of $set!, in general it's probably best to encapsulate its use (and the environment it mutates) in an applicative/operative.
In this case I would prefer:
(for-each ($let ((env (get-current-environment)))
            ($lambda (b) ($set! env a b) (display a)))
          (list 5))
And in general, I would encourage the use of mutators instead of making the environment visible:
($define! a 4)
($define! set-a! ($let ((env (get-current-environment)))
                   ($lambda (val) ($set! env a val))))
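With this, code can change a without ever seeing the environment it lives in (assuming the definitions above, evaluated in the same environment):

(set-a! 42)
(display a)   ; prints 42

Since $set! evaluates its last operand in the dynamic environment of the call, set-a! behaves like any ordinary applicative from the caller's point of view, while env stays encapsulated in the $let.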
Of course, these matters are still open to discussion, and Kernel (and klisp) are vehicles for experimenting with these and other ideas and concepts (like the matter of fexprs vs macros / explicit vs implicit evaluation, unlimited vs limited continuations, first-class objects everywhere, encapsulation, user-defined types, core applicative extensibility, etc).
As these issues are explored, it is to be expected that new patterns for dealing with these things will emerge and hopefully be elegantly expressible in Kernel. For example, many uses of environment mutation in Scheme can be replaced by mutation of an object of the box type (like a mutable pair, but with only one element instead of two), or transformed into the setter pattern shown above.
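For instance, with a box type (assuming the usual names box, unbox, and set-box!, as in klisp's box module), the pattern looks like:

($define! counter (box 0))
(set-box! counter (+ 1 (unbox counter)))
(unbox counter)   ; => 1

Here the mutable cell is a first-class value that can be passed around and encapsulated on its own, so no environment ever needs to be exposed to the code doing the mutation.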
Regards,
Andrés Navarro