My feeling is: I'd rather not let one case dictate the function signature, if varargs feel more elegant for the majority of call sites.
### (skip to the next ###, when skimming)
Let me introduce my motivating use case:
I'm creating a DSL for Java source code generation.
There is a DSL entity, `(defn field [& {:keys [modifiers type name init]}] ...)`, which is used to emit class members and local variables.
e.g. `(field :modifiers ["public" "static"] :type "int" :name "memberOne")` becomes `"public static int memberOne;"` during emission.
You can see that an end user of the DSL might use that a lot, and I'd rather not make them write an additional set of curlies there. But of course, the `field` entity is also used in a few higher-level constructs, like generating a builder pattern. There I need to pass a map, but that call is internal to the library.
So I have two choices there:
- have the function take a map and create a stub for end users => not very DRY
- use apply-kw
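For readers who haven't followed the thread, `apply-kw` might be sketched roughly like this (the exact signature is an assumption on my part; it's a proposal, not part of `clojure.core`):

```clojure
;; Sketch of the proposed apply-kw (NOT in clojure.core): like apply,
;; but the final argument is a map whose entries are spread out as
;; keyword arguments onto f.
(defn apply-kw
  [f & args]
  (apply f (concat (butlast args) (mapcat identity (last args)))))

;; usage against a keyword-args function:
(defn field [& {:keys [type name]}] (str type " " name ";"))
(apply-kw field {:type "int" :name "x"})  ;; same as (field :type "int" :name "x")
```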
Needless to say I went with the third choice:
- create a macro `defkwfn` that generates a polyadic (multi-arity) function, which can take either a single map argument or keyword args
which is actually the first choice, only with the function and the stub collapsed into one fn.
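Hand-expanded, the function such a `defkwfn` might generate could look like this (a minimal sketch of the idea, assuming my own expansion rather than the author's actual macro output):

```clojure
(require '[clojure.string :as string])

;; Sketch of a defkwfn-style expansion: the single-argument arity
;; accepts an options map (for internal callers), while the variadic
;; arity accepts keyword args (for end users) and funnels them into
;; the map arity.
(defn field
  ([{:keys [modifiers type name init]}]   ; internal callers pass a map
   (str (string/join " " (concat modifiers [type name]))
        (when init (str " = " init))
        ";"))
  ([k v & kvs]                            ; end users write keyword args
   (field (apply hash-map k v kvs))))
```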
I don't expect that `defkwfn` macro to show up in core anytime soon ;-)
### (skip blurb to here)
I feel core is appropriate, because `apply-kw` is complementary to that nifty kw-arg destructuring syntax we received (not so) recently.
We've got `apply` as a complement to regular varargs.
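Concretely, the symmetry argument in miniature (a sketch; `apply-kw` is the proposal, not an existing core function):

```clojure
;; apply is the complement of positional varargs: a collection can
;; stand in for the spread-out arguments.
(defn sum [& xs] (reduce + xs))
(sum 1 2 3)          ;; => 6
(apply sum [1 2 3])  ;; => 6, same call with the args in a collection

;; for keyword varargs there is no built-in counterpart that lets a
;; map stand in the same way; that gap is what apply-kw would fill:
(defn field [& {:keys [type name]}] (str type " " name ";"))
(field :type "int" :name "x")                ;; direct keyword-arg call
;; (apply-kw field {:type "int" :name "x"})  ;; the proposed complement
```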
OTOH, `dissoc-in` is still in incubator, so I understand that symmetry and conceptual completeness are not the only relevant factors.
-S
To be clear: I see _a lot_ of value in having a small core, but I also see value in symmetry and in having the basic building blocks there when you need them, especially those whose absence pushes people to change their APIs.
Also: functional, algorithmic code doesn't rot.
kind regards