Good eye re: MIN/MAX, I've fixed that now.
With weights in AVG, my thought was that it would be good to allow a weighted-average calculation, where one set counts more heavily than the other(s).
For example, if sets represent population groups, and members represent attributes (already reduced to a single score per attribute per group), then using a weighted AVG function where weight equals population size makes sense. Specifically, imagine that each set represents a country, and each member represents liters per year per capita consumption of a particular alcoholic beverage. Then you could use AVG, weighted by country population to calculate beverage popularity by continent or worldwide.
If weight is applied only to the sum component of AVG, and not to the count component, I can't think of any realistic application.
For the use case where you might want to negate the scores of a set and then average it with another set, you could accomplish it in two passes (unary SUM with negative weight, then AVG).
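For instance, with hypothetical set names a and b, the two passes might look like this (AGGREGATE AVG assumes the patch under discussion; the destination key names are made up):

```shell
# Pass 1: unary "union" of a with weight -1 negates every score
# (SUM is the default aggregate for a single input set).
redis-cli ZUNIONSTORE neg_a 1 a WEIGHTS -1
# Pass 2: average the negated set with b.
redis-cli ZUNIONSTORE out 2 neg_a b AGGREGATE AVG
```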
Negative (and zero) weights in AVG do make it possible to cause a division by zero, but that case is already somewhat handled in my code (though perhaps it should result in +inf/-inf/0 instead of just 0; fixed now). I'm not sure what you would use negative weights for in this calculation, but I see no harm in allowing them.
As for COUNT, since weighting is fairly trivial to support (float addition instead of an integer increment), why not allow weights to have an effect? The default weight of 1.0 means the default behavior is still what you'd expect, while leaving open the possibility of other calculations if the user happens to need them.
I've now had a chance to run some benchmarks on my code vs the antirez/redis/unstable branch, and here are the results:
$ # prep the DB
$ ./redis-cli --eval /dev/stdin << 'EOF'
> for n=1,1000000 do
> for k=1,3 do
> if(math.random(3) ~= 3) then
> redis.call('ZADD', 'set'..k, math.random(), 'm'..n)
> end
> end
> end
> EOF
(nil)
$ redis-cli zcount set1 -inf +inf
(integer) 666431
$ redis-cli zcount set2 -inf +inf
(integer) 666122
$ redis-cli zcount set3 -inf +inf
(integer) 666756
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate min
(integer) 962659
real 4.21
$ # tests against p120ph37/redis/unstable
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate min
(integer) 962659
real 4.71
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate max
(integer) 962659
real 4.71
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate sum
(integer) 962659
real 5.26
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate count
(integer) 962659
real 8.65
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate avg
(integer) 962659
real 5.37
$ # tests against antirez/redis/unstable
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate min
(integer) 962659
real 4.71
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate max
(integer) 962659
real 4.70
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate sum
(integer) 962659
real 5.25
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate count
(error) ERR syntax error
real 0.01
$ time -p redis-cli zunionstore out 3 set1 set2 set3 aggregate avg
(error) ERR syntax error
real 0.00
This shows negligible performance impact for a union of three sets of roughly 2/3 million members each.
The longer time for COUNT is because the resulting set has very low score cardinality (only a few distinct score values), so writing to it is less efficient.
The faster first run of MIN is because no time was needed to delete the not-yet-populated "out" set.
I've now also incorporated your unit tests, and added a few more to cover additional cases (SUM, weighted aggregations, and the ZINTER counterparts).
-Aaron
P.S.
I noticed that you compulsively edited the "*target = *target + val;" line too. :-)