It should also be noted that the comparison to bedmap is a special case. In all problems related to bedops/bedmap/closest-features, the one thing that affects wall time most is simple I/O, and bedmap is designed to minimize that largest of costs: a single bedmap call can compute any number of (often simple) statistics. In this one case, they compare an intersectBed call that counts overlapping elements against bedmap --count. That is the simplest and least efficient way to use bedmap (without --faster, and computing just one statistic). If you want more statistics, like --bases, --bases-uniq, --echo-overlap-size, and so on, all of them can be calculated by sweeping through the inputs a single time with bedmap. In the bedtools suite, each of these stats, where available (not all bedmap stats are available in bedtools), requires yet another full sweep through the inputs. The time to sort data cannot be ignored either. Their tool, in this case, requires sorted data; ours requires a (different) sorted order too, and our sort utility is still much faster. Fundamentally, our bedops -e is most like their intersectBed tool, though they fold a couple of smaller features from our bedmap into their intersectBed program (as in the count example).
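To make the single-sweep point concrete, here is a rough sketch (file names are hypothetical; both inputs are sort-bed sorted). One bedmap call emits several per-element statistics at once, where the bedtools approach needs a separate pass per statistic:

```
# One sweep, several statistics per reference element:
bedmap --echo --count --bases-uniq --echo-overlap-size ref.bed map.bed > stats.bed

# Rough bedtools analog: each statistic is another full pass over the inputs.
bedtools intersect -a ref.bed -b map.bed -c  > counts.bed    # overlap counts
bedtools intersect -a ref.bed -b map.bed -wo > overlaps.bed  # per-pair overlap sizes
```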
The two tool suites often do similar things, but we have fundamentally different philosophies. Our approach is to keep the number of tools small and to keep them orthogonal to each other. While both suites target the problem of very large-scale analysis, our tools have always been faster and more memory-efficient, though they've made some headway on some items over the last couple of years. We accept few input file formats (only BED-like and deeply compressed BED-like) and push back on the community to reconsider all of the formats in common use, while they accommodate a larger variety. We encourage you to sort data once, and we maintain that sorted order on downstream calls to make larger pipelines much more efficient. Output from any of our tools can be reused by any BEDOPS utility later on without re-sorting.
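A minimal sketch of the "sort once" idea (hypothetical file names):

```
sort-bed raw.bed > sorted.bed             # sort once, up front
bedops --merge sorted.bed > merged.bed    # output stays in sort-bed order
bedops --intersect merged.bed other-sorted.bed > common.bed   # no re-sorting needed
```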
We directly encourage clustered usage over chromosomes, and continue to believe this is the most meaningful partition of data sets in most cases. There is almost no performance difference between storing every chromosome in an individual file and using options like bedops --chrom chr17, yet keeping a single file is a lot less messy. Going a bit further: in a pipeline, you'd otherwise have to hardcode the chromosome names or use `find` in some way to gather all of the per-chromosome files (messy and inefficient). bedextract --list-chr <my-file.bed> gives you that same information with no assumptions (other than the usual sort order), and it's faster than a find system call would be.
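For example, a simple per-chromosome fan-out might look like this (my-file.bed is hypothetical; on a cluster you'd submit each iteration as a job instead of backgrounding it):

```
for chr in $(bedextract --list-chr my-file.bed); do
    bedops --chrom "$chr" --merge my-file.bed > "merged.$chr.bed" &
done
wait
```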
We often combine simple one-line *nix commands with our tools in pipelines, and we try to keep the tools orthogonal to those standard utilities in simple cases. This philosophy encourages users to learn about cut, paste, join, very simple awk statements, and so on. Not everyone wants to know about those things; some prefer to depend upon the nuances of a 'BED' tool suite. That works until you need to solve similar problems in a non-BED context, or need a calculation on BED data that is not built in. Honestly, we cater to programmers and not so much to biologists. Biologists are smart people, and many have learned how to use BEDOPS in their work, but in some cases I think BEDTools is probably better for people without a programming background and with little interest in such things.
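As one hypothetical example of that style, filtering bedmap output with awk and cut rather than asking for a built-in option:

```
# Keep reference elements overlapped by at least 10 map elements,
# then report just their ID column (column 4):
bedmap --echo --count --delim '\t' ref.bed map.bed \
    | awk -F'\t' '$NF >= 10' \
    | cut -f4
```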
The bedops tool is built from the ground up to accept any number of inputs at one time. We don't need a separate utility for a multi-file intersection, for example; our intersection option works with any number of files, and the same utility performs any set-like operation (not just a simple intersection) over any number of files. Consider what needs to be done with BEDTools to find the differences between 3, 4, or 5 input files. bedops -s file1.bed file2.bed file3.bed .... fileN.bed is just as efficient as merging that number of files.
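A sketch with hypothetical inputs:

```
bedops --intersect  a.bed b.bed c.bed d.bed > common-to-all.bed   # -i
bedops --symmdiff   a.bed b.bed c.bed d.bed > in-exactly-one.bed  # -s, as above
bedops --difference a.bed b.bed c.bed d.bed > only-in-a.bed       # -d
```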
I'm biased, of course. BEDTools offers many nice features, and I think what makes it particularly popular is its support for SAM/BAM. Since SAM can be converted to something BED-like with no loss of information (while the reverse is not true), it's tough for us to want to support it directly. In other words, BED-like is a more powerful representation than SAM, VCF, GFF, GTF, and so on. SAM requires no sorted order; BAM, while often sorted, is not required to be, and it needs an external index file when it is sorted. A compressed BED file (in our starch format) is also smaller than the corresponding BAM file. BAM has that 'quick lookup' mechanism that everyone seems to love, but which I claim is overhyped; a BED-like format could easily support the same thing, even in the general case of fully nested elements. Again, BED-like is more general, with little to no drawback compared to most formats. We require sorted inputs to our utilities to keep larger pipelines as efficient as they can be, and supporting every format under the bioinformatic sun does not play well with our "sort once" philosophy. That particular philosophy is not likely to change. We strive for very high efficiency, and we do a pretty good job when it comes to large data sets and pipelines.
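To illustrate the conversion-plus-compression route (assuming the BEDOPS conversion scripts are installed; file names are hypothetical):

```
bam2bed < reads.bam > reads.bed       # remaining SAM fields land in trailing columns
starch reads.bed > reads.starch       # compressed, typically smaller than the BAM
unstarch chr17 reads.starch | head    # per-chromosome extraction from a starch archive
```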
Shane