Profiling decoding is a complex topic. Here are some ideas on it:
Images compressed at lower quality decode faster -- so to profile decoding fairly, you first have to work out what constitutes the same quality level across codecs for your application. In my experience WebP lossy is relatively weaker at the high end of its quality range and relatively stronger at the low end, i.e., to match JPEG quality 85 you might need WebP lossy quality 90, but to match JPEG quality 30 you might be fine with WebP quality 20. These differences can have a significant effect on decoding speed.
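For instance, once you have picked a matched quality pair, a rough timing harness could look like the minimal sketch below. It uses Pillow (which must be built with WebP support), the 85/90 pairing suggested above, and a placeholder input file name -- substitute your own matched pair and test images:

```python
import io
import time
from PIL import Image  # requires Pillow built with WebP support

def time_decode(data, n=50):
    """Decode the in-memory image n times; return seconds per decode."""
    start = time.perf_counter()
    for _ in range(n):
        img = Image.open(io.BytesIO(data))
        img.load()  # force the actual decode; Image.open is lazy
    return (time.perf_counter() - start) / n

src = Image.open("test.png").convert("RGB")  # placeholder input

jpg_buf, webp_buf = io.BytesIO(), io.BytesIO()
src.save(jpg_buf, "JPEG", quality=85)   # JPEG quality 85 ...
src.save(webp_buf, "WEBP", quality=90)  # ... vs. the WebP quality that visually matches it

print("jpeg decode: %.2f ms" % (time_decode(jpg_buf.getvalue()) * 1000))
print("webp decode: %.2f ms" % (time_decode(webp_buf.getvalue()) * 1000))
```

In practice you would run this over a corpus of representative images rather than a single file, since decode speed varies a lot with image content.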
WebP offers two newer encoding modes: near lossless and delta palettization. Near-losslessly compressed images decode faster than full lossless and are often 30-40 % smaller, while the quality remains much higher than with WebP lossy. The more experimental 'delta palette' mode only accepts images without an alpha channel; its output is larger than WebP lossy, but typically has fewer compression artefacts and decodes about twice as fast. Delta palettization is likely not worth your time to test, but near lossless might be.
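If you want to try near lossless, cwebp exposes it via the -near_lossless flag (0..100, where lower values permit more pixel preprocessing and 100 means none). A quick size comparison against full lossless might look like this sketch; file names are placeholders, and passing -lossless explicitly alongside -near_lossless is just a safety net for older cwebp builds:

```python
import os
import subprocess

src = "test.png"  # placeholder input

# Full lossless as the baseline.
subprocess.run(["cwebp", "-lossless", src, "-o", "lossless.webp"], check=True)

# Near lossless: lower -near_lossless values allow more pixel adjustment;
# 60 is a commonly suggested middle setting.
subprocess.run(["cwebp", "-lossless", "-near_lossless", "60", src, "-o", "near.webp"],
               check=True)

for f in ("lossless.webp", "near.webp"):
    print(f, os.path.getsize(f), "bytes")
```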
Last, there is a new, really slow JPEG encoder for high-quality images: guetzli. When you compare WebP against JPEG, you might try guetzli -- it generates JPEGs that can be 30 % smaller than images of comparable quality produced by other encoders (such as mozjpeg and libjpeg). Also, guetzli generates non-progressive JPEGs, which decode really fast. Guetzli itself is hideously slow at encoding, but the resulting JPEGs decode about 30 % faster than non-progressive JPEGs of comparable quality generated by other means -- and possibly 2-3x faster than progressive JPEGs. Guetzli is available at
https://github.com/google/guetzli
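For a quick size comparison, the guetzli command line takes a PNG (or JPEG) in and writes a JPEG out. A minimal sketch follows; file names are placeholders, and note that the released tool refuses quality targets below 84:

```python
import os
import subprocess
from PIL import Image

src = "test.png"  # placeholder input

# Guetzli reads a PNG or JPEG and writes a JPEG; encoding is very slow,
# so expect this to take seconds even for modest image sizes.
subprocess.run(["guetzli", "--quality", "90", src, "guetzli.jpg"], check=True)

# A libjpeg-based baseline at a nominally similar quality, via Pillow.
Image.open(src).convert("RGB").save("baseline.jpg", "JPEG", quality=90)

for f in ("guetzli.jpg", "baseline.jpg"):
    print(f, os.path.getsize(f), "bytes")
```

Keep in mind that nominal quality 90 means different things to different encoders, so for a serious comparison you would again want to match visual quality first, as described above.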