Decoding animated WebP consumes much more time than expected.


fur...@gmail.com

Sep 8, 2015, 1:31:11 AM
to WebP Discussion
I decode animated WebP with the Objective-C libwebp framework. However, the decode time is too long. Here is the log:
2015-09-08 08:14:07.543 kCMSocial[3811:731058] imageData length 710936, decode time 3.251322031021118
-------
2015-09-08 08:14:10.397 kCMSocial[3811:731058] imageData length 710936, decode time 2.763020992279053
-------
2015-09-08 08:14:11.168 kCMSocial[3811:731058] imageData length 97114, decode time 0.7185349464416504
-------
2015-09-08 08:14:11.505 kCMSocial[3811:731090] imageData length 55264, decode time 0.5790450572967529
-------
2015-09-08 08:14:11.952 kCMSocial[3811:731058] imageData length 94732, decode time 0.7459831237792969
The imageData length is in bytes. Here is my decode code:

+ (UIImage *)sd_animatedWebPWithData:(NSData *)imageData {
    WebPData data;
    WebPDataInit(&data);
    data.bytes = (const uint8_t *)[imageData bytes];
    data.size = [imageData length];

    NSTimeInterval decodeStartTime = [[NSDate date] timeIntervalSince1970];
    WebPDemuxer* demux = WebPDemux(&data);

    int width = WebPDemuxGetI(demux, WEBP_FF_CANVAS_WIDTH);
    int height = WebPDemuxGetI(demux, WEBP_FF_CANVAS_HEIGHT);
    uint32_t flags = WebPDemuxGetI(demux, WEBP_FF_FORMAT_FLAGS);

    CGFloat duration = 0;
    NSMutableArray *images = [NSMutableArray array];
    int i = 0;
    if (flags & ANIMATION_FLAG) {
        WebPIterator iter;
        if (WebPDemuxGetFrame(demux, 1, &iter)) {
            WebPDecoderConfig config;
            WebPInitDecoderConfig(&config);

            config.input.height = height;
            config.input.width = width;
            config.input.has_alpha = iter.has_alpha;
            config.input.has_animation = 1;
            config.options.no_fancy_upsampling = 1;
            config.options.bypass_filtering = 1;
            config.options.use_threads = 1;
            config.output.colorspace = MODE_RGBA;
            CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            do {
                WebPData frame = iter.fragment;
                CGFloat delayTime = iter.duration / 1000.0;
                if (delayTime < 0.01)
                    delayTime = 0.1;
                VP8StatusCode status = WebPDecode(frame.bytes, frame.size, &config);
                if (status != VP8_STATUS_OK)
                    continue;
                int imageWidth, imageHeight;
                uint8_t *data = WebPDecodeRGBA(frame.bytes, frame.size, &imageWidth, &imageHeight);
                if (data == NULL)
                    continue;
                CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, imageWidth * imageHeight * 4, __freeWebpFrameImageData);
                CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone;
                CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
                CGImageRef imageRef = CGImageCreate(imageWidth, imageHeight, 8, 32, 4 * imageWidth, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);

                UIImage *image = [UIImage imageWithCGImage:imageRef scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
                [images addObject:image];

                CGImageRelease(imageRef);
                CGDataProviderRelease(provider);
                duration += delayTime;
                i += 1;
            } while (WebPDemuxNextFrame(&iter));

            CGColorSpaceRelease(colorSpaceRef);
            WebPDemuxReleaseIterator(&iter);
            WebPFreeDecBuffer(&config.output);
        }
    }
    WebPDemuxDelete(demux);
    NSTimeInterval decodeEndTime = [[NSDate date] timeIntervalSince1970];
    DLog(@"imageData length %@, decode time %@", @([imageData length]), @(decodeEndTime - decodeStartTime));
    if (images.count == 0)
        return NULL;

    UIImage *animatedImage = [UIImage animatedImageWithImages:images duration:duration];
    return animatedImage;
}


Can you help me check the code and see whether there are any improvements to make?

James Zern

Sep 8, 2015, 10:46:32 PM
to WebP Discussion
Hi,
These (the config.input fields) are ignored; they're filled out by WebPDecode(). config.input can also be filled by a call to WebPGetFeatures().
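As an illustration, a minimal sketch (reusing the `frame` WebPData from your loop) of filling config.input from the bitstream instead of setting it by hand:

    // config.input is a WebPBitstreamFeatures; WebPGetFeatures() fills its
    // width/height/has_alpha/etc. directly from the bitstream header.
    VP8StatusCode s = WebPGetFeatures(frame.bytes, frame.size, &config.input);
    if (s != VP8_STATUS_OK) {
        // invalid or truncated data
    }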
 

            config.options.no_fancy_upsampling = 1;
            config.options.bypass_filtering = 1;


Hopefully in the end you won't need to set these; they can affect output quality.
 

            config.options.use_threads = 1;
            config.output.colorspace = MODE_RGBA;
            CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            do {
                WebPData frame = iter.fragment;
                CGFloat delayTime = iter.duration / 1000.0;
                if (delayTime < 0.01)
                    delayTime = 0.1;
                VP8StatusCode status = WebPDecode(frame.bytes, frame.size, &config);
                if (status != VP8_STATUS_OK)
                    continue;
                int imageWidth, imageHeight;
                uint8_t *data = WebPDecodeRGBA(frame.bytes, frame.size, &imageWidth, &imageHeight);


You're doing two decodes here: one with WebPDecode() followed by this one. After WebPDecode(), 'config.output' will contain the dimensions and decoded data.
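For illustration, a minimal sketch of reading the result from config.output after a successful WebPDecode(), which makes the second WebPDecodeRGBA() call unnecessary:

    // config.output is filled by WebPDecode():
    int imageWidth = config.output.width;
    int imageHeight = config.output.height;
    uint8_t *rgba = config.output.u.RGBA.rgba;    // decoded RGBA pixels
    size_t rgbaSize = config.output.u.RGBA.size;  // buffer size in bytes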
 

                if (data == NULL)
                    continue;
                CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, imageWidth * imageHeight * 4, __freeWebpFrameImageData);


It is possible to decode directly to a user-supplied buffer if you can get one here. Have a look at 'is_external_memory' in WebPDecBuffer -- you'll need to set RGBA properly in that case.
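A minimal sketch of that setup (the malloc'd buffer here is just an assumption for illustration; ownership is up to you):

    // Caller-owned buffer; libwebp neither allocates nor frees it when
    // is_external_memory is set.
    uint8_t *buf = (uint8_t *)malloc((size_t)width * height * 4);
    config.output.colorspace = MODE_RGBA;
    config.output.is_external_memory = 1;
    config.output.u.RGBA.rgba = buf;
    config.output.u.RGBA.stride = width * 4;
    config.output.u.RGBA.size = (size_t)width * height * 4;
    // WebPDecode() will now write straight into 'buf'.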
 

                CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone;
                CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
                CGImageRef imageRef = CGImageCreate(imageWidth, imageHeight, 8, 32, 4 * imageWidth, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);

                UIImage *image = [UIImage imageWithCGImage:imageRef scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
                [images addObject:image];

                CGImageRelease(imageRef);
                CGDataProviderRelease(provider);
                duration += delayTime;
                i += 1;
            } while (WebPDemuxNextFrame(&iter));

            CGColorSpaceRelease(colorSpaceRef);
            WebPDemuxReleaseIterator(&iter);
            WebPFreeDecBuffer(&config.output);
        }
    }
    WebPDemuxDelete(demux);
    NSTimeInterval decodeEndTime = [[NSDate date] timeIntervalSince1970];
    DLog(@"imageData length %@, decode time %@", @([imageData length]), @(decodeEndTime - decodeStartTime));


For a start, you should time the decode separately from the iOS-specific calls; this should help isolate the slowness.
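For example (a sketch reusing your DLog macro), timing just the WebPDecode() call per frame:

    NSTimeInterval frameStart = [[NSDate date] timeIntervalSince1970];
    VP8StatusCode status = WebPDecode(frame.bytes, frame.size, &config);
    NSTimeInterval frameEnd = [[NSDate date] timeIntervalSince1970];
    DLog(@"frame %d WebPDecode time %@", i, @(frameEnd - frameStart));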

Yue Liu

Sep 10, 2015, 3:54:16 AM
to WebP Discussion
Hi,

I found that I was decoding twice, so I removed the WebPDecodeRGBA() call and now provide config.output.u.RGBA.rgba as the bitmap for the CGDataProviderRef. It works for me. With Instruments I found that the filtering step consumes a lot of time, so I skip it by setting bypass_filtering = 1. As you mentioned, without the filter the quality of the movie should be lower; however, I found that the quality did not change. Can you explain why the filter step is needed when decoding?

Here is my code:


    WebPData data;
    WebPDataInit(&data);
    data.bytes = (const uint8_t *)[imageData bytes];
    data.size = [imageData length];

    WebPDemuxer* demux = WebPDemux(&data);

    int width = WebPDemuxGetI(demux, WEBP_FF_CANVAS_WIDTH);
    int height = WebPDemuxGetI(demux, WEBP_FF_CANVAS_HEIGHT);
    uint32_t flags = WebPDemuxGetI(demux, WEBP_FF_FORMAT_FLAGS);

    CGFloat duration = 0;
    NSMutableArray *images = [NSMutableArray array];
    int i = 0;
    NSTimeInterval decodeStart = [[NSDate date] timeIntervalSince1970];
    if (flags & ANIMATION_FLAG) {
        WebPIterator iter;
        if (WebPDemuxGetFrame(demux, 1, &iter)) {
            CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            do {
                // Initialize the decoder config; after decoding it holds the image bitmap in
                // output.u.RGBA.rgba. NB: don't free config.output.u.RGBA.rgba manually, or the
                // CGDataProvider will lose the bitmap (it may still need it) and crash. Instead,
                // give the CGDataProvider a callback to release the bitmap.
                WebPDecoderConfig config;
                WebPInitDecoderConfig(&config);
                config.input.height = height;
                config.input.width = width;
                config.input.has_alpha = iter.has_alpha;
                config.options.no_fancy_upsampling = 1;
                config.options.bypass_filtering = 1;
                config.options.use_threads = 1;
                config.output.colorspace = MODE_RGBA;
                WebPData frame = iter.fragment;
                VP8StatusCode status = WebPDecode(frame.bytes, frame.size, &config);
                if (status != VP8_STATUS_OK)
                    continue;

                // Convert the decoded bitmap into an image.
                CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, config.output.u.RGBA.rgba, width * height * 4, __freeWebpFrameImageData);
                CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
                CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
                CGImageRef imageRef = CGImageCreate(width, height, 8, 32, 4 * width, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);
                UIImage *image = [UIImage imageWithCGImage:imageRef scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
                [images addObject:image];
                CGImageRelease(imageRef);
                CGDataProviderRelease(provider);

                // Calculate the duration.
                CGFloat delayTime = iter.duration / 1000.0;
                if (delayTime < 0.01) {
                    delayTime = 0.1;
                }
                duration += delayTime;
                i += 1;
            } while (WebPDemuxNextFrame(&iter));

            CGColorSpaceRelease(colorSpaceRef);
            WebPDemuxReleaseIterator(&iter);
        }
    }
    WebPDemuxDelete(demux);
    NSTimeInterval decodeEnd = [[NSDate date] timeIntervalSince1970];
    DLog(@"imageData length %@, decode duration %@", @([imageData length]), @(decodeEnd - decodeStart));  // Try hard to minimize the duration.
    if (images.count == 0)
        return NULL;

    UIImage *animatedImage = [UIImage animatedImageWithImages:images duration:duration];
    return animatedImage;

James Zern

Sep 11, 2015, 2:17:06 AM
to WebP Discussion


On Thursday, September 10, 2015 at 12:54:16 AM UTC-7, Yue Liu wrote:
Hi,

I found that I was decoding twice, so I removed the WebPDecodeRGBA() call and now provide config.output.u.RGBA.rgba as the bitmap for the CGDataProviderRef. It works for me. With Instruments I found that the filtering step consumes a lot of time, so I skip it by setting bypass_filtering = 1. As you mentioned, without the filter the quality of the movie should be lower; however, I found that the quality did not change. Can you explain why the filter step is needed when decoding?


It depends on the quality of the encode and whether it was encoded lossily; with lossless it will have no effect. The filter is a post-process that smooths edges introduced by the lossy compression.
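A minimal sketch of checking this (again reusing the `frame` WebPData from your loop): WebPGetFeatures() reports which format a frame uses, so you can tell whether bypass_filtering will matter.

    WebPBitstreamFeatures features;
    if (WebPGetFeatures(frame.bytes, frame.size, &features) == VP8_STATUS_OK) {
        // features.format: 0 = undefined/mixed, 1 = lossy, 2 = lossless.
        // bypass_filtering only affects lossy frames.
        BOOL lossless = (features.format == 2);
    }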
 


Here is my code:

    WebPData data;
    WebPDataInit(&data);
    data.bytes = (const uint8_t *)[imageData bytes];
    data.size = [imageData length];

    WebPDemuxer* demux = WebPDemux(&data);

    int width = WebPDemuxGetI(demux, WEBP_FF_CANVAS_WIDTH);
    int height = WebPDemuxGetI(demux, WEBP_FF_CANVAS_HEIGHT);
    uint32_t flags = WebPDemuxGetI(demux, WEBP_FF_FORMAT_FLAGS);

    CGFloat duration = 0;
    NSMutableArray *images = [NSMutableArray array];
    int i = 0;
    NSTimeInterval decodeStart = [[NSDate date] timeIntervalSince1970];
    if (flags & ANIMATION_FLAG) {


Note this same loop can be used for still images too.
 

        WebPIterator iter;
        if (WebPDemuxGetFrame(demux, 1, &iter)) {
            CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            do {
                // Initialize the decoder config; after decoding it holds the image bitmap in
                // output.u.RGBA.rgba. NB: don't free config.output.u.RGBA.rgba manually, or the
                // CGDataProvider will lose the bitmap (it may still need it) and crash. Instead,
                // give the CGDataProvider a callback to release the bitmap.
                WebPDecoderConfig config;
                WebPInitDecoderConfig(&config);
                config.input.height = height;
                config.input.width = width;
                config.input.has_alpha = iter.has_alpha;


You don't need to set the input fields; these are ignored and overwritten by the decode.
 

                config.options.no_fancy_upsampling = 1;
                config.options.bypass_filtering = 1;
                config.options.use_threads = 1;
                config.output.colorspace = MODE_RGBA;
                WebPData frame = iter.fragment;
                VP8StatusCode status = WebPDecode(frame.bytes, frame.size, &config);


Note you're not taking into account frame blending. If you're working with the latest code from git, you might want to have a look at the recently added WebPAnimDecoder interface in demux.h.
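A minimal sketch of that interface (assuming a libwebp checkout recent enough to include it); it handles blending and disposal internally and yields fully composed canvas frames:

    WebPAnimDecoderOptions dec_options;
    WebPAnimDecoderOptionsInit(&dec_options);
    dec_options.color_mode = MODE_RGBA;
    dec_options.use_threads = 1;
    WebPAnimDecoder* dec = WebPAnimDecoderNew(&data, &dec_options);  // 'data' is the WebPData
    WebPAnimInfo anim_info;
    WebPAnimDecoderGetInfo(dec, &anim_info);
    while (WebPAnimDecoderHasMoreFrames(dec)) {
        uint8_t* frame_rgba;
        int timestamp;  // milliseconds since the start of the animation
        if (!WebPAnimDecoderGetNext(dec, &frame_rgba, &timestamp)) break;
        // frame_rgba is a fully blended canvas_width x canvas_height RGBA canvas,
        // valid only until the next GetNext()/Delete() call; copy it if you keep it.
    }
    WebPAnimDecoderDelete(dec);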
It would be helpful if you timed the individual decodes to ensure there isn't something from the other calls contributing a large amount of time.

On a related note, how did you build libwebp? There is NEON assembly code available which will improve decoding on iOS devices, but it needs to be enabled at compile time; iosbuild.sh is provided to simplify the creation of a framework.
 

    return animatedImage;


You don't necessarily need to decode all images before rendering.