Block high compression/large dimension images that crash the system on resize

Showing 1-30 of 30 messages
Block high compression/large dimension images, that crash the system on resize Martimiz 10/22/12 4:42 AM
Hi all,

I took the liberty of starting a new thread for this topic, because it got very mixed up in the original thread (my fault initially). I hope that's all right with you.

Original thread: UploadField documentation

To summarize:
Files with very large dimensions but very high compression (like CAD files) will often slip past the max file size check, and then crash PHP once GD loads them into memory for resizing.

1. Possible solution: use getimagesize() to check the file dimensions.
getimagesize() seems to extract width and height from a file without having to load the whole image into memory.

Simon:
> <?php
> var_dump($start = memory_get_usage());
> $size = getimagesize('/Users/simon/Downloads/20121016_190253.jpg');
> var_dump($size, $end = memory_get_usage());
> var_dump($end - $start);

> This showed a difference of about 2 KB between $end and $start. The image is around 3.2 MB.


Question: does anyone know if 24-bit images create larger memory maps than 8-bit images? Or are memory maps all the same anyway?
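The getimagesize()-based check, combined with a max-pixels limit (Sam suggests 10,000,000 as a default later in the thread), could look roughly like this. This is only a sketch; the function names are hypothetical, not SilverStripe API:

```php
<?php
// Sketch of a cheap 'max pixels' gate (hypothetical names, not SilverStripe
// API). getimagesize() only parses the image header, so it stays cheap even
// when the decoded image would be enormous.

function exceeds_max_pixels($width, $height, $maxPixels = 10000000)
{
    return ($width * $height) > $maxPixels;
}

function upload_is_acceptable($path, $maxPixels = 10000000)
{
    $info = @getimagesize($path); // [0] => width, [1] => height, ...
    if ($info === false) {
        return true; // not an image GD can identify; leave it to other validators
    }
    return !exceeds_max_pixels($info[0], $info[1], $maxPixels);
}
```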

Issues:

1. Decide on a max value:
a. do we set an (adjustable) limit on the number of pixels (h*w)?
b. do we try to use existing memory settings, as Jacob is proposing?
   (see:
c. both?

My first proposal: https://github.com/Martimiz/sapphire/commit/b6c29366f00489ff349df32bb72358a4e467b605


2. Decide on where/what to check 'at the gate':
a. On upload - before ever creating the file
b. On File Synchronization - ignore large files, remove large files from DB when a user has overwritten the file from FTP?
c. In GD, before any resizing is done, just to catch any file that has slipped through?

As Sam suggested:
> If that's the case, I would suggest adding the ability to set maxWidth, maxHeight
> or maxPixels (just W x H) for either site-wide, or for specific upload fields.

> As a default, perhaps we set max pixels to 10,000,000?


3. Extra checks for images slipping through.
As jacob suggests:
a. check the dimensions before resizing, just in case an image slipped through
b. create a lock-file for any image before resizing, and then remove it again on completion, to prevent the system from ever crashing on the same image more than once.

See Jacob's proposal here: https://github.com/jakr/sapphire/tree/trac5284
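The lock-file idea in 3b can be sketched as follows (hypothetical names, not Jacob's actual implementation): a marker file is written before the risky resize and removed on success; if PHP dies mid-resize, the marker survives, so the next request refuses to retry instead of crashing again.

```php
<?php
// Sketch of the lock-file pattern (hypothetical names). A fatal error during
// $resize() leaves the lock file on disk, so subsequent requests skip the
// image rather than crashing on it a second time. The dot prefix hides the
// marker from the CMS file listing.

function resize_with_lockfile($path, callable $resize)
{
    $lock = dirname($path) . '/.' . basename($path) . '.lock';
    if (file_exists($lock)) {
        return false; // this image crashed us before; refuse to retry
    }
    file_put_contents($lock, (string) time());
    $result = $resize($path); // a fatal error here leaves the lock in place
    unlink($lock);
    return $result;
}
```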

I hope I've done right by everyone and this sort of sums it up. Now I'm wondering: how far should we go with this? And should protecting users against 'illegal' FTP uploads at all costs be part of it?

Thanks, Martine

Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize banal 10/22/12 5:52 AM
Hi Martine

Thanks a lot for your work on this.

I'm pretty sure GD has two different modes when loading images into memory:
Palette images will have 8 bits per pixel and true-color images 32 bits per pixel. I'm not sure how monochrome images are handled; I think it depends heavily on the parser at hand.

To be entirely sure how memory will be allocated by the GD library, it would probably be best to look at its source code directly: https://bitbucket.org/pierrejoye/gd-libgd/src
But I think it's safe to assume images will load as 32-bit in most cases, which means the memory consumption will be at least width * height * 4 bytes.
That's for every image buffer that will be created, and since you usually have at least a source and a target buffer, these have to be added up.

In the comments of the PHP documentation, there's a code example of somebody using a dynamic approach, where he checks the available memory before processing the image: http://www.php.net/manual/en/function.imagecreatetruecolor.php#99623
This approach seems flawed though, as it doesn't factor in the size of the target buffer. If the target buffer is large (e.g. resizing a 4000x3000 pixel source to a 3000x2000 target), the multiplier constant of 1.7 won't be enough.
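Roman's width * height * 4 reasoning, with source and target buffers added up, can be sketched as a simple estimator. The fudge factor for per-image overhead is an assumption, not a measured constant:

```php
<?php
// Rough peak-memory estimate for a GD resize, following the reasoning above:
// ~4 bytes per pixel for true-colour buffers, with source and target buffers
// both counted. The overhead fudge factor is an assumption.

function estimated_resize_bytes($srcW, $srcH, $dstW, $dstH, $fudge = 1.1)
{
    return (int) ((($srcW * $srcH) + ($dstW * $dstH)) * 4 * $fudge);
}
```

With the 4000x3000 to 3000x2000 example above and no fudge factor, this already gives 72 million bytes, past a 48MB memory_limit on its own.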

Personally I think it would be best to allow configuration of a max. pixel sum for uploaded images. The default setting should be something sensible that works well with the SS3 system requirements (48mb memory). Also a customizable handler-function for such images would be nice... so that it would be possible to write your own routine that deals with images that are too large (using "ImageMagick" as fallback comes to mind).

The most important use-case to cover is when a user uploads an image that will consume too much memory. Ideally, the user would get an error-message, prompting him to upload a low-res version of the image. This won't cover the asset-sync cases or the unlikely situation where a huge image will be generated by the system itself though. Since this check is really simple and computationally inexpensive compared to the image-manipulation itself, I suggest that a memory-check should be done whenever a GD object is about to be created. Not sure how error-handling should work in this case though. Throwing an exception won't do any good, as it could stall the system as well. Maybe dynamically creating a "Not enough memory" image and using this instead would be a possible solution...

If there is a customizable handler function for these cases, the default implementation could be a dynamic "Not enough memory" image, but developers could implement their own handler and use ImageMagick instead or come up with other forms of dealing with these images.

Just my $0.02

Best - Roman
--
You received this message because you are subscribed to the Google Groups "SilverStripe Core Development" group.
To view this discussion on the web visit https://groups.google.com/d/msg/silverstripe-dev/-/fUjsQQnfEPAJ.
To post to this group, send email to silverst...@googlegroups.com.
To unsubscribe from this group, send email to silverstripe-d...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/silverstripe-dev?hl=en.

Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Jakob Kristoferitsch 10/22/12 8:50 AM
Martimiz wrote:
> 2. Decide on where/what to check 'at the gate':
> a. On upload - before ever creating the file
> b. On File Synchronization - ignore large files, remove large files from
> DB when a user has overwritten the file from FTP?
> c. In GD, before any resizing is done, just to catch any file that has
> slipped through?

a) We need to receive the file on the server first, but we could skip
creating a DB entry if it is too large.
b) I don't think that is necessary - see below.
c) If we already have the code to check the File in GD, we might as well
reuse it before resizing.

> [...]should protecting users
> against 'illegal' FTP uploads at all cost be part of it?

I don't think we need to protect against FTP uploads. If you upload a
file via FTP you are usually also able to delete or replace it. The
problem with the current state is that a user can upload a file that
crashes SilverStripe. If he does not have FTP access, there is no way to
delete it again.

Yours
Jakob

P.S.: Thanks for the summary.
Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Martimiz 10/23/12 2:33 AM
Thanks very much, Roman and Jacob, for your input :-)

> [...]best to allow configuration of a max. pixel sum for uploaded images.
> The default setting should be something sensible that works well with
> the SS3 system requirements (48mb memory).

I would agree with that - especially for an initial setup.

> [...]dynamically creating a "Not enough memory" image and using this
> instead would be a possible solution...

I very much like the idea, as we'd no longer need other checks/error messages. It could also protect against other large files slipping in via FTP, give the user fair warning at all times, and allow uploading images for download purposes only (archives are always better, but not every user will realize that). And it would all be kept within the GD class, like Jacob is working towards.

> [...] developers could implement their own handler and use ImageMagick
> instead or come up with other forms of dealing with these images

Question: would we use a validation function in GD, extended to accommodate a decorator, or some sort of Validator class that can be subclassed?

The initial setup could very well use an existing image in assets, and a simple check against max file size as well as dimensions. Custom handlers could use other images or GD to generate an image with some text (translatable) and custom checks could be implemented - like the one Jacob suggests.

What do you think?

Martine
Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize dospuntocero 2/15/13 1:51 PM
i think the problem is not the pixel dimension but pixel density. if you have a not-that-large 300dpi image it will kill any server.

Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Jonathon Menz 6/17/13 9:06 AM
Has there been any more work done on this? I was about to start a new thread on this but then found this one. This is a constant cause of issues for me, and trying to explain the difference between resolution and compression to the average CMS user is not easy :)

Client: "But I made it smaller like you said - it's only 100kb!"
Me: "Yeah but it's still 6000 pixels wide. You didn't make it smaller the right way"
Client: "..."

If the CMS rejected images that were too many megapixels it would save me at least a couple of hours each month. The biggest problem for me is that when the CMS crashes in this way I don't get any information about what caused the crash so I have to manually track down the offending images.

I don't have the know-how to work on a fix for this but I'll be very grateful if anyone is able to solve this issue. Thanks guys for your work so far, hope to see some more discussion on this!
Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Martimiz 6/17/13 9:32 AM
Oops, I sort of forgot about this... Has it not been tackled for 3.1? I'm really busy at the moment, so unless anyone wants to pick this up, it might take some time :(

Martine


Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Jonathon Menz 6/17/13 11:48 AM
I'm running SS 3.1.x-dev and am encountering this issue so I assume it hasn't been tackled yet. Hopefully someone out there may have the skills and time to take a look. Otherwise I will wait and hope :)

In the meantime, Mac users may find the attached Automator workflow useful for reducing the size of images before uploading.
Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Florian Thoma 6/18/13 3:53 PM
Jakob Kristoferitsch wrote:
I don't think we need to protect against FTP uploads. If you upload a
file via FTP you are usually also able to delete or replace it. The
problem with the current state is that a user can upload a file that
crashes SilverStripe. If he does not have FTP access, there is no way to
delete it again.


I disagree. I definitely think that FTP uploads should be protected as well. The result of SilverStripe crashing is a white screen, so it's impossible for a user to know that the image he uploaded is what causes the error.
Cheers, Flo
Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Ingo Schommer 6/19/13 1:25 AM
I like Jakob's solution since it provides more sensible default protection
than the max_resolution approach which needs explicit configuration to allow for larger images.
Jakob's solution assumes that _all_ PHP memory is available for GD, which isn't the case.
Maybe we should subtract a certain "bootstrap" amount? That'll vary greatly based
on your own logic and installed modules, but it's better than nothing, right?
Also, the lock file should have a dot prefix to make it hidden from the CMS UI.
I would suggest that we continue discussion on a pull request.


Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Simon 6/19/13 1:27 AM
On 19/06/2013, at 8:25 PM, Ingo Schommer <ingo.s...@gmail.com> wrote:

> Maybe we should subtract a certain "bootstrap" amount? That'll vary greatly based
> on your own logic and installed modules, but its better than nothing, right?

memory_get_usage().
---
Simon Welsh
Admin of http://simon.geek.nz/
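Combining Simon's memory_get_usage() suggestion with the memory_limit ini setting gives the headroom actually left for GD at the time of the check. A sketch with hypothetical helper names (memory_limit needs parsing, since it may use shorthand like "48M" or be "-1" for unlimited):

```php
<?php
// Sketch: memory currently available to GD, per Simon's suggestion.
// Helper names are hypothetical, not SilverStripe API.

function memory_limit_bytes()
{
    $limit = trim(ini_get('memory_limit'));
    if ($limit === '' || (int) $limit === -1) {
        return PHP_INT_MAX; // no limit configured
    }
    $value = (int) $limit;
    switch (strtoupper(substr($limit, -1))) {
        case 'G': $value *= 1024; // fall through
        case 'M': $value *= 1024; // fall through
        case 'K': $value *= 1024;
    }
    return $value;
}

function available_memory_bytes()
{
    return memory_limit_bytes() - memory_get_usage();
}
```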

Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Ingo Schommer 6/19/13 1:41 AM
True. I thought that's inaccurate because the file is validated at upload,
which has a different memory usage from when it's modified (e.g. thumbnail generation).
We could check it before modification as well, which should happen rarely enough
to take the performance hit of opening and writing another file.
But you're right, it's a more accurate approach than simply hardcoding *some* value :)
Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Gordon Anderson 6/19/13 9:13 AM
hi

As an aside, but possibly relevant to this post: what about a configurable option to remove EXIF data as well? The reason I mention this is that yesterday I came across an example for a client where users had uploaded images that included geographical coordinates, probably without realising it.

Regards

Gordon
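A check along the lines Gordon describes could at least flag uploads carrying GPS coordinates. This is only a sketch with a hypothetical name; it requires PHP's exif extension, and it only detects the data, it does not strip it from the file:

```php
<?php
// Sketch (hypothetical name): flag uploads whose EXIF block carries GPS
// coordinates. Detection only - stripping the data is a separate problem,
// as discussed below.

function has_gps_exif($path)
{
    if (!function_exists('exif_read_data')) {
        return false; // exif extension not available
    }
    $exif = @exif_read_data($path);
    return is_array($exif)
        && (isset($exif['GPSLatitude']) || isset($exif['GPSLongitude']));
}
```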
Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Loz Calver 6/19/13 3:06 PM
Unless I'm mistaken, GD doesn't support handling EXIF data - so any generated images should have the data stripped. The only exception would be if you called, for example, $SetWidth(500) on an image that was already 500px wide, so it wouldn't be processed.

I can't really see any way around it though. I don't think you can actually strip the EXIF data with PHP, just resave the image without it. You couldn't really force that, though, as you'd have to resave every image as it's uploaded, as well as when they're 'synced' from asset admin. That aside, there's the quality loss from resaving the image to contend with as well.

Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Daniel Hensby 6/20/13 2:12 AM
Why would you want the framework to deliberately strip this data out? Some applications will want it... If a dev wants to make the decision to remove it, then they can add that functionality in




Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Gordon Anderson 6/20/13 2:18 AM
hi Daniel

I did say configurable :) And of course the default behaviour should be the same as it currently is. There is a program called exiftool that can remove EXIF data without resaving the image, but this would add a non-PHP dependency to core functionality, something that is probably not an option.

Cheers

Gordon

Re: Block high compression/large dimension images, that crash the system on resize Loz Calver 9/30/13 3:30 AM
*bump*

I don't have a solution, I just don't want this to be forgotten about :)
Re: Block high compression/large dimension images, that crash the system on resize Will Morgan 9/30/13 7:57 AM
Has anyone explored using the JavaScript file API for things like this? Some preparatory work could be done on the client side before the server handles the rest.
Re: Block high compression/large dimension images, that crash the system on resize Jonathon Menz 9/30/13 9:15 AM
Javascript would be great for the earlier idea of rejecting files based on some specific resolution, but as Ingo pointed out this approach could reject files that could be manipulated okay (or allow files which cause a crash if the resolution isn't appropriate). Ingo also referenced this pull request from Jacob which I missed but looks great to me! The approach appears to prevent manipulation of an image if it has already crashed the server once, and that is the killer feature as far as I'm concerned.

The big issue here is that these problem images can stop people from accessing the file sections of the CMS altogether (as it will crash whenever a thumbnail for the problem image is needed), and it's a huge headache to find the problem image(s) and delete them. In its current state the patch may allow the occasional image to be uploaded that will cause a problem, but it will only cause a problem once. After that the image couldn't be manipulated, but the CMS wouldn't crash either, so you could just delete the problem file from there, or locate it on the server by searching for the lock files. Validating the image on upload will be nice, but the most important thing, I think, is to make sure that if an image causes a crash, it doesn't cause a crash twice.

So I'm a +1 for Jacob's solution with Ingo's suggestion of adding a dot prefix. This solves the crux of the problem, accurate validation before attempting to manipulate the image will just be icing on the cake.

If we could get this basic fix rolled in to core, we could add some polish later, such as a warning message in the CMS if a problem image is found, prompting the user to delete the image and re-upload a smaller version.
Re: Block high compression/large dimension images, that crash the system on resize Loz Calver 9/30/13 9:22 AM
I'm 100% in favour of adding in that patch, even if it's only as a temporary solution prior to something more robust post-3.1.0 - giving clients the ability to unwittingly crash entire sections of their website is a significant issue in my book.
Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Ingo Schommer 10/2/13 5:08 AM
I've responded to @jakr on his commit: https://github.com/jakr/sapphire/commit/344a4648d3c93250a187e9474012d489f8ec2885
There's some stuff to fix up, but I like the general approach. Does anybody have a link to that pull request? I can't find it.


Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Jonathon Menz 10/4/13 7:39 AM
I don't think a pull request was ever made from what I can tell. Can I suggest that we start with committing just the lock file functionality (prevents crash after already crashed once), and then move on to memory testing (prevents crash altogether)? It seems there is debate about the best way to predict a crash, but everyone seems to agree that the lock file idea is solid. Just having the lock file functionality fixes 95% of the problem I think.

I would be happy to revise the patch to remove the memory tests for now, include a dot prefix, test it a bit and submit a pull request - but that seems like a git faux pas as it's not my code :) so if you're there Jakob and happy to submit a pull request I'll be very happy to test it as much as I'm able.

Cheers
Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Ingo Schommer 10/10/13 4:11 AM
Hello Jonathon, sorry for the late reply, yes your approach sounds good.
Looking forward to the pull request for the lock file.
Re: [silverstripe-dev] Block high compression/large dimension images, that crash the system on resize Jakob Kristoferitsch 10/20/13 5:08 AM
Hello Ingo, Hello Jonathon, Hello Martine,
there is now finally a pull request for this:
<https://github.com/silverstripe/silverstripe-framework/pull/2569>

I have run the new test case locally, but my installation is currently
in a very unstable state. So I am thankful if someone else could also
test it.

There is still some room for improvement, but I think it tackles the
main issue.

Future improvements:
   * Issue an exception if we detect the problem using the memory
estimation (i.e. if we detect that the image will be too large), so that
the user gets direct feedback.
   * Maybe refactor the logic that determines the lockfile name, so that
it is in a single location (add a method to Image_Backend?).
   * Make the CMS aware of the lockfile and display a special thumbnail
in the backend if a lockfile exists. The user can see why his image does
not work, delete the old one and reupload a smaller version.

Cheers
Jakob


Re: Block high compression/large dimension images, that crash the system on resize Loz Calver 10/21/13 2:22 AM
My $0.02 on your pull request (didn't want to comment on Github - I'll leave the discussion there to the SS team):

The `GD` class is, in my opinion, the right place for your `get_lockfile_name()` method - we're using it to address memory issues with GD, not with the `Image` or `Image_Backend` classes. It may not be applicable if someone is using a different image library, so it doesn't make sense to me to clutter the other classes with it. I'd use `GD::get_lockfile_name()` in all three situations you described in your pull request.

I'm not sure how we'd show a "special thumbnail" in the CMS without coupling UploadField to GD - again, if someone uses a different image library the lockfile and thumbnail might not apply, so we'd need to factor that in. I guess we could use the same method we use for 'fileexists' and check: `$backend = Image::get_backend(); if (method_exists($backend, 'get_lockfile_name')) { ... }` etc.
Re: Block high compression/large dimension images, that crash the system on resize Daniel Hensby 10/21/13 2:26 AM
I agree. Imagine if I create an image backend that sends resize requests to another service or process - the computation of the memory footprint shouldn't take place in that instance.

However, the special thumbnail can still work by creating a consistent Image_Backend API - again, if we send the image to another process and it doesn't respond, an "image unavailable" or "generating" thumbnail should be available.


Re: Block high compression/large dimension images, that crash the system on resize Loz Calver 10/21/13 2:54 AM
That's true, it could be a useful API. How about an `image_unavailable()` method added to the `Image_Backend` interface? For this particular example, rather than doing `if (file_exists(self::get_lockfile_name($filename))) ...` in `GD->__construct()`, we'd do `if (self::image_unavailable($filename)) ...`. If a dev with a different library doesn't need this behaviour, they can just `return false;` anyway.

The onBeforeDelete() seems wrong (https://github.com/silverstripe/silverstripe-framework/pull/2569/files#diff-bf03e641675b755b14906a961cf8485fL644) - it's GD-specific behaviour, so should we try to make this consistent with the Image_Backend API as well? I was thinking `if (self::get_backend()->get_lockfile_name()) { ... }`, but defining get_lockfile_name() on the Image_Backend interface also seems very specific.
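The backend-agnostic availability check proposed above might look like this. The interface and class names here are hypothetical sketches, not actual SilverStripe API: GD would answer via its lock file, while a backend that never needs the behaviour can simply return false.

```php
<?php
// Sketch of a backend-agnostic availability check (hypothetical names).
// GD consults its lock file; other backends can return false or query
// their own processing queue.

interface ImageBackendAvailability
{
    /** @return bool true if the backend cannot process this file right now */
    public function imageUnavailable($filename);
}

class GdBackendSketch implements ImageBackendAvailability
{
    public function imageUnavailable($filename)
    {
        $lock = dirname($filename) . '/.' . basename($filename) . '.lock';
        return file_exists($lock);
    }
}
```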
Re: [silverstripe-dev] Re: Block high compression/large dimension images, that crash the system on resize Daniel Hensby 10/21/13 3:13 AM

I think we need to separate the idea of a lock file from the more general question of whether a file is available or can be resized.

Perhaps with a local image library it can be implemented with lock files, but that doesn't necessarily need to be the name of the function.

If we use a service the image may already be in a process queue for resizing so we'd want a function that would prevent another resize request, but it wouldn't be by querying a lock file.

Re: [silverstripe-dev] Re: Block high compression/large dimension images, that crash the system on resize Jakob Kristoferitsch 10/24/13 2:14 PM
Hi,
I agree that Image should not care about lock files, since they are GD
specific. My current idea is to add onBeforeDelete to Image_Backend and
to call that method in Image->onBeforeDelete. This way, GD can do the
cleanup itself. I'll try to code that this weekend.

Knowing if an image is available and handling that in the backend is a
next step. I'll probably leave that to someone else.

Yours
Jakob

Re: Block high compression/large dimension images, that crash the system on resize Loz Calver 1/2/14 9:01 AM