Failing to load images over 100000 bytes


henri laurent

Feb 17, 2020, 9:06:26 AM
to Warp 10 users
Hi,

I am trying to save images in a standalone Warp10-2.4.0 database.
I have no problem saving images with a base64 encoded size of less than 100000 bytes (see the attached photo_loaded_successfully.txt file).
I cannot save larger images: I get a "Value too large for GTS" error message while uploading the image with curl (see the attached photo_failing_to_load.txt file).
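
For reference, the update call looks roughly like this (the write token, class name, labels and timestamp are placeholders, and the base64 payload goes in single quotes as a STRING value, URL-encoded where needed):

curl -H 'X-Warp10-Token: WRITE_TOKEN' \
  --data-binary "1581930000000000// camera.snapshot{room=lab} '<base64 image data>'" \
  'http://127.0.0.1:8080/api/v0/update'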

I saw in the documentation (https://www.warp10.io/content/03_Documentation/03_Interacting_with_Warp_10/03_Ingesting_data/02_GTS_input_format) that the maximum size is linked to the max.encoder.size parameter.
I changed standalone.max.encoder.size in the /warp10-2.4.0/etc/conf.d/00-warp.conf file from the default 100000 to
standalone.max.encoder.size = 400000
and restarted Warp10.

I still can't save pictures.

Can you help?

Thanks,

Henri

photo_failing_to_load.txt
photo_loaded_successfully.txt

Fabien Tencé

Feb 17, 2020, 10:31:13 AM
to Warp 10 users
Hi Henri,

I think some information is missing from the doc: you also need to change standalone.value.maxsize for the standalone version, or ingress.value.maxsize for the distributed version.
Keep in mind that storing images in a time series database is far from being optimal.
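
For a standalone instance the two settings would then look something like this in 00-warp.conf (400000 being the example value you already used):

standalone.max.encoder.size = 400000
standalone.value.maxsize = 400000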

Regards,
Fabien

henri laurent

Feb 17, 2020, 11:46:45 AM
to Warp 10 users
Thanks Fabien,

I added 
standalone.value.maxsize = 400000
to the 00-warp.conf file and it now works.

You mention that it is "far from being optimal" to save images in Warp 10.
Is that a real issue given that the images will be infrequent (once an hour at most, for time series recorded at 1 Hz) and not that big (limited to 200 KB)?

best,

Henri

Mathias Herberts

Feb 18, 2020, 12:35:13 PM
to Warp 10 users
You can store as many blobs as you want in Warp 10 (up to the configured max size), but access to that data will not be as efficient as if you were storing the same blobs in a more traditional K/V store. The reason is that when reading the blobs back, you create a GTS with STRING values which then need a further conversion to byte arrays.
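
As a rough WarpScript sketch of what reading one image back involves (the read token, class name and labels are placeholders):

[ 'READ_TOKEN' 'camera.snapshot' { 'room' 'lab' } NOW -1 ] FETCH // list of matching GTS
0 GET VALUES 0 GET // the stored value is a base64 STRING
B64-> // the extra step: decode it back to a byte array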

In the standalone version, LevelDB files are kept at 2 MB; storing anything larger will create a larger file which will only contain a single value.

In the distributed version, the maximum size you can store is limited by the maximum size of Kafka messages since all stored data will transit via Kafka.

So while not optimal, it is still feasible.