ZODB eating CPU but RAM is free


Khurram Shahzad

Jul 19, 2017, 8:20:02 AM
to zodb
Hi All,

I am encountering very serious performance issues: I observe that CPU usage is mostly at 100% while RAM utilization stays below 15%. Is there any way I can configure things to use more RAM?

The problem mostly arises when we query portal_catalog, which has about 30,000 objects.

Following is my Plone environment:

    Plone 4.3.10rc1 (4313)
    CMF 2.2.9
    Zope 2.13.24
    Python 2.7.10
    PIL 3.2.0 (Pillow)

And this is the buildout

[zeoserver]
<= zeoserver_base
recipe = plone.recipe.zeoserver
zeo-address = 127.0.0.1:8100
zserver-threads = 8
zodb-cache-size = 200000

[client1]
<= client_base
recipe = plone.recipe.zope2instance
zeo-address = ${zeoserver:zeo-address}
http-address = 8080
zeo-client-cache-size = 128MB

Regards,
Khurram.

Hanno Schlichting

Jul 19, 2017, 8:42:34 AM
to zo...@googlegroups.com
Try increasing the zodb-cache-size for the zope2instance (client1). It defaults to 30000; try increasing it to 100000 and monitor the RAM usage.

Increasing this option means the client side (the Zope instance) caches many more database objects in its own RAM. This avoids database roundtrips and the latency they incur. The catalog especially benefits from having all of its many tiny objects in the client-side cache.
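Applied to the buildout above, that would look something like this (a sketch; 100000 is just a starting point to tune while watching RAM):

[client1]
<= client_base
recipe = plone.recipe.zope2instance
zeo-address = ${zeoserver:zeo-address}
http-address = 8080
zeo-client-cache-size = 128MB
zodb-cache-size = 100000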

Hope this helps,
Hanno

Joni Orponen

Jul 19, 2017, 9:08:58 AM
to zo...@googlegroups.com
On Wed, Jul 19, 2017 at 2:42 PM, Hanno Schlichting <ha...@hannosch.eu> wrote:
Try increasing the zodb-cache-size for the zope2instance (client1). It defaults to 30000; try increasing it to 100000 and monitor the RAM usage.


Is the cache_size_bytes option still considered to be experimental and/or broken?

Hanno Schlichting

Jul 19, 2017, 9:28:50 AM
to zo...@googlegroups.com
On Wed, Jul 19, 2017, at 14:53, Joni Orponen wrote:
On Wed, Jul 19, 2017 at 2:42 PM, Hanno Schlichting <ha...@hannosch.eu> wrote:

Try increasing the zodb-cache-size for the zope2instance (client1). It defaults to 30000; try increasing it to 100000 and monitor the RAM usage.

Is the cache_size_bytes option still considered to be experimental and/or broken?

I'd still consider it to be experimental. I did my last tests with it 8 or so years ago and it didn't work reliably back then. The software stack you are using is from around the same time (at least the major Zope and ZODB versions), so I'd be surprised if things had changed in the bug-fix releases of those versions.

Hanno

Héctor Velarde

Jul 19, 2017, 9:34:47 AM
to zo...@googlegroups.com
this week we had a similar thread on the Plone community forum:

https://community.plone.org/t/plone-recipe-zope2instance-memory-usage/4528?u=hvelarde

the problem there was high memory usage and low CPU, but the rationale applies to your case as well.

anyway, as Hanno suggested a long time ago, you can increase the size of your ZODB cache to a number around your catalog size plus 5,000 objects:

https://mail.zope.org/pipermail/zodb-dev/2010-March/013200.html

Jim Fulton

Jul 19, 2017, 12:48:15 PM
to Khurram Shahzad, zodb
On Wed, Jul 19, 2017 at 12:14 AM, Khurram Shahzad <min2...@gmail.com> wrote:
Hi All,

I am encountering very serious performance issues: I observe that CPU usage is mostly at 100% while RAM utilization stays below 15%. Is there any way I can configure things to use more RAM?

I assume this is in your application, not, say, in a ZEO server.

What makes you think ZODB is consuming lots of CPU?

 

The problem mostly arises when we query portal_catalog, which has about 30,000 objects.

Following is my Plone environment:

    Plone 4.3.10rc1 (4313)
    CMF 2.2.9
    Zope 2.13.24
    Python 2.7.10
    PIL 3.2.0 (Pillow)

And this is the buildout

[zeoserver]
<= zeoserver_base
recipe = plone.recipe.zeoserver
zeo-address = 127.0.0.1:8100
zserver-threads = 8
zodb-cache-size = 200000

[client1]
<= client_base
recipe = plone.recipe.zope2instance
zeo-address = ${zeoserver:zeo-address}
http-address = 8080
zeo-client-cache-size = 128MB

I've never seen ZODB consume lots of CPU. Generally, a well-tuned ZODB application is CPU-bound because the application isn't being slowed down waiting for data. If you're CPU-bound, that's a good thing wrt ZODB.

Unless you're swapping or waiting for ZODB (low CPU), I wouldn't expect using more RAM to help.

Jim

Jim Fulton

Jul 19, 2017, 12:49:04 PM
to Joni Orponen, zo...@googlegroups.com
On Wed, Jul 19, 2017 at 6:53 AM, Joni Orponen <j.or...@4teamwork.ch> wrote:
On Wed, Jul 19, 2017 at 2:42 PM, Hanno Schlichting <ha...@hannosch.eu> wrote:
Try increasing the zodb-cache-size for the zope2instance (client1). It defaults to 30000; try increasing it to 100000 and monitor the RAM usage.


Is the cache_size_bytes option still considered to be experimental and/or broken?

No. I don't consider it broken.

Jim


Jim Fulton

Jul 19, 2017, 12:54:05 PM
to Hanno Schlichting, zo...@googlegroups.com
On Wed, Jul 19, 2017 at 7:28 AM, Hanno Schlichting <ha...@hannosch.eu> wrote:
On Wed, Jul 19, 2017, at 14:53, Joni Orponen wrote:
On Wed, Jul 19, 2017 at 2:42 PM, Hanno Schlichting <ha...@hannosch.eu> wrote:

Try increasing the zodb-cache-size for the zope2instance (client1). It defaults to 30000; try increasing it to 100000 and monitor the RAM usage.

Is the cache_size_bytes option still considered to be experimental and/or broken?

I'd still consider it to be experimental. I did my last tests with it 8 or so years ago and it didn't work reliably back then.

How so?

One thing to understand is that ZODB doesn't limit memory usage. It ghostifies objects to meet memory settings at certain times, like transaction boundaries, and when asked to explicitly. So a transaction that loads a lot of objects can grow RAM usage without bound. This is why applications that load lots of objects should explicitly reduce cache size by calling _p_jar.cacheGC() occasionally.
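A minimal sketch of that pattern (scan, root['items'], and the batch size are invented for illustration):

import transaction

def scan(root):
    # assumption: root['items'] maps keys to many persistent objects
    count = 0
    for i, obj in enumerate(root['items'].values()):
        count += 1  # stand-in for real per-object work
        if i % 10000 == 0:
            # explicitly trim this connection's in-memory cache back
            # toward its target size; ZODB only does this on its own
            # at times like transaction boundaries
            obj._p_jar.cacheGC()
    transaction.commit()  # a boundary also triggers normal cache GC
    return count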

Jim

Jim Fulton

Jul 19, 2017, 12:55:52 PM
to Héctor Velarde, zo...@googlegroups.com
On Wed, Jul 19, 2017 at 7:34 AM, Héctor Velarde <hector....@gmail.com> wrote:
this week we had a similar thread on the Plone community forum:

https://community.plone.org/t/plone-recipe-zope2instance-memory-usage/4528?u=hvelarde

the problem there was high memory usage and low CPU, but the rationale applies to your case as well.

I don't see how. Generally I consider high CPU usage in a ZODB app to be a good sign because the app isn't waiting for ZODB. I've never seen ZODB itself use lots of CPU.

Jim

Hanno Schlichting

Jul 19, 2017, 1:34:05 PM
to Jim Fulton, zo...@googlegroups.com
On Wed, Jul 19, 2017, at 18:53, Jim Fulton wrote:
On Wed, Jul 19, 2017 at 7:28 AM, Hanno Schlichting <ha...@hannosch.eu> wrote:
I'd still consider it to be experimental. I did my last tests with it 8 or so years ago and it didn't work reliably back then.

How so?

This was 8 years ago, so I'm spotty on the details.

IIRC I tried this on a couple of different Plone sites back then. After some fairly short time of normal usage (maybe a couple of hours) the ZODB connection cache managed to report a negative size. After that the site became unusable. One way to get there earlier was to do frequent manual connection cache GC or minimize.

As a second issue, the target cache byte value was almost uncorrelated with the actual memory used on the sites.

Back then I concluded that the way the size estimation works wasn't really good enough yet. I think ZODB's Connection uses the byte length of the pickle of a persistent object as its estimated size. But at least back then, in a Plone site, that wasn't a good way to predict the memory usage of unghosted objects.

So you could either put in a magic count like 30000 and figure out through monitoring what kind of actual memory usage that would result in for your application, or you could put in 100 MB and use monitoring to figure out the unknown multiplier that gave the actual memory usage.

Since in either case you had an unknown multiplier to figure out, there wasn't really any advantage to using the less-proven byte-based cache size rather than the traditional count-based one.

And of course I was bad and didn't try to distill this further or file bug reports about it back then, sorry!

Hanno

Héctor Velarde

Jul 19, 2017, 1:49:47 PM
to zo...@googlegroups.com
I think Khurram erroneously blamed ZODB for the CPU consumption; as you can see in the buildout configuration, the ZEO server and the ZEO clients run on the same machine.

the CPU consumption could be high if the ZEO clients have to continuously access the ZEO server to get new objects, and the performance of the application could be very bad as well.

best regards

Héctor Velarde

Jul 19, 2017, 5:50:34 PM
to zo...@googlegroups.com
On 07/19/2017 03:14 AM, Khurram Shahzad wrote:
> [zeoserver]
> <= zeoserver_base
> recipe = plone.recipe.zeoserver
> zeo-address = 127.0.0.1:8100
> zserver-threads = 8
> zodb-cache-size = 200000

HV> BTW, those last two directives (zserver-threads and zodb-cache-size) must be included in your client parts, not in your server part.
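In other words, something like this (a sketch based on the original buildout, with the values kept as posted):

[zeoserver]
<= zeoserver_base
recipe = plone.recipe.zeoserver
zeo-address = 127.0.0.1:8100

[client1]
<= client_base
recipe = plone.recipe.zope2instance
zeo-address = ${zeoserver:zeo-address}
http-address = 8080
zeo-client-cache-size = 128MB
zserver-threads = 8
zodb-cache-size = 200000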

Joni Orponen

Sep 26, 2018, 10:13:05 AM
to zodb
The estimates it produces are even more off than ballparking it via object count. This could deserve a closer look, or do we differ on the semantics of 'broken'? Or is there a known version from which this behaves better?

It is difficult to provision resources for a database whose cache sizes are unruly.

--
Joni Orponen

Vlad Oles

Dec 14, 2018, 11:54:24 PM
to zodb
On Wednesday, July 19, 2017 at 9:54:05 AM UTC-7, Jim Fulton wrote:

One thing to understand is that ZODB doesn't limit memory usage. It ghostifies objects to meet memory settings at certain times, like transaction boundaries, and when asked to explicitly. So a transaction that loads a lot of objects can grow RAM usage without bound. This is why applications that load lots of objects should explicitly reduce cache size by calling _p_jar.cacheGC() occasionally.

Does this mean that after a database is finalized (no more writing, and thus no more transactions), there is no way to bound its RAM cache size other than manually resetting the cache (via cacheGC() or cacheMinimize()) after every N reads?

(I've set a bounty on a Stack Overflow question about the same, if anybody is interested: https://stackoverflow.com/questions/53605884/zodb-ignores-target-cached-object-count-and-target-cache-memory-size)

Jim Fulton

Dec 15, 2018, 11:30:16 AM
to vladysl...@gmail.com, zo...@googlegroups.com
On Fri, Dec 14, 2018 at 9:54 PM Vlad Oles <vladysl...@gmail.com> wrote:
On Wednesday, July 19, 2017 at 9:54:05 AM UTC-7, Jim Fulton wrote:

One thing to understand is that ZODB doesn't limit memory usage. It ghostifies objects to meet memory settings at certain times, like transaction boundaries, and when asked to explicitly. So a transaction that loads a lot of objects can grow RAM usage without bound. This is why applications that load lots of objects should explicitly reduce cache size by calling _p_jar.cacheGC() occasionally.

Does this mean that after a database is finalized (no more writing, and thus no more transactions), there is no way to bound its RAM cache size other than manually resetting the cache (via cacheGC() or cacheMinimize()) after every N reads?

Define finalized.

When a transaction is committed or a connection closed, the connection's cache is reduced to its target size. When a database is closed, the connections and their memory are freed. (If there are cycles, the Python garbage collector may take some time to deal with them.)
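As a minimal sketch of those boundaries (using a throwaway in-memory storage):

import transaction
import ZODB

db = ZODB.DB(None)    # None gives an in-memory demo storage
conn = db.open()
conn.root()['x'] = 1
transaction.commit()  # boundary: the connection's cache is trimmed
                      # to its target size here
conn.close()          # cache trimmed again; connection returns to the pool
db.close()            # connections and their memory are freed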

Jim

Vlad Oles

Dec 16, 2018, 7:11:38 PM
to zodb
By "finalized" I meant the connection is still open but no more transactions are being commited, so the database is used only for reads.

I think I can confirm my initial assumption from your answer, though: the target cache size configuration is not going to be applied automatically in this scenario, and I'll have to free the database cache manually by calling cacheMinimize().

Jim Fulton

Dec 17, 2018, 9:09:45 AM
to Vlad Oles, zodb


On Sun, Dec 16, 2018, 5:11 PM Vlad Oles <vladysl...@gmail.com> wrote:
On Saturday, December 15, 2018 at 8:30:16 AM UTC-8, Jim Fulton wrote:


On Fri, Dec 14, 2018 at 9:54 PM Vlad Oles <vladysl...@gmail.com> wrote:
On Wednesday, July 19, 2017 at 9:54:05 AM UTC-7, Jim Fulton wrote:

One thing to understand is that ZODB doesn't limit memory usage. It ghostifies objects to meet memory settings at certain times, like transaction boundaries, and when asked to explicitly. So a transaction that loads a lot of objects can grow RAM usage without bound. This is why applications that load lots of objects should explicitly reduce cache size by calling _p_jar.cacheGC() occasionally.

Does this mean that after a database is finalized (no more writing, and thus no more transactions), there is no way to bound its RAM cache size other than manually resetting the cache (via cacheGC() or cacheMinimize()) after every N reads?

Define finalized.

When a transaction is committed or a connection closed, the connection's cache is reduced to its target size. When a database is closed, the connections and their memory are freed. (If there are cycles, the Python garbage collector may take some time to deal with them.)

By "finalized" I meant the connection is still open but no more transactions are being commited, so the database is used only for reads.

Reads are executed in transactions. In fact, most transactions are read transactions.



I think I can confirm my initial assumption from your answer, though: the target cache size configuration is not going to be applied automatically in this scenario, and I'll have to free the database cache manually by calling cacheMinimize().

Cache management is applied at transaction boundaries. You can and generally should commit periodically when reading. If you don't, you won't see updates from other users or processes.

Historically, cacheMinimize is used for large write transactions that also read a lot of data.
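A minimal sketch of committing periodically in a read-only loop (db, keys, and the root layout are invented):

import transaction

conn = db.open()
data = conn.root()['data']
for i, key in enumerate(keys):
    obj = data[key]  # read-only access to persistent objects
    if i % 10000 == 0:
        # each commit is a transaction boundary: the cache is trimmed
        # to its target size and invalidations from other connections
        # are applied
        transaction.commit()
conn.close()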

Jim

Vlad Oles

Dec 17, 2018, 9:57:58 AM
to zodb
On Monday, December 17, 2018 at 6:09:45 AM UTC-8, Jim Fulton wrote:

By "finalized" I meant the connection is still open but no more transactions are being commited, so the database is used only for reads.

Reads are executed in transactions. In fact, most transactions are read transactions.

Good to know, thanks.
 


I think I can confirm my initial assumption from your answer, though: the target cache size configuration is not going to be applied automatically in this scenario, and I'll have to free the database cache manually by calling cacheMinimize().

Cache management is applied at transaction boundaries. You can and generally should commit periodically when reading. If you don't, you won't see updates from other users or processes.

I know for a fact that no other user or process writes to (or reads from, for that matter) the database. Would you still suggest committing when reading, as opposed to calling cacheMinimize() instead?

Christopher Lozinski

Jan 16, 2019, 5:33:38 AM
to zodb
I found this talk quite interesting.  

Pickle is a compact serialization protocol for Python objects. Great for communication between #distributed #Python programs, but it is not safe. What can be done?

Warm Regards
Chris

Christopher Lozinski

Jan 22, 2019, 1:16:49 PM
to zodb
Jim Fulton gave a very interesting talk at the Plone Python conference.

https://www.youtube.com/watch?v=ovKEz5uWSBw

In particular, he spoke in praise of Gevent; scroll to 19:32.

I think I agree with him.

He also mentioned that the maintainer of Gevent is also the primary maintainer of
the ZODB.

I am curious to hear people’s thoughts about Gevent and ZODB.

Are there any gevent/ZODB libraries I should be aware of?

Is anyone using them together?

Warm Regards
Christopher Lozinski

https://PythonLinks.info
tel: +48 12 361 3136
Skype: clozinski

Juergen Herrmann

Jan 23, 2019, 5:00:18 PM
to Christopher Lozinski, zodb
What would be necessary to limit unpickling in ZODB so that only specific, whitelisted classes can be loaded?

I chose ZODB as the file storage format for my speaker crossover program and wrote a very thin wrapper around ZODB which puts a FileStorage .fs into the tmp dir, opens it, and copies it back when saving. It works great, but now that I hear that pickles are inherently unsafe, I wonder how I can mitigate this.
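For reference, the copy-to-tmp approach described above might look roughly like this (a hypothetical sketch, not Juergen's actual wrapper; all names invented):

import os
import shutil
import tempfile

from ZODB import DB
from ZODB.FileStorage import FileStorage

class DocumentDB:
    # hypothetical sketch: work on a private copy of the .fs file
    def __init__(self, path):
        self.path = path
        self.workfile = os.path.join(tempfile.mkdtemp(), 'work.fs')
        shutil.copyfile(path, self.workfile)
        self.db = DB(FileStorage(self.workfile))

    def save(self):
        # close the database so the .fs is consistent on disk,
        # then copy it back over the original document
        self.db.close()
        shutil.copyfile(self.workfile, self.path)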

Best regards,
Jürgen


Jim Fulton

Jan 24, 2019, 9:04:17 AM
to Juergen Herrmann, Christopher Lozinski, zodb
On Wed, Jan 23, 2019 at 3:00 PM 'Juergen Herrmann' via zodb <zo...@googlegroups.com> wrote:
What would be necessary to limit unpickling in ZODB so that only specific, whitelisted classes can be loaded?

This could be done with a storage wrapper. 

There is a pickle hook for getting global objects that can be used to limit what pickle is willing to call. For example, ZEO does this at the protocol level to avoid accepting all but a limited number of globals in the pickles in its wire protocol. Pickle is really way more powerful than ZEO needs; I wouldn't use it for that today, and ZEO recently got support for msgpack.
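A minimal sketch of that hook using the standard library's Unpickler.find_class (the whitelist here is made up):

import io
import pickle

# example whitelist; a real application would list the classes
# its pickles are expected to reference
ALLOWED_GLOBALS = {('collections', 'OrderedDict')}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # pickle calls this hook for every global (class or function)
        # referenced by the pickle stream
        if (module, name) in ALLOWED_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            "forbidden global %s.%s" % (module, name))

def restricted_loads(data):
    return RestrictedUnpickler(io.BytesIO(data)).load()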
 
I chose ZODB as the file storage format for my speaker crossover program and wrote a very thin wrapper around ZODB which puts a FileStorage .fs into the tmp dir, opens it, and copies it back when saving. It works great, but now that I hear that pickles are inherently unsafe, I wonder how I can mitigate this.

They aren't inherently unsafe. By default, you shouldn't accept pickles from untrusted sources. Are you doing that? If not, you have nothing to worry about.

Jim



 

Best regards,
Jürgen

On Wed, Jan 16, 2019 at 11:33 AM Christopher Lozinski <lozi...@specialtyjobmarkets.com> wrote:
I found this talk quite interesting.  

Pickle is a compact serialization protocol for Python objects. Great for communication between #distributed #Python programs, but it is not safe. What can be done?

Warm Regards
Chris


Juergen Herrmann

Jan 24, 2019, 1:09:22 PM
to Jim Fulton, zodb
I use ZODB as the file format for an application that is used by end users. So telling them "don't open files from untrusted sources" seems a bit weird :)

I just looked at the ZEO code and think I found the relevant code sections where unpickling is restricted. Could you please give me a hint where I should look in the ZODB code and what exactly I have to wrap to use a restricted Unpickler?

Best regards,
Jürgen

Sean Upton

Jan 24, 2019, 2:31:29 PM
to Juergen Herrmann, Jim Fulton, zodb

On Jan 24, 2019, at 11:09 AM, 'Juergen Herrmann' via zodb <zo...@googlegroups.com> wrote:

I use ZODB as the file format for an application that is used by end users. So telling them "don't open files from untrusted sources" seems a bit weird :)

Your ZEO isn't open to the public on a TCP port, only your application on machines you control, no?  That makes the "injection" security question an application concern, not a ZODB or pickle problem?

If you open your ZEO (or RelStorage) to Python applications running on machines you don't control, you are doing something ZODB was not designed to directly address.

Sean

Juergen Herrmann

Jan 26, 2019, 10:10:12 AM
to Sean Upton, Jim Fulton, zodb
My application is a GUI app that uses ZODB FileStorage files as its storage format, and these files can be sent to other people who expect to be able to open them without risk of their machine getting taken over. Makes sense?

I think I solved the problem by overriding classFactory() in ZODB.DB.DB:

class XoverDB(DB):
    """
    derived from ZODB.DB.DB - overrides classFactory() to only allow
    import of very specific classes
    """
    def classFactory(self, connection, modulename, globalname):
        if (modulename, globalname) not in ALLOWED_MODULE_GLOBALS:
            raise TypeError("Not allowed to import global %r from module %r"
                            % (globalname, modulename))
        return super().classFactory(connection, modulename, globalname)

ALLOWED_MODULE_GLOBALS is a list of (modulename, globalname) tuples.
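For illustration, opening a database through it might then look like this (file name invented):

from ZODB.FileStorage import FileStorage

db = XoverDB(FileStorage('crossover.fs'))
conn = db.open()
root = conn.root()
# any attempt to load an object whose class is not listed in
# ALLOWED_MODULE_GLOBALS now raises TypeError instead of
# importing arbitrary code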

Did I miss anything?

Best regards,
Jürgen Herrmann

Jim Fulton

Jan 26, 2019, 1:00:36 PM
to Juergen Herrmann, Sean Upton, Jim Fulton, zodb
On Sat, Jan 26, 2019 at 8:10 AM 'Juergen Herrmann' via zodb <zo...@googlegroups.com> wrote:
My application is a GUI app that uses ZODB FileStorage files as its storage format, and these files can be sent to other people who expect to be able to open them without risk of their machine getting taken over. Makes sense?

Yup. Interesting.
 

I think I solved the problem by overriding classFactory() in ZODB.DB.DB:

class XoverDB(DB):
    """
    derived from ZODB.DB.DB - overrides classFactory() to only allow
    import of very specific classes
    """
    def classFactory(self, connection, modulename, globalname):
        if (modulename, globalname) not in ALLOWED_MODULE_GLOBALS:
            raise TypeError("Not allowed to import global %r from module %r"
                            % (globalname, modulename))
        return super().classFactory(connection, modulename, globalname)

ALLOWED_MODULE_GLOBALS is a list of (modulename, globalname) tuples.

Did I miss anything?

That seems reasonable to me. If you're feeling especially paranoid, perhaps you want some data validation logic in your application's __setstate__ methods (which you're probably not defining now) to protect against some of the other attacks mentioned in the subject video. There are lots of schema libraries around that might help.
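A minimal sketch of that idea (the class and field are invented):

from persistent import Persistent

class Filter(Persistent):
    # hypothetical application class
    def __setstate__(self, state):
        # sanity-check the pickled state before accepting it
        if not isinstance(state.get('cutoff_hz'), (int, float)):
            raise ValueError("unexpected state for Filter: %r" % (state,))
        super().__setstate__(state)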

Jim
 

Best regards,
Jürgen Herrmann

On Thu, Jan 24, 2019 at 8:31 PM Sean Upton <sdu...@gmail.com> wrote:

On Jan 24, 2019, at 11:09 AM, 'Juergen Herrmann' via zodb <zo...@googlegroups.com> wrote:

I use ZODB as the file format for an application that is used by end users. So telling them "don't open files from untrusted sources" seems a bit weird :)

Your ZEO isn't open to the public on a TCP port, only your application on machines you control, no?  That makes the "injection" security question an application concern, not a ZODB or pickle problem?

If you open your ZEO (or RelStorage) to Python applications running on machines you don't control, you are doing something ZODB was not designed to directly address.

Sean


Jim Fulton

Jan 26, 2019, 1:11:38 PM
to Christopher Lozinski, zodb
On Wed, Jan 16, 2019 at 3:33 AM Christopher Lozinski <lozi...@specialtyjobmarkets.com> wrote:
I found this talk quite interesting.  

I finally found some time to watch this. Me too.
 

At 9 minutes: Pickle is a compact serialization protocol for Python objects. Great for communication between #distributed #Python programs, but it is not safe.

What can be done?

The original, and I would say dominant, use case for pickle, as its name implies, is saving things.

It wouldn't be among my first choices for implementing network protocols (OK, anymore, but give me a break, I implemented ZEO many years ago).

I take issue with the assertion that it shouldn't be used for long-term storage. (Leaving aside the Py2/3 fiasco.)

There is coupling between stored data and application code, but this is true of any database. Applications that use databases depend on the database schema and will break if the database schema changes. The danger is a bit greater with anything like ZODB (or Mongo, or ...) that doesn't enforce schemas on servers, but changes in schemas over time are an issue regardless of the serialization format or database technology used.

(BTW, IMO, there are no schemaless databases. There's always a schema for application data, even if it's only expressed in application code.)

Jim

Juergen Herrmann

Jan 27, 2019, 3:37:57 PM
to Jim Fulton, zodb
On Sat, Jan 26, 2019 at 7:00 PM Jim Fulton <j...@jimfulton.info> wrote:

On Sat, Jan 26, 2019 at 8:10 AM 'Juergen Herrmann' via zodb <zo...@googlegroups.com> wrote:
My application is a GUI app that uses ZODB FileStorage files as its storage format, and these files can be sent to other people who expect to be able to open them without risk of their machine getting taken over. Makes sense?

Yup. Interesting.

In case anybody is interested, I pasted a stripped-down version of my FileStorage wrapper class here:

I found the whole process of working with ZODB as an application file format simply wonderful and very straightforward!
 
 

I think I solved the problem by overriding classFactory() in ZODB.DB.DB:

class XoverDB(DB):
    """
    derived from ZODB.DB.DB - overrides classFactory() to only allow
    import of very specific classes
    """
    def classFactory(self, connection, modulename, globalname):
        if (modulename, globalname) not in ALLOWED_MODULE_GLOBALS:
            raise TypeError("Not allowed to import global %r from module %r"
                            % (globalname, modulename))
        return super().classFactory(connection, modulename, globalname)

ALLOWED_MODULE_GLOBALS is a list of (modulename, globalname) tuples.

Did I miss anything?

That seems reasonable to me. If you're feeling especially paranoid, perhaps you want some data validation logic in your application's __setstate__ methods (which you're probably not defining now) to protect against some of the other attacks mentioned in the subject video. There are lots of schema libraries around that might help.

I guess you're talking about the "DoS"-type attacks, like nested arrays and such? Actually, I think I will ignore these, as the only "damage" this might cause is one hanging process in a desktop environment, which should be easy for the user to kill...

Best regards,
Jürgen 

tk

Jan 27, 2019, 6:12:05 PM
to Christopher Lozinski, zodb


On 22/01/2019 at 1:16 PM, Christopher Lozinski wrote:

> I am curious to hear people’s thoughts about Gevent and ZODB.
>
> Are there any gevent/ZODB libraries I should be aware of?
>
> Is anyone using them together?

Yes... gevent + ZODB works very well with CPython. But now I prefer using only ZODB with PyPy... :-)

kind regards,

tk


> Warm Regards
> Christopher Lozinski
>
> https://PythonLinks.info
> tel: +48 12 361 3136
> Skype: clozinski
>

--
tka...@yandex.com | Twitter: @wise_project
https://www.isotoperesearch.ca/