Memory management & Garbage Collection - pickle/cPickle
Gary Pennington - UK Performance Centre, Gary.Pennington at uk.sun.com, Thu Jul 8 12:47:11 EDT 1999
- Previous message (by thread): Any *disadvantage* to using xrange() instead of range()?
- Next message (by thread): Memory management & Garbage Collection - pickle/cPickle
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Hi,
I've written some server-side code which makes extensive use of shelve
(and, indirectly, pickle). I've noticed that over time the amount of heap
used by the server grows considerably. Investigation showed that objects
created from pickle or cPickle cause memory to be allocated that is never
freed.
The following test program soon uses up all the memory on my machine:

    import sys
    sys.path.insert(0, "/home/garyp/dev/webJudge/dev/cgi")
    import shelve

    d = shelve.open("gameDB")
    print d.keys()

    def getObj():
        return d["remote1"]

    for i in range(0, 100000):
        x = getObj()
        del x
        x = None
gameDB contains some previously pickled objects, about 30k in size.
Running the above and using truss, I can see memory being allocated (lots
of brk calls) as the data is continuously re-read from the file.
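For anyone reproducing this on a modern Python, here is a minimal sketch of how to check whether the Python-level allocations from the loop are actually retained, using tracemalloc (Python 3.4+). An in-memory pickle blob of roughly the right size stands in for the gameDB record, since the file isn't available:

    import pickle
    import tracemalloc

    # Hypothetical stand-in for a ~30k pickled gameDB record.
    payload = pickle.dumps({"key%d" % i: "x" * 100 for i in range(300)})

    tracemalloc.start()
    for i in range(1000):
        x = pickle.loads(payload)  # repeatedly unpickle, as getObj() does
        del x
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    # If the unpickled objects are freed each iteration, 'current' stays
    # small after the loop, while 'peak' reflects one live unpickled copy.
    print(current, peak)

If 'current' grows with the iteration count instead, the objects really are being retained at the Python level rather than this being allocator behaviour (brk'd heap not being returned to the OS).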
The del calls and x = None seem to make no difference. Also, it makes no
difference whether I go through shelve or access the data files directly.
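One possibility worth ruling out: reference counting alone never reclaims objects that contain reference cycles, and Python 1.5 has no cyclic collector (one was added in 2.0). If the pickled game objects refer back to each other, del would indeed make no difference. A sketch in modern Python, where gc.collect() does reclaim such cycles; the self-referential list is a hypothetical stand-in for a pickled object graph containing a cycle:

    import gc
    import pickle

    # A self-referential list: a minimal object graph with a reference cycle.
    cycle = []
    cycle.append(cycle)
    payload = pickle.dumps(cycle)

    gc.collect()
    for i in range(1000):
        x = pickle.loads(payload)  # each load recreates the cycle
        del x                      # refcount never hits zero: the cycle keeps it alive
    collected = gc.collect()       # the cyclic collector (absent in 1.5) frees them
    print(collected)               # roughly one unreachable object per iteration

On a refcounting-only interpreter such as 1.5, every one of those 1000 cycles would stay allocated forever, which matches the brk pattern described above.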
My system is using dumbdbm.py; it's Solaris 2.6.
Any suggestions as to what the problem is?
Gary Pennington