As you discovered, the flexibility of storing labels and other
information with the graph nodes comes with some memory cost.
If the database has a standard interface there is probably a Python
module that can read the data directly from it. That would remove
the step of dumping the data to intermediate files and let you load
and update the graph directly from the database, which should be
faster.
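As a rough sketch of what that looks like, here is how edges could be
streamed from a SQL database straight into a NetworkX graph. The table
and column names ("edges", "source", "target") are hypothetical, and I
use an in-memory SQLite database just so the example is self-contained;
you would connect to your real database instead.

```python
import sqlite3
import networkx as nx

# Hypothetical schema: an "edges" table with source/target columns.
# An in-memory SQLite database stands in for the real one here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (source TEXT, target TEXT)")
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [("a", "b"), ("b", "c"), ("a", "c")])

# Stream the rows straight into the graph -- no intermediate file.
G = nx.Graph()
G.add_edges_from(conn.execute("SELECT source, target FROM edges"))

print(G.number_of_nodes(), G.number_of_edges())  # 3 3
```

Because add_edges_from accepts any iterable of 2-tuples, the database
cursor can be passed in directly without materializing the whole edge
list first.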
But I think the main problem you are having is running out of memory
when you load the graph. If you are swapping to disk, loading will be
very slow. On a machine with more memory it should be possible: for
example, I just loaded a multigraph with 5k nodes and 4M edges on my
laptop; it took about one minute to read from a text file of edges and
used about 2GB of memory. So you can roughly guess that you'll need
about 10 times that for your graph, depending on exactly what you are
storing in it.
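For reference, the loading step in my test looked roughly like this
(the timing and memory figures above came from a much larger file, of
course; the tiny file written here just demonstrates the format):

```python
import networkx as nx

# Write a tiny edge file in the "one edge per line" format;
# the real file would have millions of lines.
with open("edges.txt", "w") as f:
    f.write("1 2\n1 3\n2 3\n2 3\n")  # duplicate line -> parallel edge

# A MultiGraph keeps the parallel 2-3 edge instead of collapsing it.
G = nx.read_edgelist("edges.txt", create_using=nx.MultiGraph)

print(G.number_of_nodes(), G.number_of_edges())  # 3 4
```

Most of the memory goes to the per-node and per-edge dictionaries, so
the footprint depends heavily on how much attribute data you attach.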
We have explored using on-disk storage for graphs that are too big to
fit into memory. Obviously there would be some performance penalty
when accessing the data, but if you can't fit it in memory it might be
the best solution. Currently there is no code in NetworkX for that,
but I did make a demo implementation using Python's shelve module to
store the data (see https://networkx.lanl.gov/trac/ticket/224 and the
linked email discussion there). That code probably won't work with the
latest version of NetworkX, but it wouldn't be hard to update.
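To illustrate the idea (this is not the ticket's code, just a minimal
sketch): each node's adjacency dict lives in a shelve file on disk
instead of an in-memory dict, so only the neighborhoods you touch are
loaded. The DiskAdjacency class and its methods are invented for this
example.

```python
import os
import shelve
import tempfile

class DiskAdjacency:
    """Toy undirected adjacency structure backed by a shelve file."""

    def __init__(self, path):
        self.db = shelve.open(path)

    def add_edge(self, u, v):
        # Store the edge in both directions, NetworkX-style.
        for a, b in ((u, v), (v, u)):
            nbrs = self.db.get(str(a), {})
            nbrs[str(b)] = {}          # edge-data dict, as in NetworkX
            self.db[str(a)] = nbrs     # write back: shelve does not
                                       # see in-place mutation

    def neighbors(self, u):
        return list(self.db.get(str(u), {}))

path = os.path.join(tempfile.mkdtemp(), "graph.db")
G = DiskAdjacency(path)
G.add_edge("a", "b")
G.add_edge("a", "c")
print(sorted(G.neighbors("a")))  # ['b', 'c']
```

The write-back on every edge insertion is what makes this slower than
the in-memory dict-of-dicts, which is the performance penalty mentioned
above.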
Aric