It seems to me that copying a sparse matrix of this size could be done WAY faster. A special-purpose copying routine for this class of sparse matrices would certainly do it, but it might be worth checking whether the generic code can already be sped up.
The basis for my claim is the amount of data involved. It is a 1430x1430 sparse matrix with entries from {0,1} (though represented as a matrix over the integers), with on average 83 nonzero entries per row. That gives a fill ratio of just under 6% (83/1430): not super-sparse, but it definitely deserves the name.
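A rough back-of-the-envelope estimate of that data volume (my own arithmetic, counting two 64-bit indices plus one machine word per nonzero entry; the actual internal representation will differ):

sage: nnz = 1430 * 83     # roughly 119,000 nonzero entries
sage: nnz * (2*8 + 8)     # assumed 24 bytes per entry: about 2.8 MB in total
2848560

So the actual content is only a few megabytes of data.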
One observation: creating a dense matrix out of it is already faster than copying the sparse one, and copying that dense matrix is WAY faster:
sage: %time Lc=copy(L)
CPU times: user 1.36 s, sys: 7.55 ms, total: 1.37 s
Wall time: 1.37 s
sage: %time M=L.dense_matrix()
CPU times: user 420 ms, sys: 5.9 ms, total: 426 ms
Wall time: 427 ms
sage: %time Mc=copy(M)
CPU times: user 14.5 ms, sys: 0 ns, total: 14.5 ms
Wall time: 14.7 ms
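For anyone who wants to reproduce this without my particular matrix, here is a sketch of a stand-in with the same dimensions and roughly the same density (the column pattern is arbitrary and not my actual matrix, but it should exercise the same generic copying code):

sage: d = {(i, (17*i + 5*j) % 1430): 1 for i in range(1430) for j in range(83)}
sage: L2 = matrix(ZZ, 1430, 1430, d, sparse=True)   # ~83 nonzero entries per row
sage: # now compare: %time copy(L2)  versus  %time copy(L2.dense_matrix())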
Creating a similar amount of data in a non-optimized way is also much faster. The following builds a location-value list representation of that many entries, and then the same data as a dict:
sage: %time v=[(i,j,1) for i in [0..1429] for j in [0..82]]
CPU times: user 96.9 ms, sys: 13.2 ms, total: 110 ms
Wall time: 105 ms
sage: %time v={(i,j):1 for i in [0..1429] for j in [0..82]}
CPU times: user 125 ms, sys: 25.6 ms, total: 150 ms
Wall time: 132 ms
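For data taken from the matrix itself rather than synthetic positions: L.dict() gives exactly this kind of {(i, j): value} dictionary of the nonzero entries, and duplicating that plain Python dict is cheap (untimed here, just to make the comparison concrete):

sage: d = L.dict()    # {(i, j): value} for the nonzero entries of L, about 1430*83 of them
sage: dc = dict(d)    # a plain Python copy of the same data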
Based on this, I'm convinced a factor-10 speed-up is easily attainable; judging from the copy of the dense matrix, possibly even a factor of 100.
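As a possible user-side stopgap (just a sketch, not a fix for the generic code; I have not benchmarked it against copy(L), and it may not preserve things like subdivisions or immutability):

sage: Lc2 = matrix(L.base_ring(), L.nrows(), L.ncols(), L.dict(), sparse=True)
sage: Lc2 == L    # rebuilds an equal matrix from the dictionary of nonzeros
True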