I wonder if the TortoiseSVN developers would consider an enhancement request. It concerns the way the network is handled. There are two issues, both to do with the behaviour of TortoiseSVN in the presence of a badly performing corporate network.
* When the connection drops, up to a couple of automatic retries would be nice. At the moment the operation bombs out and can leave the working copy in a state where one has to run 'svn cleanup'. I am working with a very unreliable and poorly performing corporate network and have seen this problem many times. A full SVN checkout has to be resumed several times before the complete repo can be retrieved.
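To illustrate the kind of retry behaviour I mean, here is a minimal sketch in Python. The names (`with_retries`, `operation`) are purely illustrative, not anything from TortoiseSVN or the svn libraries; the idea is just to re-attempt a failed network operation a couple of times before giving up:

```python
import time

def with_retries(operation, attempts=3, delay=1.0):
    """Run a zero-argument callable, retrying on failure.

    Illustrative sketch only: `operation` stands in for a single
    network request; on an OSError-style failure we wait `delay`
    seconds and try again, up to `attempts` times in total.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return operation()
        except OSError as err:  # network-style failures
            last_error = err
            if attempt < attempts - 1:
                time.sleep(delay)
    # All attempts failed; surface the last error to the caller.
    raise last_error
```

A client built this way would survive a brief connection drop transparently instead of aborting and requiring a cleanup.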
* When a large retrieve is going on, e.g. getting a folder that consists of many sub-folders and files, the retrieval is very slow. This is not the fault of TortoiseSVN; I think it is the fault of the way the network has been set up. But TortoiseSVN could mitigate this by fetching files in parallel. Not a trivial implementation, I know, but it would be very helpful. I recently tackled a similar problem in a Python script of mine which was fetching files via SFTP. Serially it took over 20 minutes to fetch the files one at a time, but with parallelisation I got this down to 9 minutes.
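The parallel-fetch approach I used in that script boils down to something like the following sketch. `fetch_one` is a placeholder for whatever retrieves a single file (an SFTP get in my case); nothing here is TortoiseSVN code:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(paths, fetch_one, workers=8):
    """Fetch many files concurrently instead of one at a time.

    Illustrative sketch: a pool of worker threads each runs
    `fetch_one` on a path, so slow round-trips overlap rather
    than being paid serially. Results come back in input order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch_one, paths))
```

Because the per-file time on a bad network is dominated by latency rather than bandwidth, overlapping the round-trips like this is where the 20-minutes-down-to-9 improvement came from.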
I realise it might seem like a lot to ask to get TortoiseSVN to change to cope with a network that has been set up badly. After all, that's not TortoiseSVN's fault! But bad networks are a fact of life. Furthermore, sometimes the network is not actually configured badly but for various reasons has an unreliable connection. Perhaps it's operating over wifi with a weak signal. A more robust client would be very welcome in these situations. One way would be to ask the svn developers to address it, but I'm not sure how they would take that, and the pace of development there seems to be quite slow. I note that TortoiseSVN development seems to be in a much healthier state, e.g. the new release of version 1.10.
Regards,
Andrew Marlow