On Tue, Mar 22, 2022 at 8:13 PM Nition via TortoiseSVN
<torto...@googlegroups.com> wrote:
>
> The former, an SVN repository. My test process is:
> - Create a new repository on the USB drive
> - Check out the empty repository onto an SSD
> - Add all the files (3.68GB)
> - Commit all that as the initial commit (takes 16 hours)
>
> I realise committing gigabytes at once is far from best practice. In this case I had a project which I'd worked on for a little while before putting it under version control.
Ah, wow! Okay, that makes this an entirely different issue than the
one you referred to on StackOverflow
(https://stackoverflow.com/questions/68847008/checkout-speed-toirtoise-svn-on-usb-drive),
which is about having a *working copy* on a USB drive. So, naturally,
my suggestion of using "exclusive locking" for the client on the
working copy would not help at all.
I think this pretty much means this is not the ideal mailing list for
this issue, as you are talking about a server-side / repository
performance problem (TortoiseSVN is an SVN *client*, offering a GUI
for the client side of things). If you're using the file:// protocol
to access the repository, then the client also acts as an internal
"svn server". But even in that case, the issue lies with the
server-side (repository) libraries contained in the client, and the
way the back-end files are organised inside the repository.
I suggest you repeat your question on the
us...@subversion.apache.org mailing list. See
https://subversion.apache.org/mailing-lists.html#users-ml for
instructions on subscribing (or not, if you prefer -- you don't have
to be subscribed to post, but in that case do mention that you want
to be cc'ed). Several people (including me) follow both lists. Also,
please mention which protocol you use to access the repository
(file:// ?), the precise version with which the repository was
created (the output of 'svnadmin --version' for the svnadmin you used
to run 'svnadmin create'), and the version of (Tortoise)SVN you're
using.
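In case it helps, here is a small sketch of how you could collect
those details from the command line (the checks for the tools being
on PATH are just there so the snippet degrades gracefully; adapt as
needed):

```shell
#!/bin/sh
# Sketch: gather the Subversion version details requested above.
# Run this on the machine where you ran 'svnadmin create'.
if command -v svnadmin >/dev/null 2>&1; then
    # Version of the server-side (repository) libraries
    svnadmin --version
fi
if command -v svn >/dev/null 2>&1; then
    # Command-line client version; TortoiseSVN reports its own
    # version in its Help -> About dialog
    svn --version
fi
```

Note that the TortoiseSVN GUI bundles its own copy of the Subversion
libraries, so its About dialog is the authoritative source for the
client version if you did everything through the GUI.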
I'm not sure whether you'll receive adequate suggestions there, but
in any case there are more "server / back-end oriented" people there
who might have experience with this sort of thing. Do keep in mind
that the repository back-end is not simply a filesystem into which
these files are copied; you have to view it as a *database*. An
incoming commit is built up as a transaction, with a lot of
operations happening on the fly to optimize the storage (for instance
"deltification", compression, "representation sharing") -- some of
these things can be tuned for certain workloads to, for instance,
gain speed at the cost of storage.
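Purely as an illustration (not a recommendation -- the right settings
depend on your workload, and which options exist depends on the
Subversion version that created the repository): for an FSFS
repository, some of these optimizations are controlled in the
repository's db/fsfs.conf file, e.g.:

```ini
### Excerpt of <repository>/db/fsfs.conf (illustrative only; see the
### comments in your own fsfs.conf for the authoritative list).

[rep-sharing]
### Reuse identical representations across revisions. Disabling this
### trades disk space for less lookup work during commits.
enable-rep-sharing = true

[deltification]
### Directory and property deltification (Subversion 1.8+ format).
### Disabling can speed up commits of many files at the cost of a
### larger repository on disk.
enable-dir-deltification = true
enable-props-deltification = true
```

The people on the users@ list can tell you which of these (if any)
are worth touching for a one-off multi-gigabyte initial commit.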
--
Johan