hg clone for large repository


karkrish

Jul 31, 2010, 11:42:17 AM
to merc...@selenic.com

Hi,

I tried to clone a repository of ~1.7 GB on Windows. It takes a long time
to clone:

destination directory: mytest
requesting all changes
adding changesets
adding manifests
adding file changes
added 2908 changesets with 23759 changes to 14424 files
updating to branch default
Time: real 2626.080 secs (user 160.734+0.000 sys 139.766+0.000)

Is there any way to clone a separate directory?
How can I speed up the clone process for such a repository?

Thanks,
Karthi



Tony Mechelynck

Aug 1, 2010, 12:59:12 AM
to karkrish, merc...@selenic.com
On 31/07/10 17:42, karkrish wrote:
> Is there any way to clone a separate directory?
> How can I speed up the clone process for such a repository?

If you have successfully cloned it once, further updates of that new
clone will be much faster.
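
For instance, inside the clone you already have ("mytest" is just the
directory name from your output), keeping it current is an incremental pull:

  cd mytest
  hg pull -u    # fetch only the new changesets and update the working copy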

If you want to clone it onto a third machine, produce a bundle and transfer
that instead; then most of the work of setting up the new clone can be done
locally on the machine where the new clone will sit.

See
http://benjamin.smedbergs.us/blog/2008-06-05/getting-mozilla-central-with-limited-bandwidth/
for a step-by-step procedure (for a big Mercurial repo, probably not the
one you want to clone, but you can get inspiration from it).
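
For reference, a minimal sketch of that bundle workflow; the repository
paths and the bundle file name below are made up for illustration:

  # on a machine that already has the repository
  cd /path/to/myrepo
  hg bundle --all /tmp/myrepo.hg

  # copy /tmp/myrepo.hg to the new machine (USB disk, scp, ...), then:
  hg init mytest
  cd mytest
  hg unbundle /tmp/myrepo.hg
  hg update

  # optionally point the new clone at the real server for future pulls,
  # e.g. in mytest/.hg/hgrc (URL is a placeholder):
  #   [paths]
  #   default = http://example.com/hg/myrepo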


Best regards,
Tony.
--
Law of Selective Gravity:
An object will fall so as to do the most damage.

Jenning's Corollary:
The chance of the bread falling with the buttered side down is
directly proportional to the cost of the carpet.

Matt Mackall

Aug 1, 2010, 10:05:47 AM
to karkrish, merc...@selenic.com
On Sat, 2010-07-31 at 08:42 -0700, karkrish wrote:
> I tried to clone a repository of ~1.7 GB on Windows. It takes a long time
> to clone:
>
> added 2908 changesets with 23759 changes to 14424 files
> Time: real 2626.080 secs (user 160.734+0.000 sys 139.766+0.000)

Note that most of that is time to send and receive data. Here you're
averaging 647 kB/s, aka 5.2 Mb/s, which isn't bad for over the internet.

If this is on a LAN, you might find the --uncompressed flag to clone
useful; it can go -much- faster.
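
For example (the server URL here is only a placeholder):

  hg clone --uncompressed http://hgserver.example.com/myrepo mytest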

--
Mathematics is the supreme nostalgia of our time.

Chad Dombrova

Aug 1, 2010, 4:22:13 PM
to mercurial
Is it possible and desirable to have hg detect that the clone destination is "local" (i.e. not ssh/http) and use --uncompressed by default?

-chad

Tony Mechelynck

Aug 1, 2010, 4:35:18 PM
to Chad Dombrova, mercurial

IIUC, if you clone a local repo, hg will detect it and by default use
hard links on systems (Unix, Linux, and IIUC WinNT and later) which
support them. That's even faster than --uncompressed.
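
For instance, cloning between two directories on the same drive (the paths
are only examples) will use hard links where the platform and file system
allow it:

  hg clone C:\repos\myrepo C:\repos\mytest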


Best regards,
Tony.
--
The truth is what is; what should be is a dirty lie.
-- Lenny Bruce

Chad Dombrova

Aug 1, 2010, 6:21:02 PM
to mercurial

> On 01/08/10 22:22, Chad Dombrova wrote:
>> Is it possible and desirable to have hg detect that the clone destination is "local" (i.e. not ssh/http) and use --uncompressed by default?
> IIUC, if you clone a local repo, hg will detect it and by default use hard links on systems (Unix, Linux, and IIUC WinNT and later) which support them. That's even faster than --uncompressed.

Yes, but hard links only work if source and destination are on the same file system. So in your terms "local" means "same file system", but as I used it, it means "locally mounted file system" (i.e. accessible by a path like /Volumes/server/whatever). While it's possible to use tools like FUSE to mount remote storage such as FTP, *usually* a local file path indicates something on a LAN or an internal or external hard drive, thumb drive, etc. So, 90% of the time[1], when a local path is used and it's not on the same file system, it could benefit from the --uncompressed flag. Perhaps in these cases clone should default to --uncompressed.

In other words, the current defaults are:

local path on same file system: hard links
all others: compressed

The new defaults would be:

local path on same file system: hard links
local path not on same file system: uncompressed
http: compressed
ssh: compressed

-chad


sources:
(1) dubiousfactoids.com
