I tried to clone a repository of about 1.7 GB on Windows. It takes a long
time to clone:
destination directory: mytest
requesting all changes
adding changesets
adding manifests
adding file changes
added 2908 changesets with 23759 changes to 14424 files
updating to branch default
Time: real 2626.080 secs (user 160.734+0.000 sys 139.766+0.000)
Is there any way to clone only a separate directory of the repository?
How can I speed up the clone process for such a repository?
Thanks,
Karthi
If you have successfully cloned it once, further updates of that new
clone will be much faster.
If you want to clone it onto a third machine, produce a bundle and
transfer that; then most of the work of setting up the new clone can be
done locally on the machine where the new clone will sit.
See
http://benjamin.smedbergs.us/blog/2008-06-05/getting-mozilla-central-with-limited-bandwidth/
for a step-by-step procedure (for a big Mercurial repo, probably not the
one you want to clone, but you can get inspiration from it).
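In outline, the same procedure for this repository would look something
like the following (a sketch; the bundle file name, paths and the
repository URL are placeholders for your own):

  # On a machine that already has a clone, bundle the full history:
  cd existing-clone
  hg bundle --all ../bigrepo.bundle

  # Transfer bigrepo.bundle to the new machine (USB disk, scp, ...),
  # then build the new clone from it locally:
  hg init mytest
  cd mytest
  hg unbundle /path/to/bigrepo.bundle
  hg update

  # Finally, pull anything committed since the bundle was made:
  hg pull http://example.com/bigrepo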
Best regards,
Tony.
--
Law of Selective Gravity:
An object will fall so as to do the most damage.
Jenning's Corollary:
The chance of the bread falling with the buttered side down is
directly proportional to the cost of the carpet.
Note that most of that is time to send and receive data. Here you're
averaging about 647 kB/s (roughly 1.7 GB over 2626 s), i.e. about
5.2 Mb/s, which isn't bad over the internet.
If this is on a LAN, you might find the --uncompressed flag to clone
useful; it can be -much- faster.
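For example, something like this (the hostname and paths are
hypothetical, and the server has to permit streaming clones):

  # Stream the repository data without compression; on a LAN the network
  # is rarely the bottleneck, so skipping compression saves CPU time.
  hg clone --uncompressed http://hgserver.example.com/bigrepo mytest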
--
Mathematics is the supreme nostalgia of our time.
-chad
IIUC, if you clone a local repo, hg will detect it and by default use
hard links on systems that support them (Unix, Linux, and IIUC Windows
NT and later). That's even faster than --uncompressed.
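For instance (paths are made up), when source and destination live on
the same file system:

  # hg hard-links the files under .hg/store instead of copying them:
  hg clone /srv/repos/bigrepo /srv/work/mytest

  # To force a real copy via the pull protocol instead of hard links:
  hg clone --pull /srv/repos/bigrepo /srv/work/mytest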
Best regards,
Tony.
--
The truth is what is; what should be is a dirty lie.
-- Lenny Bruce
Yes, but hard links only work if source and destination are on the same
file system. So in your terms "local" means "same file system", but as I
used it, it means "locally mounted file system" (i.e. accessible by a
path like /Volumes/server/whatever).

While it's possible to use tools like FUSE to mount remote storage such
as FTP, *usually* a local file path indicates something on a LAN or an
internal or external hard drive, thumb drive, etc. So, 90% of the
time[1], when a local path is used and it's not on the same file system,
it could benefit from the --uncompressed flag. Perhaps in these cases
clone should default to --uncompressed.
In other words, the current defaults are:
  local path on same file system: hard links
  all others: compressed

New defaults would be (sketched below):
  local path on same file system: hard links
  local path not on same file system: uncompressed
  http: compressed
  ssh: compressed
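As a rough sketch of that decision (paths are made up; comparing device
numbers is just a quick way to test "same file system"; `stat -c %d` is
the GNU form, BSD/macOS uses `stat -f %d`):

  SRC=/Volumes/server/whatever/bigrepo
  if [ "$(stat -c %d "$SRC")" = "$(stat -c %d .)" ]; then
      # Same file system: hard links are possible, a plain clone is fine.
      hg clone "$SRC" mytest
  else
      # Locally mounted but a different file system: no hard links, so
      # ask for the uncompressed path explicitly (the proposed default).
      hg clone --uncompressed "$SRC" mytest
  fi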
-chad
sources:
(1) dubiousfactoids.com