Maximum number of bytes to map simultaneously into memory from pack files. If Git needs to access more than this many bytes at once to complete an operation it will unmap existing regions to reclaim virtual address space within the process.
Default is 256 MiB on 32 bit platforms and 32 TiB (effectively unlimited) on 64 bit platforms. This should be reasonable for all users/operating systems, except on the largest projects. You probably do not need to adjust this value.
Common unit suffixes of k, m, or g are supported.
Maximum number of bytes per thread to reserve for caching base objects that may be referenced by multiple deltified objects. By storing the entire decompressed base objects in a cache Git is able to avoid unpacking and decompressing frequently used base objects multiple times.
Default is 96 MiB on all platforms. This should be reasonable for all users/operating systems, except on the largest projects. You probably do not need to adjust this value.
Common unit suffixes of k, m, or g are supported.
Files larger than this size are stored deflated, without attempting delta compression. Storing large files without delta compression avoids excessive memory usage, at the slight expense of increased disk usage. Additionally files larger than this size are always treated as binary.
Default is 512 MiB on all platforms. This should be reasonable for most projects as source code and other text files can still be delta compressed, but larger binary media files won’t be.
Common unit suffixes of k, m, or g are supported.
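These three limits appear to be the core.packedGitLimit, core.deltaBaseCacheLimit, and core.bigFileThreshold settings from git-config[1]; those option names are assumed here from the matching descriptions and defaults. A minimal sketch of tuning them for a repository with unusually large binary assets:

    # Assumed option names; the values are purely illustrative.
    git config core.packedGitLimit 512m       # cap mapped pack memory at 512 MiB
    git config core.deltaBaseCacheLimit 128m  # larger per-thread base-object cache
    git config core.bigFileThreshold 1g       # skip delta compression above 1 GiB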
Enable "sparse checkout" feature. See git-sparse-checkout[1] for more information.
core.sparseCheckoutCone
Enables the "cone mode" of the sparse checkout feature. When the sparse-checkout file contains a limited set of patterns, this mode provides significant performance advantages. See git-sparse-checkout[1] for more information.
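In practice you rarely set these two options by hand; on recent versions of Git, git-sparse-checkout[1] enables both for you. A sketch, with hypothetical directory names:

    # Enables core.sparseCheckout and core.sparseCheckoutCone, then limits
    # the working tree to the named directories (made-up paths).
    git sparse-checkout set --cone src docs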
Limit the width of the graph part in --stat output. If set, applies to all commands generating --stat output except format-patch.
The number of files to consider in the exhaustive portion of copy/rename detection; equivalent to the git diff option -l. If not set, the default value is currently 1000. This setting has no effect if rename detection is turned off.
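As a sketch, the limit can be raised either persistently or for a single invocation (the value and commit names are illustrative; the option name diff.renameLimit is assumed from git-config[1]):

    git config diff.renameLimit 5000     # persistent setting
    git diff -M -l5000 commitA commitB   # one-off override via the -l option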
If the number of objects imported by git-fast-import[1] is below this limit, then the objects will be unpacked into loose object files. However if the number of imported objects equals or exceeds this limit then the pack will be stored as a pack. Storing the pack from a fast-import can make the import operation complete faster, especially on slow filesystems. If not set, the value of transfer.unpackLimit is used instead.
When there are approximately more than this many loose objects in the repository, git gc --auto will pack them. Some Porcelain commands use this command to perform a light-weight garbage collection from time to time. The default value is 6700.
Setting this to 0 disables not only automatic packing based on the number of loose objects, but any other heuristic git gc --auto will otherwise use to determine if there’s work to do, such as gc.autoPackLimit.
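For example (a sketch; the threshold is illustrative):

    git config gc.auto 0      # disable every `git gc --auto` heuristic
    git config gc.auto 5000   # or: pack once roughly 5000 loose objects pile up
    git gc --auto             # the light-weight collection porcelain commands run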
When there are more than this many packs that are not marked with a *.keep file in the repository, git gc --auto consolidates them into one larger pack. The default value is 50. Setting this to 0 disables it. Setting gc.auto to 0 will also disable this.
See the gc.bigPackThreshold configuration variable below. When in use, it’ll affect how the auto pack limit works.
If non-zero, all packs larger than this limit are kept when git gc is run. This is very similar to --keep-largest-pack except that all packs that meet the threshold are kept, not just the largest pack. Defaults to zero. Common unit suffixes of k, m, or g are supported.
Note that if the number of kept packs is more than gc.autoPackLimit, this configuration variable is ignored; all packs except the base pack will be repacked. After this the number of packs should go below gc.autoPackLimit and gc.bigPackThreshold should be respected again.
If the amount of memory estimated for git repack to run smoothly is not available and gc.bigPackThreshold is not set, the largest pack will also be excluded (this is the equivalent of running git gc with --keep-largest-pack).
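A sketch of how these knobs might be combined on a large repository (assumed option names gc.bigPackThreshold and gc.autoPackLimit; values illustrative):

    git config gc.bigPackThreshold 2g  # leave packs over 2 GiB alone during gc
    git config gc.autoPackLimit 50     # the default; interacts with the rule above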
Maximum size in bytes of the buffer used by smart HTTP transports when POSTing data to the remote system. For requests larger than this buffer size, HTTP/1.1 and Transfer-Encoding: chunked is used to avoid creating a massive pack file locally. Default is 1 MiB, which is sufficient for most requests.
Note that raising this limit is only effective for disabling chunked transfer encoding and therefore should be used only where the remote server or a proxy only supports HTTP/1.0 or is noncompliant with the HTTP standard. Raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes.
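If you really are stuck behind an HTTP/1.0-only server or a noncompliant proxy, the buffer can be raised with the memory caveat above in mind (a sketch; the value is illustrative, and the option name http.postBuffer is assumed):

    # 500 MiB; note the whole buffer is allocated even for tiny pushes.
    git config http.postBuffer 524288000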
If the HTTP transfer speed is less than http.lowSpeedLimit for longer than http.lowSpeedTime seconds, the transfer is aborted. Can be overridden by the GIT_HTTP_LOW_SPEED_LIMIT and GIT_HTTP_LOW_SPEED_TIME environment variables.
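For example (illustrative thresholds):

    # Abort transfers that stay below 1000 bytes/s for 60 seconds.
    git config http.lowSpeedLimit 1000
    git config http.lowSpeedTime 60
    # Or for a single command, via the environment:
    GIT_HTTP_LOW_SPEED_LIMIT=1000 GIT_HTTP_LOW_SPEED_TIME=60 git fetch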
The number of files to consider in the exhaustive portion of rename detection during a merge. If not specified, defaults to the value of diff.renameLimit. If neither merge.renameLimit nor diff.renameLimit is specified, currently defaults to 7000. This setting has no effect if rename detection is turned off.
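A sketch of overriding the fallback chain described above (illustrative value):

    git config merge.renameLimit 20000  # merges only; diffs still use diff.renameLimit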
The maximum size of memory that is consumed by each thread in git-pack-objects[1] for pack window memory when no limit is given on the command line. The value can be suffixed with "k", "m", or "g". When left unconfigured (or set explicitly to 0), there will be no limit.
The maximum memory in bytes used for caching deltas in git-pack-objects[1] before writing them out to a pack. This cache is used to speed up the writing object phase by not having to recompute the final delta result once the best match for all objects is found. Repacking large repositories on machines which are tight with memory might be badly impacted by this though, especially if this cache pushes the system into swapping. A value of 0 means no limit. The smallest size of 1 byte may be used to virtually disable this cache. Defaults to 256 MiB.
The maximum size of a delta that is cached in git-pack-objects[1]. This cache is used to speed up the writing object phase by not having to recompute the final delta result once the best match for all objects is found. Defaults to 1000. Maximum value is 65535.
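These three settings appear to be pack.windowMemory, pack.deltaCacheSize, and pack.deltaCacheLimit (names assumed from git-config[1]); they are often tuned together when repacking on a memory-constrained machine, for instance:

    # Illustrative values for a low-memory repack.
    git config pack.windowMemory 256m     # per-thread delta-search window
    git config pack.deltaCacheSize 1      # 1 byte: virtually disable the cache
    git config pack.deltaCacheLimit 1000  # the default per-delta cap
    git repack -a -d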
The maximum size of a pack. This setting only affects packing to a file when repacking, i.e. the git:// protocol is unaffected. It can be overridden by the --max-pack-size option of git-repack[1]. Reaching this limit results in the creation of multiple packfiles.
Note that this option is rarely useful, and may result in a larger total on-disk size (because Git will not store deltas between packs), as well as worse runtime performance (object lookup within multiple packs is slower than a single pack, and optimizations like reachability bitmaps cannot cope with multiple packs).
If you need to actively run Git using smaller packfiles (e.g., because your filesystem does not support large files), this option may help. But if your goal is to transmit a packfile over a medium that supports limited sizes (e.g., removable media that cannot store the whole repository), you are likely better off creating a single large packfile and splitting it using a generic multi-volume archive tool (e.g., Unix split).
The minimum size allowed is limited to 1 MiB. The default is unlimited. Common unit suffixes of k, m, or g are supported.
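Both routes mentioned above look roughly like this (a sketch; sizes and file names are made up):

    # Route 1: repack into multiple packfiles capped at 1 GiB each.
    git repack -a -d --max-pack-size=1g
    # Route 2 (usually better): keep one big pack and split it for transport.
    split -b 1g big.pack big.pack.part-
    cat big.pack.part-* > big.pack   # rejoin on the destination side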
If the number of objects received in a push is below this limit then the objects will be unpacked into loose object files. However if the number of received objects equals or exceeds this limit then the received pack will be stored as a pack, after adding any missing delta bases. Storing the pack from a push can make the push operation complete faster, especially on slow filesystems. If not set, the value of transfer.unpackLimit is used instead.
If the size of the incoming pack stream is larger than this limit, then git-receive-pack will error out, instead of accepting the pack file. If not set or set to 0, then the size is unlimited.
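On a server, the receive-side limits above might be set like this (assumed option names receive.unpackLimit and receive.maxInputSize; values illustrative):

    git config receive.unpackLimit 100          # explode small pushes into loose objects
    git config receive.maxInputSize 2147483648  # reject pushes larger than 2 GiB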
The number of files to consider when performing rename detection in git-status[1] and git-commit[1]. Defaults to the value of diff.renameLimit.
When fetch.unpackLimit or receive.unpackLimit are not set, the value of this variable is used instead. The default value is 100.
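For example, to keep essentially every incoming transfer stored as a pack with a single setting (an illustrative sketch):

    git config transfer.unpackLimit 1   # covers both the fetch and receive sides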
Explicitly allow or ban the object filter corresponding to <filter>, where <filter> may be one of: blob:none, blob:limit, object:type, tree, sparse:oid, or combine. If using combined filters, both combine and all of the nested filter kinds must be allowed. Defaults to uploadpackfilter.allow.
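For example, a server operator might ban filters by default and then allow only the cheap blob:none filter; note how the filter name is embedded in the configuration key (a sketch):

    git config uploadpackfilter.allow false           # ban all filters by default
    git config uploadpackfilter.blob:none.allow true  # ...but permit blob:none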
This might be it; I had a feeling it was some kind of file or filename issue. I don't think it produces some kind of report, but perhaps I will try again sometime.
I am not really working with the Linux kernel, at least not yet. I am just curious how many branches there are and what kind of branches they are. So far I only see a master branch and some lines, and that was quite surprising. It makes me wonder: is the Linux kernel too big for branching? Why are there so few branches in the Linux kernel?
Bye,
Skybuck.
On Thu, Dec 2, 2021 at 9:55 PM skybuck2000 <skybu...@hotmail.com> wrote:
> This might be it; I had a feeling it was some kind of file or filename issue. I don't think it produces some kind of report, but perhaps I will try again sometime.

If Git fails to do something, it's typically very good about providing some sort of output to at least try to say why.
> I am not really working with the Linux kernel, at least not yet. I am just curious how many branches there are and what kind of branches they are. So far I only see a master branch and some lines, and that was quite surprising. It makes me wonder: is the Linux kernel too big for branching? Why are there so few branches in the Linux kernel?

I don't know where you got your copy of it from, but the answer almost certainly is not that it's too big for branching (I doubt there's a threshold where that ever becomes true); it's because Linux kernel development is done via a mailing list and everyone who works on it (essentially) has their own personal fork. They're not all working in some shared copy hosted on GitHub (or any other site). Things like https://github.com/torvalds/linux are just mirrors--they're not where real, mainline development happens.
> Bye,
> Skybuck.

P.S. Please stop top-posting. It makes your responses unnecessarily harder to follow.
The paging algorithm in Windows 7 is flawed. It unnecessarily pages to disk.
Another good reason to disable paging is to learn how much RAM a system really needs.
Another good reason to disable paging is to learn how many Firefox tabs or applications can be open without hitting the page file. Another reason is to learn which applications can handle out-of-memory situations; not many, apparently. Another reason, and this is the main one, is speed: it's faster to crash an application because it is out of memory and restart it than to wait forever for the pagefile to catch up. Also, paging on Windows can create big disk queues while waiting for all the memory requests to go through; again, a big failure of the algorithm.
I also tried Ubuntu 20.04.3 today; it works much better and doesn't seem to be swapping a lot. The comparison between an old OS like Windows 7 and a brand-new Ubuntu (one year old, tops) is not fair, but still =D
Did you turn off paging in Ubuntu?
Oh wait, is that an option?
It's called "swapping" in Unix, btw.
> Oh wait, is that an option?
Sure it is. Just don't configure any swap space.
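On Ubuntu that amounts to something like the following (a sketch; disabling swap entirely is rarely a good idea on a general-purpose machine):

    sudo swapoff -a   # stop using swap immediately, until the next reboot
    # To keep it off permanently, remove or comment out the swap entry
    # in /etc/fstab as well.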
I'm old.
> The paging algorithm in Windows 7 is flawed. It unnecessarily pages to disk.
Paging always pages to disk. Win 7 (and NT) may have been a little aggressive. How is it flawed? Please be specific.
> Another good reason to disable paging is to learn how much RAM a system really needs. Another good reason to disable paging is to learn how many Firefox tabs or applications can be open without hitting the page file. Another reason is to learn which applications can handle out-of-memory situations; not many, apparently. Another reason, and this is the main one, is speed: it's faster to crash an application because it is out of memory and restart it than to wait forever for the pagefile to catch up. Also, paging on Windows can create big disk queues while waiting for all the memory requests to go through; again, a big failure of the algorithm.
Modern Windows and *nix variants all use virtual memory. Real memory (and I like lots of it) only makes everything run faster (less paging). Since the OS is designed to use paging, limiting or eliminating paging does not tell you how much RAM a system needs; it just puts up roadblocks. Sure, some RTOS systems need to avoid paging (specialized devices, or space shuttles back in the day?). I'm not sure you understand virtual memory.
> I also tried Ubuntu 20.04.3 today; it works much better and doesn't seem to be swapping a lot. The comparison between an old OS like Windows 7 and a brand-new Ubuntu (one year old, tops) is not fair, but still =D
> Did you turn off paging in Ubuntu?
> Oh wait, is that an option?