GIT failed to checkout the linux kernel on a 6 GB system


skybuck2000

Dec 2, 2021, 12:02:24 AM
to git-for-windows
Hello,

I would like to report that GIT failed to checkout the linux kernel on a 6 GB system.

For now I believe it ran out of memory. I only tried once.

PAGEFILE.SYS on this system was disabled.

Apparently GIT relies on PAGEFILE.SYS to cover any out-of-memory situations.

This kinda sucks.

My recommendation is to disable PAGEFILE.SYS on your system, or try removing some RAM chips.

And then try and checkout the linux kernel yourself to see how GIT fails to checkout.

Displaying a log also seemed to consume a lot of RAM.

I am not exactly sure why GIT needs so much memory. Perhaps it's an unzipping issue or a delta-ing issue, not sure.

But it would be nice if GIT could do its operations in batches/chunks/pieces of memory.

Gradually, so it can handle any database size. Right now it seems to be limited to the amount of system RAM/virtual memory available.

Bye for now,
  Skybuck.


skybuck2000

Dec 2, 2021, 12:32:23 AM
to git-for-windows
It would be nice if GIT would respect the following limits:

new@new-PC MINGW64 /
$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 8
stack size              (kbytes, -s) 2036
cpu time               (seconds, -t) unlimited
max user processes              (-u) 256
virtual memory          (kbytes, -v) unlimited

new@new-PC MINGW64 /
$

Then I could try setting these limits. However, I fear that if I set them now they will limit GIT even more and lead to even earlier crashes... though it's worth a try to see what happens.
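
For anyone who does want to experiment, a rough sketch of what that could look like (untested; whether the MSYS2 runtime in Git for Windows actually enforces these limits is an assumption, not something verified here):

# Cap the shell's virtual memory before running Git, so an out-of-memory
# condition shows up as a clear allocation failure instead of exhausting
# the whole system. The 4 GiB value is only an example.
ulimit -v 4194304        # in kbytes, i.e. roughly 4 GiB
git clone --no-checkout https://github.com/torvalds/linux.git
cd linux
git checkout master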

Right now I have no time to experiment with this, but it could be an idea for the future.

Setting limits for GIT itself does not seem to be well documented, or at least Google doesn't take me there. I found the following via Google:

Not sure if this will help or what it does exactly:

git config --global pack.windowMemory "100m"
git config --global pack.packSizeLimit "100m"
git config --global pack.threads "1"

How this is related to checkout I don't know. I could try it out, but I kinda hate trying things out and wasting my time if it doesn't work. Things like this should be tested by the git developers.

Here is the official documentation; there are many commands and settings to go through, and I'm low on time right now:


Searching for limit gives:

core.packedGitLimit

Maximum number of bytes to map simultaneously into memory from pack files. If Git needs to access more than this many bytes at once to complete an operation it will unmap existing regions to reclaim virtual address space within the process.

Default is 256 MiB on 32 bit platforms and 32 TiB (effectively unlimited) on 64 bit platforms. This should be reasonable for all users/operating systems, except on the largest projects. You probably do not need to adjust this value.

Common unit suffixes of k, m, or g are supported.


core.deltaBaseCacheLimit

Maximum number of bytes per thread to reserve for caching base objects that may be referenced by multiple deltified objects. By storing the entire decompressed base objects in a cache Git is able to avoid unpacking and decompressing frequently used base objects multiple times.

Default is 96 MiB on all platforms. This should be reasonable for all users/operating systems, except on the largest projects. You probably do not need to adjust this value.

Common unit suffixes of k, m, or g are supported.



core.bigFileThreshold

Files larger than this size are stored deflated, without attempting delta compression. Storing large files without delta compression avoids excessive memory usage, at the slight expense of increased disk usage. Additionally files larger than this size are always treated as binary.

Default is 512 MiB on all platforms. This should be reasonable for most projects as source code and other text files can still be delta compressed, but larger binary media files won’t be.

Common unit suffixes of k, m, or g are supported.
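
If someone wanted to experiment with these three settings on a low-memory machine, they are ordinary git-config values; a rough, untested sketch (the numbers are guesses, not recommendations):

# Lower Git's in-memory limits on a RAM-starved box. Values are illustrative only.
git config --global core.packedGitLimit 256m
git config --global core.deltaBaseCacheLimit 32m
git config --global core.bigFileThreshold 128m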


THIS COULD BE INTERESTING:

core.sparseCheckout

Enable "sparse checkout" feature. See git-sparse-checkout[1] for more information.

core.sparseCheckoutCone

Enables the "cone mode" of the sparse checkout feature. When the sparse-checkout file contains a limited set of patterns, then this mode provides significant performance advantages. See git-sparse-checkout[1] for more information.



diff.statGraphWidth

Limit the width of the graph part in --stat output. If set, applies to all commands generating --stat output except format-patch.



diff.renameLimit

The number of files to consider in the exhaustive portion of copy/rename detection; equivalent to the git diff option -l. If not set, the default value is currently 1000. This setting has no effect if rename detection is turned off.



fastimport.unpackLimit

If the number of objects imported by git-fast-import[1] is below this limit, then the objects will be unpacked into loose object files. However if the number of imported objects equals or exceeds this limit then the pack will be stored as a pack. Storing the pack from a fast-import can make the import operation complete faster, especially on slow filesystems. If not set, the value of transfer.unpackLimit is used instead.



gc.auto

When there are approximately more than this many loose objects in the repository, git gc --auto will pack them. Some Porcelain commands use this command to perform a light-weight garbage collection from time to time. The default value is 6700.

Setting this to 0 disables not only automatic packing based on the number of loose objects, but any other heuristic git gc --auto will otherwise use to determine if there’s work to do, such as gc.autoPackLimit.



gc.autoPackLimit

When there are more than this many packs that are not marked with *.keep file in the repository, git gc --auto consolidates them into one larger pack. The default value is 50. Setting this to 0 disables it. Setting gc.auto to 0 will also disable this.

See the gc.bigPackThreshold configuration variable below. When in use, it’ll affect how the auto pack limit works.



gc.bigPackThreshold

If non-zero, all packs larger than this limit are kept when git gc is run. This is very similar to --keep-largest-pack except that all packs that meet the threshold are kept, not just the largest pack. Defaults to zero. Common unit suffixes of k, m, or g are supported.

Note that if the number of kept packs is more than gc.autoPackLimit, this configuration variable is ignored, all packs except the base pack will be repacked. After this the number of packs should go below gc.autoPackLimit and gc.bigPackThreshold should be respected again.

If the amount of memory estimated for git repack to run smoothly is not available and gc.bigPackThreshold is not set, the largest pack will also be excluded (this is the equivalent of running git gc with --keep-largest-pack).
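
On a memory-constrained machine these gc knobs can also be set explicitly; a small untested sketch (values are illustrative only):

# Disable automatic repacking entirely, and keep any existing packs larger
# than 500 MB untouched when gc is run by hand.
git config --global gc.auto 0
git config --global gc.bigPackThreshold 500m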



http.postBuffer

Maximum size in bytes of the buffer used by smart HTTP transports when POSTing data to the remote system. For requests larger than this buffer size, HTTP/1.1 and Transfer-Encoding: chunked is used to avoid creating a massive pack file locally. Default is 1 MiB, which is sufficient for most requests.

Note that raising this limit is only effective for disabling chunked transfer encoding and therefore should be used only where the remote server or a proxy only supports HTTP/1.0 or is noncompliant with the HTTP standard. Raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes.



http.lowSpeedLimit, http.lowSpeedTime

If the HTTP transfer speed is less than http.lowSpeedLimit for longer than http.lowSpeedTime seconds, the transfer is aborted. Can be overridden by the GIT_HTTP_LOW_SPEED_LIMIT and GIT_HTTP_LOW_SPEED_TIME environment variables.



merge.renameLimit

The number of files to consider in the exhaustive portion of rename detection during a merge. If not specified, defaults to the value of diff.renameLimit. If neither merge.renameLimit nor diff.renameLimit are specified, currently defaults to 7000. This setting has no effect if rename detection is turned off.



pack.windowMemory

The maximum size of memory that is consumed by each thread in git-pack-objects[1] for pack window memory when no limit is given on the command line. The value can be suffixed with "k", "m", or "g". When left unconfigured (or set explicitly to 0), there will be no limit.



pack.deltaCacheSize

The maximum memory in bytes used for caching deltas in git-pack-objects[1] before writing them out to a pack. This cache is used to speed up the writing object phase by not having to recompute the final delta result once the best match for all objects is found. Repacking large repositories on machines which are tight with memory might be badly impacted by this though, especially if this cache pushes the system into swapping. A value of 0 means no limit. The smallest size of 1 byte may be used to virtually disable this cache. Defaults to 256 MiB.



pack.deltaCacheLimit

The maximum size of a delta, that is cached in git-pack-objects[1]. This cache is used to speed up the writing object phase by not having to recompute the final delta result once the best match for all objects is found. Defaults to 1000. Maximum value is 65535.



pack.packSizeLimit

The maximum size of a pack. This setting only affects packing to a file when repacking, i.e. the git:// protocol is unaffected. It can be overridden by the --max-pack-size option of git-repack[1]. Reaching this limit results in the creation of multiple packfiles.

Note that this option is rarely useful, and may result in a larger total on-disk size (because Git will not store deltas between packs), as well as worse runtime performance (object lookup within multiple packs is slower than a single pack, and optimizations like reachability bitmaps cannot cope with multiple packs).

If you need to actively run Git using smaller packfiles (e.g., because your filesystem does not support large files), this option may help. But if your goal is to transmit a packfile over a medium that supports limited sizes (e.g., removable media that cannot store the whole repository), you are likely better off creating a single large packfile and splitting it using a generic multi-volume archive tool (e.g., Unix split).

The minimum size allowed is limited to 1 MiB. The default is unlimited. Common unit suffixes of k, m, or g are supported.
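
The "split" approach the documentation mentions would look roughly like this (the pack file name is a placeholder, not an actual file from this repository):

# Split one large packfile into 1 GiB pieces for size-limited media, then
# reassemble it on the other side. pack-<hash>.pack is a placeholder name.
split -b 1G .git/objects/pack/pack-<hash>.pack linux-pack.part.
cat linux-pack.part.* > pack-<hash>.pack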



receive.unpackLimit

If the number of objects received in a push is below this limit then the objects will be unpacked into loose object files. However if the number of received objects equals or exceeds this limit then the received pack will be stored as a pack, after adding any missing delta bases. Storing the pack from a push can make the push operation complete faster, especially on slow filesystems. If not set, the value of transfer.unpackLimit is used instead.



receive.maxInputSize

If the size of the incoming pack stream is larger than this limit, then git-receive-pack will error out, instead of accepting the pack file. If not set or set to 0, then the size is unlimited.



status.renameLimit

The number of files to consider when performing rename detection in git-status[1] and git-commit[1]. Defaults to the value of diff.renameLimit.


transfer.unpackLimit

When fetch.unpackLimit or receive.unpackLimit are not set, the value of this variable is used instead. The default value is 100.


uploadpackfilter.<filter>.allow

Explicitly allow or ban the object filter corresponding to <filter>, where <filter> may be one of: blob:none, blob:limit, object:type, tree, sparse:oid, or combine. If using combined filters, both combine and all of the nested filter kinds must be allowed. Defaults to uploadpackfilter.allow.
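
On the client side these filters correspond to partial clone; a hedged sketch of a blobless clone (assuming the hosting server allows the blob:none filter, which the big hosting sites generally do):

# Fetch commits and trees but no file contents up front; blobs are then
# fetched lazily when they are actually needed (e.g. at checkout). Whether
# this would have avoided the original failure is an assumption, not a result.
git clone --filter=blob:none https://github.com/torvalds/linux.git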



There are apparently many limits that GIT could run into. It would be very useful if GIT could display which limit was hit and caused the failure/error. Of course, it would be better if GIT tried to function within these limits so it does not error at all.

Bye for now,
  Skybuck.

Bryan Turner

Dec 2, 2021, 1:04:58 AM
to skybuck2000, git-for-windows

On Thursday, December 2, 2021 at 6:02:24 AM UTC+1 skybuck2000 wrote:
>>
>> Hello,
>>
>> I would like to report that GIT failed to checkout the linux kernel on a 6 GB system.
>>
>> For now I believe it ran out of memory. I only tried once.

Reporting errors without showing any error output makes it extremely difficult for anyone on the list to offer you any help. Since you're working with the Linux kernel there couldn't really be anything secret/special about your console output, so sharing that is a good place to start. That way the list can see the actual error output and help you.

One thing I know from experience is that the Linux kernel includes files whose names cannot be checked out on Windows; they are reserved/forbidden. "aux.c" is an example of such a file. So it's entirely possible this had nothing to do with memory and, rather, was related to trying to write files that cannot be written on Windows.

Best regards,
Bryan Turner

Chris. Webster

Dec 2, 2021, 1:27:41 AM
to git-for-windows
Why is PAGEFILE.SYS disabled?  Pretty much all operating systems for the last 40+ years rely on paging.  It is not something applications should be caring about.  Removing RAM while allowing paging?  Sorry, but not sure how you think that should cause a problem.  Paging accommodates the amount of physical RAM in a system.  It is handled by the OS and not the application.

...chris.

Philip Oakley

Dec 2, 2021, 6:25:54 AM
to git-for-windows
> new@new-PC MINGW64 /

Is this a Windows PC running Git-for-Windows?

I.e., there are many parts of Git on Windows that have a 4 GB limit because of the different sizes of 'long' on Windows and Linux, which use the LLP64 and LP64 type models respectively.

skybuck2000

Dec 3, 2021, 12:53:49 AM
to git-for-windows
The paging algorithm in windows 7 is flawed. It unnecessarily pages to disk.

Another good reason to disable paging is to learn how much RAM a system really needs.

Another good reason to disable paging is to learn how many firefox tabs or applications can be open without hitting the page file.

Another reason is to learn which applications can handle out-of-memory situations; not many, apparently.

Another reason, and this is the main one, is speed.

It's faster to crash an application because it's out of memory and restart it than to wait forever for the pagefile to catch up.

Also, paging on windows can create big disk queues while waiting for all the memory requests to go through; again, a big fail of an algorithm.

I also tried Ubuntu today, 20.04.3; it works much better and doesn't seem to be swapping a lot. The comparison is not fair between windows 7, an old OS, and a brand new Ubuntu (1 year old tops), but still =D

Bye,
  Skybuck.

skybuck2000

Dec 3, 2021, 12:55:27 AM
to git-for-windows
This might be it, had a feeling it was some kind of file or filename issue.

I don't think it produces some kind of report, but perhaps I will try again sometime.

I am not really working with the linux kernel, at least not yet.

Just curious how many branches and what kind of branches there are.

So far I would only see a master branch and some lines, and that was quite surprising.

Makes me wonder if the linux kernel is too big for branching ?

Why are there so few branches in linux kernel ???

Bye,
  Skybuck.

skybuck2000

Dec 3, 2021, 12:59:38 AM
to git-for-windows
I filtered on 4GB, otherwise it was unclear to me what you were hinting at.

I did see an issue where the checkout fails if 4 GB files were previously added to the git repository, I presume.

However, the linux kernel seems to be 2 GB compressed, so I doubt there are any 4 GB files in the linux kernel?

So I don't think this is the issue...

Bye,
  Skybuck.

Bryan Turner

Dec 3, 2021, 1:01:31 AM
to skybuck2000, git-for-windows
On Thu, Dec 2, 2021 at 9:55 PM skybuck2000 <skybu...@hotmail.com> wrote:
This might be it, had a feeling it was some kind of file or filename issue.

I don't think it produces some kind of report, but perhaps I will try again sometime.

If Git fails to do something, it's typically very good about providing some sort of output to at least try to say why.


I am not really working with the linux kernel, at least not yet.

Just curious how many branches and what kind of branches there are.

So far I would only see a master branch and some lines and that is, that was quite surprising.

Makes me wonder if the linux kernel is too big for branching ?

Why are there so few branches in linux kernel ???

I don't know where you got your copy of it from, but the answer almost certainly is not that it's too big for branching (I doubt there's a threshold where that ever becomes true); it's because Linux kernel development is done via a mailing list and everyone who works on it (essentially) has their own personal fork. They're not all working in some shared copy hosted on Github (or any other site). Things like https://github.com/torvalds/linux are just mirrors--they're not where real, mainline development happens.


Bye,
  Skybuck.

P.S. Please stop top-posting. It makes your responses unnecessarily harder to follow.

skybuck2000

Dec 3, 2021, 1:11:25 AM
to git-for-windows
On Friday, December 3, 2021 at 7:01:31 AM UTC+1 btu...@atlassian.com wrote:
On Thu, Dec 2, 2021 at 9:55 PM skybuck2000 <skybu...@hotmail.com> wrote:
This might be it, had a feeling it was some kind of file or filename issue.

I don't think it produces some kind of report, but perhaps I will try again sometime.

If Git fails to do something, it's typically very good about providing some sort of output to at least try to say why.

If I had seen any decent error message I would have probably recorded it... all I can remember is maybe something with some kind of ref failing.

Seemed like such a minor error message that it wasn't worth recording, but tomorrow I will try again just for the fun of it to see if git actually does give a good error message when it presumably runs out of memory! LOL.

 


I am not really working with the linux kernel, at least not yet.

Just curious how many branches and what kind of branches there are.

So far I would only see a master branch and some lines and that is, that was quite surprising.

Makes me wonder if the linux kernel is too big for branching ?

Why are there so few branches in linux kernel ???

I don't know where you got your copy of it from, but the answer almost certainly is not that it's too big for branching (I doubt there's a threshold where that ever becomes true); it's because Linux kernel development is done via a mailing list and everyone who works on it (essentially) has their own personal fork. They're not all working in some shared copy hosted on Github (or any other site). Things like https://github.com/torvalds/linux are just mirrors--they're not where real, mainline development happens.

Yes, I was warned about that, on Stack Overflow I think it was.

Here is his "real" git supposedly:


Is it possible to check out only a part of the kernel so that at least I can get some sense of what is in it?

Failing that, I could switch to Ubuntu 20.04.3, which I installed today in a VM. I don't want to re-enable pagefile.sys in windows 7; that is gone forever as far as I am concerned. It taxes the host system too much, slow disk and all. I pay the price sometimes but find that amusing... at least the system is nice and fast, a totally different experience than with pagefile.sys on...



Bye,
  Skybuck.

P.S. Please stop top-posting. It makes your responses unnecessarily harder to follow.

This is a Google Groups issue; it shows ". . ." and I have to expand it first to be able to bottom-post... kinda weird.

Bye,
  Skybuck.
 

Chris. Webster

Dec 3, 2021, 1:40:52 AM
to git-for-windows
On Thursday, December 2, 2021 at 9:53:49 PM UTC-8 skybu...@hotmail.com wrote:
The paging algorithm in windows 7 is flawed. It unnecessarily pages to disk.
Paging always pages to disk.  Win 7 (and NT) may have been a little aggressive.  How is it flawed? Please be specific.

Another good reason to disable paging is to learn how much RAM a system really needs.
Another good reason to disable paging is to learn how many firefox tabs or applications can be open without hitting the page file.
Another reason is to learn which applications can handle out-of-memory situations; not many, apparently.
Another reason, and this is the main one, is speed.
It's faster to crash an application because it's out of memory and restart it than to wait forever for the pagefile to catch up.
Also, paging on windows can create big disk queues while waiting for all the memory requests to go through; again, a big fail of an algorithm.
Modern Windows and *nix variants all use virtual memory.  Real memory (and I like lots of it) only makes everything run faster (less paging).  Since the OS is designed to use paging, limiting/eliminating paging does not tell you how much RAM a system needs.  It just puts up roadblocks.  Sure, some RTOS systems need to avoid paging (specialized devices or space shuttles back in the day?).  Not sure you understand virtual memory.
 
I also tried ubuntu today 20.04.3, it works much better and doesn't seem to be swapping a lot, comparison is not fair between windows 7 old os and ubunty brand new... (1 year old tops) but still =D
 Did you turn off paging in Ubuntu?  Oh wait, is that an option?

Dirk Heinrichs

Dec 3, 2021, 1:45:13 AM
to git-for...@googlegroups.com
Am Donnerstag, dem 02.12.2021 um 22:40 -0800 schrieb Chris. Webster:

Did you turn off paging in Ubuntu?

It's called "swapping" in Unix, btw. ;-)

  Oh wait, is that an option?

Sure it is. Just don't configure any swap space.

HTH...

Dirk
-- 
Dirk Heinrichs
Senior Systems Engineer, Delivery Pipeline
OpenText ™ Discovery | Recommind
Recommind GmbH, Von-Liebig-Straße 1, 53359 Rheinbach
Vertretungsberechtigte Geschäftsführer Gordon Davies, Madhu Ranganathan, Christian Waida, Registergericht Amtsgericht Bonn, Registernummer HRB 10646

Chris. Webster

Dec 3, 2021, 2:19:41 AM
to git-for-windows
It's called "swapping" in Unix, btw. ;-)
I'm old.  Swapping was at the process level on those mainframes that refuse to go away. Paging is inside the process.  Appreciate the clarification.

  Oh wait, is that an option?

Sure it is. Just don't configure any swap space.
I was kidding because 'someone' may not understand this while commenting about old versions of windows. Thanks for the input.

Dirk Heinrichs

Dec 3, 2021, 5:23:57 AM
to git-for...@googlegroups.com
Am Donnerstag, dem 02.12.2021 um 23:19 -0800 schrieb Chris. Webster:

I'm old.

I'm too. :-D

Bye...

skybuck2000

Dec 3, 2021, 11:40:19 AM
to git-for-windows
The paging algorithm in windows 7 is flawed. It unnecessarily pages to disk.
 
Paging always pages to disk.  Win 7 (and NT) may have been a little aggressive.  How is it flawed? Please be specific.

Observe it yourself via task manager and resource monitor. Watch the disk activity in resource monitor.

Basically windows 7 will swap anything to disk, including little programs like notepad, paint, editors, anything.

It probably uses just a few lines of code for efficiency reasons; it simply looks at which page was last accessed and then starts paging it to disk.

Even if enough memory is already free, say 2 gigabytes is free, the paging algorithm simply continues swapping more and more stuff to the disk.

Causing unnecessary delays when going back to notepad, calculator, tabs in firefox, it brings the entire system to a screeching halt especially on 5200 RPM laptop disks.
 

Another good reason to disable paging is to learn how much RAM a system really needs.
Another good reason to disable paging is to learn how many firefox tabs or applications can be open without hitting the page file.
Another reason is to learn which applications can handle out-of-memory situations; not many, apparently.
Another reason, and this is the main one, is speed.
It's faster to crash an application because it's out of memory and restart it than to wait forever for the pagefile to catch up.
Also, paging on windows can create big disk queues while waiting for all the memory requests to go through; again, a big fail of an algorithm.
 
Modern Windows and *nix variants all use virtual memory. 

I plan to buy an 18 terabyte disk soon; with 10 MB/s USB 2.0 ports the swapping algorithm is too slow, and even with 100 MB/s USB 3.0 ports/hard disks the swapping is annoyingly slow.

Conclusion: swapping is for fools and old grannies, people with 4 or 8 MB MS-DOS systems.

It is an old concept that belongs in 2021 in the waste basket, into the recycle bin.

Or improve on it, and only swap when absolutely necessary; however, even that is annoying.

THERE IS NO MORE SOLUTION TO SUSTAIN THE SWAPPING ALGORITHM.

It is too slow.

Get used to RAM-limited systems; start modifying your software so it can handle out-of-memory situations.

OR WAIT FOREVER.

Solid State Disks are not a solution. They will fail in 5 to 10 years. DO NOT COME TO ME AND WHINE ABOUT YOUR LOST BITCOINS.
 
Firefox disk activity makes it 10 to 100 times WORSE.

This is not only about swapping from the OS... many software applications are implementing swapping algorithms because the developers now have SOLID STATE DISKS.

This will lead to a CATASTROPHE. I predicted it, and you will be the victim if you don't listen.

Real memory (and I like lots of it) only make everything run faster (less paging).

Only FASTER? Dude, come over here, you can't even use notepad with paging, and don't blame it on me.

RAM IS YOUR WORK MEMORY.

Hard disk 10 I/O per second or 100 I/O per second is your limitation.

USE IT WISELY. Not many do nowadays.
 
  Since the OS is designed to use paging,

Designed AGES ago. God knows what assumptions were made back then.

Hard disks have not become faster in I/O per second. Still the same as 30 years ago.

IF THAT IS NOT TRUE THEN YOU ARE ACCESSING THE HARDDISK'S RAM CACHE UNTIL IT RUNS OUT/FULL. BYE BYE PERFORMANCE.

How much RAM does your harddisk have, 30 MB, 100 MB? In these times of terabytes and gigabytes of applications it's laughable.

ONE WEB PAGE IN FIREFOX CONSUMES 2 GIGABYTES, OR EVEN MORE IN GITHUB.
 
limiting/eliminating paging does not tell you how much RAM a system needs.

OF COURSE IT DOES.

TASK MANAGER, FREE MORE, AVAILABLE MEMORY

IT WILL DISPLAY EXACTLY HOW MUCH IS BEING USED AND HOW MUCH IS FREE, EXCEPT FOR A MYSTERIOUSLY MISSING 100 MEGABYTES.

IT IS TIME FOR YOU TO BUY A MEMORY-LIMITED, SLOW-HARDDISK LAPTOP AND EXPERIENCE IT YOURSELF.

IT WILL PREPARE YOU FOR THE FUTURE. STRANGE AS IT MAY SOUND IT IS THE TRUTH, IN THE FUTURE THESE EXACT SAME LIMITATIONS WILL APPLY, JUST SCALED UPWARDS, 4k screens, big ad videos, the whole works.
 
  It just puts up roadblocks. 

Paging does.

Queueing issues etc..
 
Sure, some RTOS systems need to avoid paging (specialized devices or space shuttles back in the day?).  Not sure you understand virtual memory.

YOU ARE THE ONE THAT CLEARLY DOESN'T UNDERSTAND. I AM THE ONE EXPLAINING IT TO YOU BECAUSE YOU ASKED, AND YOU CLEARLY DEMONSTRATE A LACK OF UNDERSTANDING.

I READ EXACTLY HOW THE PAGE LISTS AND THE MEMORY WORK IN WINDOWS. I HAVE USED SYSINTERNALS TOOLS TO RESET/CLEAR MEMORY TWO OR THREE YEARS AGO.

I HAVE BEEN OBSERVING THE PAGING ALGORITHM IN THIS WINDOWS 7 LAPTOP FOR THE LAST 2 YEARS, ALMOST EVERY DAY.

THAT IS WHY I AM 10000000000000000000 PERCENT CONVINCED THE PAGING ALGORITHM IS GARBAGE.

I also tried Ubuntu today, 20.04.3; it works much better and doesn't seem to be swapping a lot. The comparison is not fair between windows 7, an old OS, and a brand new Ubuntu (1 year old tops), but still =D
 
 Did you turn off paging in Ubuntu?  Oh wait, is that an option?

Deleting the swap partition is possible.

My usage of Ubuntu was only 1 day; that is too little to test it properly, too few trash files from firefox, nice defragmented file system.

Perhaps Ubuntu has a better swapping algorithm.

Bye,
  Skybuck.

Bryan Turner

Dec 3, 2021, 2:54:19 PM
to skybuck2000, git-for-windows
Ease up. Let's try and clarify a couple things:
- This is a mailing list about Git for Windows. It's not a mailing list about Windows, or how paging does or doesn't work
- "Shouting" at people on the list is not a) acceptable behavior or b) a good way to make a point. If you can't offer a compelling argument in a reasonable tone, perhaps just bow out of the argument

If you'd like to have a deep dive into how paging works on an end-of-life version of Windows, I'm certain there are corners of the Internet to do it in. But not this corner.

Please keep your discussion civil and on-topic.

Best regards,
Bryan Turner

skybuck2000

Dec 4, 2021, 10:56:42 PM
to git-for-windows
Yesterday I was a bit angry because of the way banks do business nowadays haha, all kinds of stupid transaction limits.
Also angry how stores do not accept 200 or 500 euro bills.

My points are valid though.

And it is not certain that the swapping algorithm changed in windows 8, 10, or 11, so it might still be the same! ;)

Bye,
  Skybuck.

Chris. Webster

Dec 5, 2021, 7:47:26 PM
to git-for-windows
For someone looking at this in the future:

When cloning the kernel on Win10, the git process uses just under 2.5 GB of memory.  Windows kept about 1 GB of memory free, so every time it fell below that there was a spike in paging (presumably paging out other running applications).  Git memory use just kept increasing during the clone.

If the original attempt to clone had been successful, the conversation might have been about this instead:
error: invalid path 'drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.c'
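
One possible workaround for that specific failure (untested here) is to skip the initial checkout and exclude the offending directory with a sparse checkout before populating the working tree:

# Clone without checkout, then exclude the directory containing aux.c
# (a reserved name on Windows) before checking out. The pattern syntax
# below is non-cone sparse-checkout; its effectiveness is an assumption.
git clone --no-checkout https://github.com/torvalds/linux.git
cd linux
git config core.sparseCheckout true
printf '/*\n!/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/\n' > .git/info/sparse-checkout
git checkout master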

...chris.