Out of memory, malloc failed


ohlhaver

Nov 20, 2008, 1:02:43 PM
to Capistrano
Hello,

After having done hundreds of successful deployments of updates to my
application, I have suddenly run into the following error message:

'fatal: Out of memory, malloc failed
fatal: unpack-objects failed'

This causes the deployment to fail. I am using git, github and
capistrano.

I am pretty sure I know what caused this: I had earlier failed to
'gitignore' the development.log file, which had grown to over 1 GB.
Pushing to the repo was already very slow, and cap deploy now fails
with the above error message.

Now, I have already deleted the big file; it is nowhere to be seen in
the repository anymore. I have also reverted to a much earlier commit.
But strangely, the 'ghost' of that big file still seems to be around
somewhere in the .git folder. The problem persists and deployment
fails. This never happened before.

My questions:

1. How could I delete this file, which is most likely the cause of
this problem, from the repo, given that it is 'invisible'?

2. Or how can I ensure that the git repo reflects 100% of my current
application on the development machine, where I have DEFINITELY
deleted this file?

3. Is there a way to completely reset the git repo? (I'm not talking
about the 'git reset' command.) I am looking for a way to keep my app
on the dev machine but start with a completely fresh, empty
repository.

4. Any other ideas on how this could be solved, maybe through
capistrano (deploy.rb) or settings on the remote server?

Thanks a lot in advance for any help! It would be greatly appreciated.
Justus

Mathias Meyer

Nov 20, 2008, 1:44:17 PM
to capis...@googlegroups.com
On Thu, 20 Nov 2008 10:02:43 -0800 (PST), ohlhaver wrote:

> 2. Or how can I ensure that the git repo reflects 100% of my current
> application on the development machine, where I have DEFINITELY
> deleted this file?
>

Are you using the :remote_cache option? If not, you could try setting
git_shallow_clone:

set :git_shallow_clone, "1"

This will only fetch the last few commits (see git clone --depth for
details). Maybe that will reduce your pain.

Another option would be to just use your local repository as the base
for a deployment, and use deploy_via with :copy to transfer it.

Cheers, Mathias
--
http://paperplanes.de
http://twitter.com/roidrage

justus ohlhaver

Nov 20, 2008, 2:48:44 PM
to capis...@googlegroups.com
Thanks, Mathias.
I tried what you suggested, but unfortunately it didn't help. The data that's slowing everything down must still be somewhere in the git repository (in the local one as well). Just deleting it before committing doesn't seem to get rid of it. It doesn't show up anywhere after deleting it, but, as mentioned, it still causes the deployment to fail for lack of memory.

Is there any way to just delete the entire repo without deleting the application?
J

David Masover

Nov 20, 2008, 2:57:01 PM
to capis...@googlegroups.com
Git always maintains your entire version history -- so if you merely reverted, that won't help. You'll have to actually reset, so that the commit which added the file shows up nowhere in your history.

Once you've done that, you'll also want to run 'git gc' on any repositories where it might have been.
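A concrete sketch of that, assuming the culprit is log/development.log (the Rails default location; the original post only says "development.log", so adjust the path), and assuming you've backed the repository up first. Rather than rewinding by hand, git filter-branch can rewrite every commit to drop the file:

```shell
# Strip the oversized file from every commit on every branch.
# The path log/development.log is an assumption; adjust as needed.
git filter-branch --index-filter \
  'git rm --cached --ignore-unmatch log/development.log' -- --all

# filter-branch keeps backup refs, and the reflog still pins the old
# objects; drop both so gc can actually free the space.
rm -rf .git/refs/original
git reflog expire --expire=now --all
git gc --prune=now
```

After this, a fresh local clone should be small again. The central repo still holds the old objects, so it has to be recreated (or force-pushed over) as well.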

justus ohlhaver

Nov 20, 2008, 3:21:53 PM
to capis...@googlegroups.com
Hello David,
Thanks. Just quickly: by 'reset', are you referring to 'git reset --hard commit-id'? I tried that, but unfortunately it didn't solve the problem.
I'll try 'git gc' now.

Is there a way to completely delete/reset the entire git repo while keeping the application, so that I can initialize a completely new one from scratch?

Anyway, thanks very much for the help!
Justus

Jamis Buck

Nov 20, 2008, 3:48:13 PM
to capis...@googlegroups.com
You could do this:

$ rm -rf .git
$ git init
$ git add .
$ git commit

That'll make your local working copy a new git repository with no
history, just the files that are in the project at that moment. If
you've got a central repo somewhere, you'll need to blow it away and
recreate it before adding it as a remote and pushing your local
repository to it.

This, though, is a drastic measure! I strongly advise you to back up
your existing repo before trying this, and even before THAT you should
exhaust every other avenue first.

- Jamis
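For reference, the same procedure sketched end to end with the backup step included. The origin URL is a placeholder, not a real remote, and 'master' was git's default branch name at the time:

```shell
# Keep the old history somewhere safe before doing anything drastic.
cp -a .git ../git-backup

# Start over with a single-commit history.
rm -rf .git
git init
git add .
git commit -m "Fresh start"

# Recreate the central repo, then point at it and push.
# The URL below is a placeholder for your actual remote.
git remote add origin git@github.com:you/yourapp.git
git push origin master
```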


David Masover

Nov 20, 2008, 4:14:57 PM
to capis...@googlegroups.com
Yes, that's roughly it -- make sure you're rewound to before the commit causing the issue. Also, make sure you don't have any branches or tags lingering somewhere which include the bad commit.

If you want to keep anything since then, you could cherry-pick those newer commits back in -- again, avoiding the bad commit.

And yes, you do need to run 'git gc' after doing this.

You'll also need to blow away any git repositories you have on the server, or perform the same procedure there.

justus ohlhaver

Nov 20, 2008, 4:52:59 PM
to capis...@googlegroups.com
So, I did reset to an earlier commit from before the problem was caused. Then I ran git gc, but unfortunately got this error:

git gc
Counting objects: 3274, done.
git(433) malloc: *** mmap(size=285454336) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
warning: suboptimal pack - out of memory
git(433) malloc: *** mmap(size=687673344) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
git(433) malloc: *** mmap(size=954793984) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
git(433) malloc: *** mmap(size=530702336) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
git(433) malloc: *** mmap(size=530702336) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug

fatal: Out of memory, malloc failed
error: failed to run repack

---
Which, again, looks like it is caused by the very problem (too much data) that I am trying to solve. I am 100% sure that I reset to a commit from way before that problem ever occurred.

I guess I will have to go with the complete reset of the repo.

One question regarding this:
By 'blowing away' a repo do you mean just deleting it? Or does that term refer to some specific way of deleting it?

Again, thank you very, very much for all the help! This is a fantastic group.
Justus

Jamis Buck

Nov 20, 2008, 5:18:31 PM
to capis...@googlegroups.com
Yeah, "blow away" == "delete". One last question before you go all
drastic. :) What version of git are you using?

- Jamis

justus ohlhaver

Nov 20, 2008, 5:27:57 PM
to capis...@googlegroups.com
Hi Jamis, I'm using version 1.5.6.4.

Jamis Buck

Nov 20, 2008, 6:01:36 PM
to capis...@googlegroups.com
You might try upgrading to the latest (1.6.0.4, http://git.or.cz/),
just to see if that helps at all.

- Jamis

David Masover

Nov 20, 2008, 6:33:42 PM
to capis...@googlegroups.com
Try cloning your repository locally -- again, with everything reset to before the problem. If that doesn't help, try cloning with --no-hardlinks. I have no idea whether either will help...

Depending on how determined you are to get this back, you could try allocating more swap space. If you're on Linux, swapfiles are relatively easy to create and enable, and you'll only need the extra space until the gc finishes.
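A sketch of the swapfile route on a Linux box; needs root, and the 2 GB size is just illustrative:

```shell
# Create and enable a 2 GB swapfile, just for the duration of the repack.
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# ... run the failing 'git gc' here ...

# Tear it down again once gc has finished.
swapoff /swapfile
rm /swapfile
```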

If you're not on a 64-bit machine, and it's actually trying to use more than 2 gigs of RAM, you have a problem... Again, depending on how much you care about your version history, you could try running an Amazon EC2 extra-large instance (15 gigs of RAM on a 64-bit system, with over a terabyte and a half of storage if you need swap space). It's $0.80 an hour, so you probably don't want to keep it running, but if a couple of hours fixes your problem, that's cheap as data recovery solutions go.