I tried the patch from Shougo's thread and it solved my problem. Thanks!
I noticed another (minor) issue. Because I had problems with close_cb (I was getting out-callbacks after it was called), I start a 200ms timer so I can tell when it's safe to clean up. I'm not sure whether it's still needed; however, I see that when running in the terminal the timer callback fires almost immediately, as expected, but in the GUI it takes more than 2 seconds from when the timer starts until I get the callback. Any idea why?
Also, did you have a chance to check the two quickfix issues from the first message?
func! Close_cb(channel)
  " Record the time at which the channel was closed.
  let g:rt = reltime()
  call timer_start(200, 'Timer_cb')
endfunc

func! Timer_cb(timer)
  " Show how long it actually took for the timer to fire.
  echo reltime(g:rt)
endfunc

call job_start('ls', {'close_cb': 'Close_cb'})
>
> > Also, did you have a chance to check the two quickfix issues from the
> > first message?
>
> Which ones are that? This thread has gotten a bit long. Is this about
> parsing errors line by line? I was wondering if ":caddexpr" comes
> close. Perhaps we should have a function for this.
>
In my plugin I do the message parsing manually, then use setqflist() to add the results to the list. The problems I mentioned are quoted in Yegappan's response.
Yegappan, you were right - saving qf_last makes a big difference. I probably tested your patch after performing many quickfix operations (see below).
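Roughly, that flow looks like this (a simplified sketch; the pattern and the function name are placeholders, not the plugin's actual code):

  func! Parse_and_add(line)
    " Extract file name, line number and text from a grep-style
    " 'file:lnum:text' line and append one entry to the quickfix list.
    let m = matchlist(a:line, '^\(.\{-}\):\(\d\+\):\(.*\)$')
    if !empty(m)
      call setqflist([{'filename': m[1], 'lnum': str2nr(m[2]), 'text': m[3]}], 'a')
    endif
  endfunc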
Let me summarize the problems found so far using the agrep plugin:
1. SEGV crashes - fixed
2. Vim hangs while the quickfix window is open - fixed.
3. GUI hangs - fixed
4. Timer issue in the GUI - fixed
5. quickfix is slow in general - it's much better with Yegappan's patch or 1881, but Vim is still not as responsive as it is when running the same search without quickfix (with the Agrep window). Following are the top lines from the profiler log, in case you see something that can be optimized:
% cumulative self self total
time seconds seconds calls ms/call ms/call name
11.34 0.11 0.11 11084 0.01 0.01 buf_valid
10.31 0.21 0.10 11378 0.01 0.01 do_cmdline
5.15 0.26 0.05 9912382 0.00 0.00 otherfile_buf
5.15 0.31 0.05 452997 0.00 0.00 get_func_tv
5.15 0.36 0.05 11083 0.00 0.02 buflist_new
5.15 0.41 0.05 11082 0.00 0.01 buflist_findname_stat
5.15 0.46 0.05 2131 0.02 0.02 buflist_findnr
6. Agrep becomes very slow after using the quickfix list many times. This is what I see in the profiler log:
% cumulative self self total
time seconds seconds calls ms/call ms/call name
55.98 4.21 4.21 33246 0.13 0.13 qf_mark_adjust <<<
12.10 5.12 0.91 50685 0.02 0.02 buf_valid
6.38 5.60 0.48 11091 0.04 0.05 buflist_findpat
5.05 5.98 0.38 13256 0.03 0.03 buflist_findnr
It looks like qf_mark_adjust is called each time a line is appended to a buffer using 'out_io': 'buffer', once for each entry in every available quickfix list. Can we avoid this?
Thanks,
Ramel
Because adding quickfix entries was slow, I created my own buffer to display the search results (you can see an animated GIF here: http://i.imgur.com/epffEDH.gif).
I need to perform some manipulations on the grep results in order to get the column numbers and to highlight the matching text. Currently this is done in the out_cb function, which sends the modified line to my special buffer via a separate 'cat' job (this is the workaround I found as a replacement for setbufline() :)).
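For reference, that workaround looks roughly like this (a simplified sketch; the job command, buffer name and the parsing step are placeholders for what the plugin actually does):

  " A 'cat' job whose only purpose is to append lines to the results
  " buffer, since there is no setbufline() yet.
  let s:writer = job_start('cat', {'out_io': 'buffer', 'out_name': 'Agrep'})

  func! Out_cb(channel, msg)
    " Parse/modify the grep line here (placeholder), then forward it
    " to the results buffer through the cat job's channel.
    let modified = a:msg
    call ch_sendraw(job_getchannel(s:writer), modified . "\n")
  endfunc

  call job_start(['grep', '-rn', 'pattern', '.'], {'out_cb': 'Out_cb'})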
> The reason this is needed, is that when you have a list of errors in
> various files, which are at specific line numbers, and you make changes
> in files, the line numbers need to be adjusted.
>
> Since the qf_last change helped a lot, I suspect just going through all
> the quickfix entries is making it slow. We would need to use another
> data structure, which lists all the quickfix entries related to a
> buffer. Then we only need to look at the ones that might actually
> change. Keeping that list updated will be extra work though. In
> different circumstances it may actually make it slower.
>
I think that in cases like this - when lines are appended to a buffer by a job (using 'out_io': 'buffer') - we don't need to adjust the quickfix marks, since the target buffer is probably not included in any quickfix list. Is there an option corresponding to the :lockmarks command (like 'eventignore' and :noautocmd)?
Because of the serious performance impact of many quickfix entries, I think we should have a built-in command for freeing a quickfix list. I guess I can use :call setqflist([], 'r'), but it'll leave an empty list, and it'd be nicer to remove the list completely.
Also, I noticed that using :call setqflist([]) while there is only one list will add a new empty list, but will delete the last list when there is more than one. According to the help it should behave like setqflist([], 'r'):
    If you supply an empty {list}, the quickfix list will be
    cleared.
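For comparison, the documented way to clear the current list in place is the 'r' (replace) action, which behaves the same regardless of how many lists exist:

  " Replace the current quickfix list with an empty one, without
  " touching any older or newer lists in the stack.
  call setqflist([], 'r')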
Hi Bram,
1) Actually, there is a bug here which is not exactly what I described earlier. The issue is that :call setqflist([]), instead of adding one more list after the last one, clears the next list and deletes the ones after it. For example, say I used :grep 4 times, so I now have 4 lists. Now, if I do :colder 3, the first list becomes the current one. :call setqflist([]) will then empty list 2 and delete lists 3 and 4.
2) Although quickfix performance got much better, I'm afraid there is more to do:
a) It takes me about 9 seconds(!) to add 68,505 entries to the qf list (using a simple :grep command; time was measured from after the grep command output finished, of course). Yegappan's test takes only 2 seconds because all the results come from the same file. Try adding the loop variable to the file name in his test and see what happens...
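The kind of test meant here can be sketched like this (the original test script is not shown in this thread, so the names and counts are illustrative):

  " Build entries that all point at distinct file names, so every
  " entry forces a separate buffer lookup instead of hitting the
  " same buffer each time.
  let entries = []
  for i in range(1, 68505)
    call add(entries, {'filename': 'file' . i . '.txt', 'lnum': 1, 'text': 'match'})
  endfor
  let start = reltime()
  call setqflist(entries)
  echo reltimestr(reltime(start))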
This is the profiler log of the search I did:
% cumulative self self total
time seconds seconds calls ms/call ms/call name
40.63 1.69 1.69 68510 0.02 0.02 buf_valid
21.88 2.60 0.91 68505 0.01 0.02 buflist_findname_stat
12.38 3.12 0.52 127664549 0.00 0.00 otherfile_buf
4.81 3.32 0.20 4345 0.05 0.05 buflist_findnr
b) I remember you did two things to avoid the line adjustments when they're not necessary: first checking whether the buffer has a quickfix entry, and not calling line adjustment when adding a line at the end.
I still see that adding to a buffer is very slow when there are many qf entries. After I had the 68505 results of the :grep command, adding ~84000 lines to a buffer (from channel output) became really slow. This is the profiler log:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls s/call s/call name
31.46 6.98 6.98 149100 0.00 0.00 buf_valid
29.20 13.46 6.48 80590 0.00 0.00 buflist_findpat
26.66 19.38 5.92 84950 0.00 0.00 buflist_findnr
3.56 20.17 0.79 68507 0.00 0.00 buflist_findname_stat
2.28 20.67 0.51 127673186 0.00 0.00 otherfile_buf
It seems like just checking the buffer each time takes a lot of the time. Is there any way to optimize the above functions?
Also, why do I still see all these buffer functions even though all the lines were added at the end of the buffer?
Thanks,
Ramel
>
> The file name would be turned into a buffer only when jumping to the
> location then.
>
> I suppose this is a property of the quickfix list. Perhaps with a
> function like "setqfflag()"? Would not work with commands like :cfile
> though.
Sounds good to me, although I didn't understand the :cfile problem you mentioned.
>
> > > > b) I remember you did 2 things in order to avoid the line
> > > > adjustments when it's not necessary: check first if the buffer has a
> > > > quickfix entry and don't call line adjustment when adding a line at
> > > > the end.
> > > >
> > > > I still see that adding to a buffer when there are many qf entries
> > > > is very slow. After I had the 68505 results of the :grep command,
> > > > adding ~84000 lines to a buffer (from channel output) became really
> > > > slow. This is the profiler log:
> > >
> > > How are the lines added?
> > The lines are added by using 'out_io': 'buffer'. Without having many
> > (4000) unlisted buffers adding 80000 lines takes about 1.2 seconds.
> > The unlisted buffers slow this down to ~6 seconds. If, in addition, I
> > add an out_cb to this job which calls setbufvar() it can take even 15
> > seconds.
>
> OK, so it's not adding the lines that's slow, but checking every time
> whether the buffer pointer we have is still valid.
>
Sorry, my mistake. I don't know what was wrong with my previous test (maybe I was testing with the profiler build..), but the only difference I notice now is when calling setbufvar(). There is no real difference when only adding lines.
> > > > Each sample counts as 0.01 seconds.
> > > > % cumulative self self total
> > > > time seconds seconds calls s/call s/call name
> > > > 31.46 6.98 6.98 149100 0.00 0.00 buf_valid
> > > > 29.20 13.46 6.48 80590 0.00 0.00 buflist_findpat
> > > > 26.66 19.38 5.92 84950 0.00 0.00 buflist_findnr
> > > > 3.56 20.17 0.79 68507 0.00 0.00 buflist_findname_stat
> > > > 2.28 20.67 0.51 127673186 0.00 0.00 otherfile_buf
> > > >
> > > > It seems like checking the buffer each time alone takes a lot of time.
> > > > Is there any way to optimize the above functions?
> > > > Also, why I still see all these buffer functions even though all the
> > > > lines were added at the end of the buffer?
> > >
> > > If you use 'errorformat' it will locate the file name and find out what
> > > buffer has that file name.
> > >
> > I'm not using this in this case.
>
> Ehm, you do get those file names added as buffers, thus that must happen
> somewhere.
>
> Anyway, it seems that your problem is that you use thousands of buffers
> and that's something Vim wasn't prepared for.
I know that this is not the typical use case but, as I said, during a long Vim session there may eventually be thousands of unlisted buffers. I think the solution you've proposed would make quickfix much more robust.