I'm wondering if Picky is the right tool for a website search
engine:
I am building the website in nanoc, so I have the data in
categories like title, body, and tag. Ideally, the indexer should build
the index when nanoc generates the site (offline), so all that category
information is preserved, and let me deploy the index to the server
together with the static website.
I imagine I can integrate a Sinatra server with the static pages to find
& display the results.
Do you think this makes sense?
Cheers
Michael
--
Michael Below <be...@judiz.de>
That's an interesting question. It seems to me there are two different answers.
1) The technical answer is: yes, that's easily possible, and it fits really well with the classical method of first generating an index and then loading it once when the Picky server is started. And you'll greatly benefit from Picky returning categorized results, and doing so fast.
2) The rather philosophical answer derives from the premise that you're using a static site engine for a good reason. The two most obvious ones would be: your hosting company only supports static HTML, or you have a lot of traffic (as in millions of page views per day). If the latter is the case, Picky would be a good match, as it can handle a lot of traffic, fast. In the former case you would lose the benefit of being able to just deploy static pages. Then you'd perhaps rather look for a solution like Google site search or (if your index is small) a JavaScript-based search.
This gets me thinking... (perhaps you should stop reading here) could all responses for, let's say, *every single-word search* be generated with Picky and put into static files? How many files would that be? I hacked together a small script to download a Wikipedia page and tokenize it with one of Picky's tokenizers:
https://gist.github.com/1331581
In the first case the text has 5799 words, 768 of them unique. This results in 3816 substrings, which would mean as many separate static JSON documents to generate and deploy. I tried another Wikipedia text with 17778 words and 1911 uniques: 9000 substrings. Mind that the number of documents wouldn't matter, only the total number of words. And this certainly only works for single-word searches. It would be fun to play with larger text bodies.
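That estimate can also be reproduced without Picky at all: for single-word prefix search, each unique prefix of each unique word needs its own static response. A rough sketch, using a plain downcased word split as a stand-in for Picky's actual tokenizer (which differs):

```ruby
# Estimate how many distinct static JSON responses a single-word
# prefix search would need: one per unique prefix of a unique word.
def search_space(text)
  words    = text.downcase.scan(/[[:alpha:]]+/)
  uniques  = words.uniq
  prefixes = uniques.flat_map { |w| (1..w.length).map { |i| w[0, i] } }.uniq
  { words: words.size, uniques: uniques.size, prefixes: prefixes.size }
end

p search_space("the quick brown fox jumps over the lazy dog")
# prints the three counts: 9 words, 8 uniques, 32 prefixes
```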
OK. Crazy. Sorry for hijacking your thread with this sort of craziness.
Niko.
Probably a JavaScript-based solution would be enough for my project, but I haven't found anything convincing: IIRC the Compass documentation builds a JSON-based index from within nanoc and searches it with JavaScript, but that solution reaches its limits; the index is bloated with partial words etc. So I looked for a solution with a better indexer and found Picky...
Michael
The nanoc data is in text files; they have a YAML header with title, date, etc., and Markdown content. While the nanoc compiler runs, the content is represented in Ruby objects (each item has title, date, etc.). I think that would be the right time to run the indexer (as pre- or postprocessing).
I am not really decided on the output question. I like the Picky way of showing the categories, limiting queries, etc., but it's not strictly necessary. The web site addresses the general public, not library research assistants, so many of the possibilities probably won't be used. My goal is more like: the interface should be user-friendly, and search should distinguish between words in the main content or tags of a blog article, and words in a tag cloud that happens to be on the same page...
Michael
On Wednesday, 02.11.2011, at 00:06 -0700, Picky / Florian Hanke wrote:
> That sounds good. Regarding the data: Can you hook into the compilation
> somehow?
Denis, the author of nanoc, advised me on IRC to do this via a Rake
file. That way I don't have to run the indexer on every recompile of the
site. That would be something like:
site = Nanoc3::Site.new('.')

PagesIndex = Picky::Index.new(:pages) do
  source { site.items }
  category :title
  category :tag
  category :description
  # ...
end
This sounds like a good idea, will try it... I will report back as soon
as I get that far.
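For context, the snippet above could be wrapped in a Rake task along these lines. This is only a sketch, not tested against real gem versions; the site.compile call and the final index invocation are assumptions based on what is discussed later in this thread:

```
# Rakefile (sketch)
require 'nanoc3'
require 'picky'

desc 'Build the Picky index from the compiled nanoc site'
task :index do
  site = Nanoc3::Site.new('.')
  site.compile  # make compiled content available to the source block

  pages_index = Picky::Index.new(:pages) do
    source { site.items }
    category :title
    category :tag
    category :description
  end

  pages_index.index  # write the index files to disk
end
```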
> I do not know how well you know Ruby.
My Ruby knowledge is mostly how to plug some elements together --
luckily, Ruby seems to be good for this approach, there are a lot of
building blocks I can use... :-)
> Regarding the interface:
> It might be a good idea to just use simple javascript and do a JSON request
> to the Picky server itself. Then, it would display the results in a very
> simple way, clickable. Again, I do not know how well you are versed in
> Javascript.
Not very much; I still have to get to the stage where I can plug
something together that makes sense... But that probably means I should
learn it at some point.
On Wednesday, 02.11.2011, at 13:53 -0700, Picky / Florian Hanke wrote:
> > site = Nanoc3::Site.new('.')
> > PagesIndex = Picky::Index.new(:pages) do
> > source { site.items }
> > category :title
> > category :tag
> > category :description
> > # ...
> > end
> >
> > This sounds like a good idea, will try it... I will report back as soon
> > as I got that far.
> >
> That is perfect, and also a good idea. Good luck!
I didn't get too far: I have added the above to app.rb (in the
all_in_one config). When I try to build an index with rake, it throws an
error because #id is no longer defined. The friendly people on
#ruby-lang are telling me: "Use #object_id, if you really must"
$ rake index
Loaded picky with environment 'development' in /home/mbelow/html/judiz
on Ruby 1.9.3.
:public is no longer used to avoid overloading Module#public,
use :public_folder instead
from /home/mbelow/html/judiz/app.rb:55:in `<class:CommentSearch>'
Application loaded.
16:27:18: Indexing using 4 processors, in random order.
16:27:23: "development:pages": Starting parallel data preparation.
/home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/indexers/parallel.rb:41:in `block in process': undefined method `id' for <Nanoc3::Item:0x12209ec identifier=/stylesheet/ binary?=false>:Nanoc3::Item (NoMethodError)
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/indexers/parallel.rb:40:in `each'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/indexers/parallel.rb:40:in `process'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/indexers/base.rb:23:in `index'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/index_indexing.rb:78:in `index_in_parallel'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/index_indexing.rb:27:in `index'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/cores.rb:53:in `call'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/cores.rb:53:in `block (2 levels) in forked'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/cores.rb:51:in `fork'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/cores.rb:51:in `block in forked'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/cores.rb:41:in `loop'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/cores.rb:41:in `forked'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/picky/indexes_indexing.rb:30:in `index'
from (__DELEGATION__):2:in `index'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/picky-3.3.3/lib/tasks/index.rake:10:in `block in <top (required)>'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `call'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `block in execute'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `each'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `execute'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/task.rb:158:in `block in invoke_with_call_chain'
from /home/mbelow/.rvm/rubies/ruby-1.9.3-p0/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/task.rb:151:in `invoke_with_call_chain'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/task.rb:144:in `invoke'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/application.rb:116:in `invoke_task'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block (2 levels) in top_level'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `each'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block in top_level'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/application.rb:88:in `top_level'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/application.rb:66:in `block in run'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/lib/rake/application.rb:63:in `run'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0@global/gems/rake-0.9.2.2/bin/rake:33:in `<top (required)>'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/bin/rake:19:in `load'
from /home/mbelow/.rvm/gems/ruby-1.9.3-p0/bin/rake:19:in `<main>'
16:27:23: Indexing finished.
> Alright. It's probably a good idea to do the first step first (get a
> running server with good results) and then think about the interface.
I have been thinking about something like this: I could probably add a
simple search form to my site template that forwards the user to
something like your search page. If I understand things right, the
all-in-one solution takes one server process, just like a backend server
listening for JavaScript JSON requests would, right?
> Btw, did you know that you can search a server directly from the terminal?
> "picky search http://localhost:4567/pages" Also see
> http://florianhanke.com/blog/2011/04/11/searching-with-picky-rake-search.html
> (I need to add that you might need to define the whole URL, not just the
> path)
Sounds useful, will try that...
> I didn't get too far: I have added the above to app.rb (in the
> all_in_one config). When I try to build an index with rake, it throws an
> error because #id is no longer defined. The friendly people on
> #ruby-lang are telling me: "Use #object_id, if you really must"
Further idea: maybe it's better to build an index based on the canonical
URL for a page, i.e. url_for(item), instead of runtime IDs? I don't see
how Picky stores that ID 4711 is actually
http://do.main.com/impressum/index.html
I guess usually there is a content server running that knows the IDs,
but does this work with static pages? (Or maybe I am missing something?)
> If yes, you can extend the Nanoc Items class (I have no idea what it is
> called, I'm sorry) like this, for example:
> module Nanoc
>   class Item
>     def id
>       url
>     end
>   end
> end
>
> Before indexing, Picky will load this and the Nanoc class will
> automatically return the url as its id.
I tried something like that, using the path. But somehow it looks like
Picky stores a 0 instead of the path string; the JSON files look like:
{"kosten":[0],"seite":[0],"nicht":[0],...
Any idea why?
Cheers!
(Message from mobile, hence short)
On Wednesday, 09.11.2011, at 01:28 +1100, Florian Hanke wrote:
> Yes. Picky does not know the id type - you can tell it that it should assume it's symbols by setting
> key_format :to_sym
> inside the index definition.
Yes, that does it. Nice, indexing works!
Now I am also indexing the item content (before layout, i.e. just the
article text), and I have noticed that words are indexed every time they
appear: some words have three or four entries for the same item, and
maybe one more for another item.
I guess that can make sense if the results are weighted, like "this is
90% relevant" vs. "this is 30% relevant". Does Picky do that? If this is
more an unintended consequence of my unusual use case, should I
try to "clean" the index somehow?
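As an aside, occurrence-based weighting of the kind described here can be illustrated in plain Ruby. This is only a toy sketch, not Picky's actual scoring:

```ruby
# Toy relevance weighting: the more often a word's postings list
# mentions an id, the higher that id's weight for the word.
def occurrence_weights(postings)
  counts = postings.tally                  # { id => number of occurrences }
  max = counts.values.max.to_f
  counts.transform_values { |c| (c / max).round(2) }
end

p occurrence_weights(%w[page1 page1 page1 page2])
# page1 => 1.0, page2 => 0.33
```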
On Tuesday, 08.11.2011, at 15:52 -0800, Picky / Florian Hanke wrote:
> > Now I am also indexing the item content (before layout, i.e. just the
> > article text), and i have noticed that words are indexed every time they
> > appear: some words have three or four entries for the same item, and
> > maybe one more for another item.
> >
> > I guess that can make sense if the results are weighted, like "this is
> > 90% relevant" - "this is 30% relevant". Does picky do that? If this is
> > more like a unintended consequence from my strange use case, should I
> > try to "clean" the index somehow?
> >
> I am not perfectly sure what you mean. Did you look at the indexes and see
> that one word references the same id multiple times, like so:
> :word => [1, 1, 1, 3, 1]
> Or something like that?
Yes, the JSON file body_exact_inverted.memory.json contains entries
like: "der":["page1","page1","page1","page1","page2","page2","page3"]
Now this tells me that I should tweak the list of stop words, but it
also makes me wonder if this shouldn't be:
"der":["page1","page2","page3"]
Best
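For what it's worth, the duplicated id lists could be cleaned in a post-processing step. A minimal sketch, assuming the dump is a flat { token => [ids] } JSON object as the body_exact_inverted.memory.json excerpt above suggests:

```ruby
require 'json'

# Deduplicate the id arrays in a Picky-style inverted index dump,
# preserving the first-seen order of the ids.
def dedupe_index(json_text)
  index = JSON.parse(json_text)
  index.each { |token, ids| index[token] = ids.uniq }
end

puts JSON.generate(dedupe_index('{"der":["page1","page1","page2","page1"]}'))
# => {"der":["page1","page2"]}
```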
You are absolutely right on both counts. I am wondering what's happening here. How do you index? Using a source and "rake index"?
Feel free to post your app.rb so we can try to reproduce the problem (or send it to my email address if it is too public for you).
Thanks for your perseverance!
Florian
On Wednesday, 09.11.2011, at 23:51 +1100, Florian Hanke wrote:
> You are absolutely right on both counts. I am wondering what's happening here. How do you index? Using a source and "rake index"?
>
> Feel free to post your app.rb so we can try to reproduce the problem (or send it to my email address if it is too public for you).
I am attaching the app.rb (the search part doesn't work yet). The
interesting bit is probably how I get the body content: the text is in
Markdown files. Those are processed through ERB and RDiscount
(compiled_content), but no layout is added (therefore it's called
the :pre snapshot). I am running that through Nokogiri to retrieve the
text.
Cheers
On Thursday, 10.11.2011, at 00:19 -0800, Picky / Florian Hanke wrote:
> P.S: Or better, 3.4.3.
Should we take this off-list? Maybe this thread is getting a bit long
for a public mailing list...
Anyway, I installed 3.4.3 (instead of 3.4.0), and now I have an
interesting new problem:
13:55:54: Indexing using 4 processors, in random order.
13:55:54: "development:pages": Starting parallel data preparation.
/home/mbelow/.rvm/gems/ruby-1.9.3-p0/gems/nanoc3-3.2.3/lib/nanoc3/base/result_data/item_rep.rb:243:in `compiled_content': The current item cannot be compiled yet because of an unmet dependency on the “/kosten/” item (rep “default”). (Nanoc3::Errors::UnmetDependency)
This is the same error I had when I didn't call the site.compile method
before the index definitions: nanoc can't output the first item because
it isn't compiled yet.
I don't understand how this error is coming back now, when I am
explicitly calling the compile method. It looks like the body method is
used before compilation is done. Any ideas? Is there a way to get the
site compilation going earlier?
(Wild guess: maybe this is somehow caused by the 4 parallel indexing
threads? Can I tell Picky to go parallel only after the compile is
done?)
On Thursday, 10.11.2011, at 05:26 -0800, Picky / Florian Hanke wrote:
> It is surprising that it didn't occur in 3.4.0, as nothing groundbreaking
> has been changed. However, that might just have been luck.
Probably. I went back to 3.4.0, but the error is still there. There was
also an update for yajl when I did a gem update this morning, but I
don't think that is related...
> If it is the parallel indexing (in separate processes through forking, not
> threads), and assuming that site.compile does return before it is finished
> (if it wouldn't, the class would not finish loading until it was compiled
> and the forks would only be made later) – then maybe a simple sleep X after
> the site.compile is of help.
No, this doesn't help... I tried sleeping for up to 60 seconds, and
there is a notable pause, but the same error.
> rake index[pages,title] && rake index[pages,tags] etc.
For the title and the tags it works fine, but for the body I am still
getting that error.
** Execute index
16:21:53: "development:pages": Starting parallel data preparation.
rake aborted!
The current item cannot be compiled yet because of an unmet dependency
on the “/kosten/” item (rep “default”).
Best
On Thursday, 10.11.2011, at 16:36 -0800, Picky / Florian Hanke wrote:
> So: Does it sometimes occur and sometimes not, or is it all the time now?
Yes, it's all the time now... And all I remember doing in between on
that project was turning off the machine, turning it on again and doing
a gem update...
> I don't know Nanoc very well. When a Nanoc item cannot meet a dependency,
> it pushes the item to the back of its compilation queue and continues. If
> it then reaches the end, and the item still cannot meet dependencies, it
> will raise this error.
No, that error isn't produced during compilation; it happens in the
method that accesses the compiled content after compilation, see
http://nanoc.stoneship.org/docs/api/3.2/Nanoc3/Item.html#compiled_content-instance_method
In the source snippet there, it looks like nanoc runs a "compiled?"
check before returning the compiled content for an item. That makes
sense, but somehow this check seems to fail.
> I don't think this is a Picky problem (but let's try to test this
> assumption later). It just occurs at a time when Picky tries to access the
> site.items (in the source block).
Yes, looks like.
> Perhaps following this helps?
> https://groups.google.com/forum/#!topic/nanoc/NPErVZFrXlg
Hm, yes, the problem looks similar, but there seems to be no reply on
that question...
> Can you maybe just run this script?
> require 'nanoc3'
> site = Nanoc3::Site.new('.')
> site.compile
> site.items.reject { |item| item.identifier=="/stylesheet/"}.each { |item|
> item.body[1..10] }
Yes, same error there. Picky seems to be innocent :-)
I reduced the example to:
require 'nanoc3'
site = Nanoc3::Site.new('.')
site.compile
site.items.each { |myitem| myitem.compiled_content(:snapshot => :pre) }
That fails in a fresh nanoc site, which contains just two items
(/stylesheet/ and /). I sent a question on this to the nanoc mailing
list, let's see what they say.
> Could it be that you updated the site? Did you already try to compile in
> the usual Nanoc way? (Using a rake task, I assume)
No, I didn't update the site, and yes, "nanoc compile" runs fine...
Cheers
On Friday, 11.11.2011, at 03:42 -0800, Picky / Florian Hanke wrote:
> > Yes, it's all the time now... And all I remember doing in between on
> > that project was turning off the machine, turning it on again and doing
> > a gem update...
> >
> Also of the Nanoc gem?
No, only picky and yajl...
> Ok, I wish you all the best and don't hesitate to ask more questions (should
> you have them) if you get back to running Picky.
Thanks for your help, I hope I will get back on that soon...
On Wednesday, 09.11.2011, at 23:35 -0800, Picky / Florian Hanke wrote:
> I was able to reproduce the problem and am now fixing it.
> The interesting thing here is that in the results, the problem does not
> occur anymore. That is probably why nobody noticed it.
> I have probably introduced the error a few versions back and am adding a
> regression test for it.
I just got a solution for the nanoc problem (a tweak in compiler.rb so
it doesn't forget which pages are compiled), so now I can confirm this
bit: your fix works; there are only single entries in the index now.