[[done]], using xapian-omega! --[[Joey]]

After using it for a while, my feeling is that hyperestraier, as used in
the [[plugins/search]] plugin, is not robust enough for ikiwiki. It doesn't
upgrade well, and it has a habit of sig-11 on certain input from time to
time.

So some other engine should be found and used instead. 

Enrico had one that he was using for debtags stuff that looked pretty good.
That was [Xapian](http://www.xapian.org/), which has perl bindings in
libsearch-xapian-perl. The nice thing about xapian is that it does a ranked
search so it understands what words are most important in a search. (So
does Lucene..) Another nice thing is it supports "more documents like this
one" kind of search. --[[Joey]]

## xapian

I've investigated xapian briefly. I think a custom xapian indexer and use
of omega for cgi searches could work well for ikiwiki. --[[Joey]]

### indexer

A custom indexer is needed because omindex doesn't meet ikiwiki's needs
for incremental rendering. (And because, since ikiwiki already has page
info in memory, it's silly to write it to disk and have omindex read it
back.)

The indexer would run as an ikiwiki hook. It needs to be passed the page
name and the content. Which hook to use is an open question.
Possibilities:

* `filter` - Since this runs before preprocess, only the actual text
  written on the page would be indexed. Not text generated by directives,
  pulled in by inlining, etc. There's something to be said for that, and
  something to be said against it. It would also get mostly
  markdown-formatted content, though it would still need to strip html,
  and probably preprocessor directives too.
* `sanitize` - Would get the htmlized content, so would need to strip html.
  Preprocessor directive output would be indexed. Doesn't get a destpage
  parameter, making optimisation hard.
* `format` - Would get the entire html page, including the page template.
  Probably not a good choice as indexing the same template for each page
  is unnecessary.

The hook would remove any html from the content, and index it.
It would need to add the same document data that omindex would.

The indexer (and deleter) will need a way to figure out the ids in xapian
of the documents to delete. One way is storing the id of each page in the
ikiwiki index.

The other way would be adding a special term to the xapian db that can be
used with replace_document_by_term/delete_document_by_term. 
Hmm, let's use a term named "P<pagename>".
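
A minimal sketch of what that indexing and deleting code might look like,
using Search::Xapian from libsearch-xapian-perl and the "P<pagename>" term
idea above. The function names, the crude html stripping, and the plain-text
document data are placeholders, not a finished design:

<pre>
# Assumes it runs inside an ikiwiki plugin, so %config is available,
# and that the .ikiwiki/xapian directory already exists.
use Search::Xapian;

sub xapian_db () {
    return Search::Xapian::WritableDatabase->new(
        $config{wikistatedir}."/xapian/default",
        Search::Xapian::DB_CREATE_OR_OPEN());
}

sub xapian_index ($$) {
    my ($page, $content) = @_;

    $content =~ s/<[^>]+>/ /g;    # crude html stripping, placeholder only

    my $doc = Search::Xapian::Document->new();
    # A real indexer would store the same document data fields that
    # omega's templates expect; plain text keeps the sketch short.
    $doc->set_data($content);
    $doc->add_term("P$page");     # unique term, used for replace/delete

    my $tg = Search::Xapian::TermGenerator->new();
    $tg->set_stemmer(Search::Xapian::Stem->new("english"));
    $tg->set_document($doc);
    $tg->index_text($content);

    xapian_db()->replace_document_by_term("P$page", $doc);
}

sub xapian_delete ($) {
    my $page = shift;
    xapian_db()->delete_document_by_term("P$page");
}
</pre>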

The hook should try to avoid re-indexing pages that have not changed since
they were last indexed. One problem is that, if a page with an inline is
built, every inlined item will get each hook run, so a naive hook would
re-index each of those items even though none of them have necessarily
changed. Date stamps are one possibility. Another would be to have the
hook skip indexing when `%preprocessing` is set (IkiWiki.pm would need to
expose that variable). Another approach would be to use a needsbuild hook
and only index the pages that are being built.
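
For the needsbuild approach, a small sketch of the bookkeeping (assuming
the needsbuild hook is passed a reference to the list of source files that
are about to be built):

<pre>
my %toindex;

hook(type => "needsbuild", id => "xapian", call => \&needsbuild);

sub needsbuild ($) {
    my $needsbuild = shift;    # ref to list of source files to be built
    $toindex{pagename($_)} = 1 foreach @$needsbuild;
    return $needsbuild;
}

# ...and the indexing code would then do nothing for pages
# that are not in %toindex.
</pre>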

### cgi

The cgi hook would exec omega to handle the searching, much as is done
with estseek in the current search plugin.

It would first set `OMEGA_CONFIG_FILE=.ikiwiki/omega.conf`; that omega.conf
would set `database_dir=.ikiwiki/xapian` and probably also set a custom
`template_dir`, which would have modified templates branded for ikiwiki. So
the actual xapian db would be in `.ikiwiki/xapian/default/`.
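
A sketch of the cgi hook (the omega binary path is the Debian default and
would presumably be made configurable; 'P' is omega's query parameter, and
an absolute path to omega.conf is used here rather than the relative one
above):

<pre>
sub cgi ($) {
    my $cgi = shift;

    if (defined $cgi->param('P')) {
        # only works for GET requests, like the estseek version
        $ENV{OMEGA_CONFIG_FILE} = $config{wikistatedir}."/omega.conf";
        exec("/usr/lib/cgi-bin/omega/omega") || error("omega failed");
    }
}
</pre>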

## lucene

>> I've done a bit of prototyping on this. The current hip search library is [Lucene](http://lucene.apache.org/java/docs/). There's a Perl port called [Plucene](http://search.cpan.org/~tmtm/Plucene-1.25/). Given that it's already packaged as `libplucene-perl`, I assumed it would be a good starting point. I've written a **very rough** patch against `IkiWiki/Plugin/search.pm` to handle the indexing side (there's no facility to view the results yet, although I have a command-line interface working). That's below, and should apply to SVN trunk.

>> Of course, there are problems. ;-)

>> * Plucene throws up a warning when running under Taint mode. There's a patch on the mailing list, but I haven't tried applying it yet. So for now you'll have to build IkiWiki with `NOTAINT=1 make install`.
>> * If I kill `ikiwiki` while it's indexing, I can screw up Plucene's locks. I suspect that this will be an easy fix.

>> There is a [C++ port of Lucene](http://sourceforge.net/projects/clucene/) which is packaged as `libclucene0`. The Perl interface to this is called [Lucene](http://search.cpan.org/~tbusch/Lucene-0.09/lib/Lucene.pm). This is supposed to be significantly faster, and presumably won't have the taint bug. The API is virtually the same, so it will be easy to switch over. I'd use this now, were it not for the lack of a package. (I assume you won't want to make core functionality depend on installing a module from CPAN.) I've never built a Debian package before, so I can either learn and then try building this, or somebody else could do the honours. ;-)

>> If this seems a sensible approach, I'll write the CGI interface, and clean up the plugin. -- Ben

>>> The weird thing about lucene is that these are all reimplementations of
>>> it. Thank you java.. The C++ version seems like a better choice to me
>>> (packages are trivial). --[[Joey]]

> Might I suggest renaming the "search" plugin to "hyperestraier", and then creating new search plugins for different engines?  No reason to pick a single replacement. --[[JoshTriplett]]

<pre>
Index: IkiWiki/Plugin/search.pm
===================================================================
--- IkiWiki/Plugin/search.pm    (revision 2755)
+++ IkiWiki/Plugin/search.pm    (working copy)
@@ -1,33 +1,55 @@
 #!/usr/bin/perl
-# hyperestraier search engine plugin
 package IkiWiki::Plugin::search;
 
 use warnings;
 use strict;
 use IkiWiki;
 
+use Plucene::Analysis::SimpleAnalyzer;
+use Plucene::Document;
+use Plucene::Document::Field;
+use Plucene::Index::Reader;
+use Plucene::Index::Writer;
+use Plucene::QueryParser;
+use Plucene::Search::HitCollector;
+use Plucene::Search::IndexSearcher;
+
+#TODO: Run the Plucene optimiser after a rebuild
+#TODO: CGI query interface
+
+my $PLUCENE_DIR;
+# $config{wikistatedir} may not be defined at this point, so we delay setting $PLUCENE_DIR
+# until a subroutine actually needs it.
+sub init () {
+  error("Plucene: Statedir <$config{wikistatedir}> does not exist!") 
+    unless -e $config{wikistatedir};
+  $PLUCENE_DIR = $config{wikistatedir}.'/plucene';  
+}
+
 sub import {
-       hook(type => "getopt", id => "hyperestraier",
-               call => \&getopt);
-       hook(type => "checkconfig", id => "hyperestraier",
+       hook(type => "checkconfig", id => "plucene",
                call => \&checkconfig);
-       hook(type => "pagetemplate", id => "hyperestraier",
-               call => \&pagetemplate);
-       hook(type => "delete", id => "hyperestraier",
+       hook(type => "delete", id => "plucene",
                call => \&delete);
-       hook(type => "change", id => "hyperestraier",
+       hook(type => "change", id => "plucene",
                call => \&change);
-       hook(type => "cgi", id => "hyperestraier",
-               call => \&cgi);
 }
 
-sub getopt () {
-        eval q{use Getopt::Long};
-       error($@) if $@;
-        Getopt::Long::Configure('pass_through');
-        GetOptions("estseek=s" => \$config{estseek});
-}
 
+sub writer {
+  init();
+  return Plucene::Index::Writer->new(
+      $PLUCENE_DIR, Plucene::Analysis::SimpleAnalyzer->new(), 
+      (-e "$PLUCENE_DIR/segments" ? 0 : 1));
+}
+
+#TODO: Better name for this function.
+sub src2rendered_abs (@) {
+  return map { Encode::encode_utf8($config{destdir}."/$_") } 
+    map { @{$renderedfiles{pagename($_)}} } 
+    grep { defined pagetype($_) } @_;
+}
+
 sub checkconfig () {
        foreach my $required (qw(url cgiurl)) {
                if (! length $config{$required}) {
@@ -36,112 +58,55 @@
        }
 }
 
-my $form;
-sub pagetemplate (@) {
-       my %params=@_;
-       my $page=$params{page};
-       my $template=$params{template};
+#my $form;
+#sub pagetemplate (@) {
+#      my %params=@_;
+#      my $page=$params{page};
+#      my $template=$params{template};
+#
+#      # Add search box to page header.
+#      if ($template->query(name => "searchform")) {
+#              if (! defined $form) {
+#                      my $searchform = template("searchform.tmpl", blind_cache => 1);
+#                      $searchform->param(searchaction => $config{cgiurl});
+#                      $form=$searchform->output;
+#              }
+#
+#              $template->param(searchform => $form);
+#      }
+#}
 
-       # Add search box to page header.
-       if ($template->query(name => "searchform")) {
-               if (! defined $form) {
-                       my $searchform = template("searchform.tmpl", blind_cache => 1);
-                       $searchform->param(searchaction => $config{cgiurl});
-                       $form=$searchform->output;
-               }
-
-               $template->param(searchform => $form);
-       }
-}
-
 sub delete (@) {
-       debug(gettext("cleaning hyperestraier search index"));
-       estcmd("purge -cl");
-       estcfg();
+       debug("Plucene: purging: ".join(',',@_));
+       init();
+  my $reader = Plucene::Index::Reader->open($PLUCENE_DIR);
+  my @files = src2rendered_abs(@_);
+  for (@files) {
+    $reader->delete_term( Plucene::Index::Term->new({ field => "id", text => $_ }));
+  }
+  $reader->close;
 }
 
 sub change (@) {
-       debug(gettext("updating hyperestraier search index"));
-       estcmd("gather -cm -bc -cl -sd",
-               map {
-                       Encode::encode_utf8($config{destdir}."/".$_)
-                               foreach @{$renderedfiles{pagename($_)}};
-               } @_
-       );
-       estcfg();
+       debug("Plucene: updating search index");
+  init();
+  #TODO: Do we want to index source or rendered files?
+  #TODO: Store author, tags, etc. in distinct fields; may need new API hook.
+  my @files = src2rendered_abs(@_);
+  my $writer = writer();    
+   
+  for my $file (@files) {
+    my $doc = Plucene::Document->new;
+    $doc->add(Plucene::Document::Field->Keyword(id => $file));
+    my $data;
+    eval { $data = readfile($file) };
+    if ($@) {
+      debug("Plucene: can't read <$file> - $@");
+      next;
+    }
+    debug("Plucene: indexing <$file> (".length($data).")");
+    $doc->add(Plucene::Document::Field->UnStored('text' => $data));
+    $writer->add_document($doc);
+  }
 }
-
-sub cgi ($) {
-       my $cgi=shift;
-
-       if (defined $cgi->param('phrase') || defined $cgi->param("navi")) {
-               # only works for GET requests
-               chdir("$config{wikistatedir}/hyperestraier") || error("chdir: $!");
-               exec("./".IkiWiki::basename($config{cgiurl})) || error("estseek.cgi failed");
-       }
-}
-
-my $configured=0;
-sub estcfg () {
-       return if $configured;
-       $configured=1;
-
-       my $estdir="$config{wikistatedir}/hyperestraier";
-       my $cgi=IkiWiki::basename($config{cgiurl});
-       $cgi=~s/\..*$//;
-
-       my $newfile="$estdir/$cgi.tmpl.new";
-       my $cleanup = sub { unlink($newfile) };
-       open(TEMPLATE, ">:utf8", $newfile) || error("open $newfile: $!", $cleanup);
-       print TEMPLATE IkiWiki::misctemplate("search", 
-               "<!--ESTFORM-->\n\n<!--ESTRESULT-->\n\n<!--ESTINFO-->\n\n",
-               baseurl => IkiWiki::dirname($config{cgiurl})."/") ||
-                       error("write $newfile: $!", $cleanup);
-       close TEMPLATE || error("save $newfile: $!", $cleanup);
-       rename($newfile, "$estdir/$cgi.tmpl") ||
-               error("rename $newfile: $!", $cleanup);
-
-       $newfile="$estdir/$cgi.conf";
-       open(TEMPLATE, ">$newfile") || error("open $newfile: $!", $cleanup);
-       my $template=template("estseek.conf");
-       eval q{use Cwd 'abs_path'};
-       $template->param(
-               index => $estdir,
-               tmplfile => "$estdir/$cgi.tmpl",
-               destdir => abs_path($config{destdir}),
-               url => $config{url},
-       );
-       print TEMPLATE $template->output || error("write $newfile: $!", $cleanup);
-       close TEMPLATE || error("save $newfile: $!", $cleanup);
-       rename($newfile, "$estdir/$cgi.conf") ||
-               error("rename $newfile: $!", $cleanup);
-
-       $cgi="$estdir/".IkiWiki::basename($config{cgiurl});
-       unlink($cgi);
-       my $estseek = defined $config{estseek} ? $config{estseek} : '/usr/lib/estraier/estseek.cgi';
-       symlink($estseek, $cgi) || error("symlink $estseek $cgi: $!");
-}
-
-sub estcmd ($;@) {
-       my @params=split(' ', shift);
-       push @params, "-cl", "$config{wikistatedir}/hyperestraier";
-       if (@_) {
-               push @params, "-";
-       }
-
-       my $pid=open(CHILD, "|-");
-       if ($pid) {
-               # parent
-               foreach (@_) {
-                       print CHILD "$_\n";
-               }
-               close(CHILD) || print STDERR "estcmd @params exited nonzero: $?\n";
-       }
-       else {
-               # child
-               open(STDOUT, "/dev/null"); # shut it up (closing won't work)
-               exec("estcmd", @params) || error("can't run estcmd");
-       }
-}
-
-1
+1;
</pre>