author    tycho garen <garen@tychoish.com>  2012-01-26 21:28:19 -0500
committer tycho garen <garen@tychoish.com>  2012-01-26 21:28:19 -0500
commit    7bfa77380ac1fda68d224343c46b310779ce9980 (patch)
tree      d260cc607bf84e228550e17c942199316cf5b4be
parent    6226d1a7656954f6636932aec1f9badea582bafa (diff)
comment to multi-threading discussion
 doc/todo/multi-thread_ikiwiki.mdwn | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+), 0 deletions(-)
diff --git a/doc/todo/multi-thread_ikiwiki.mdwn b/doc/todo/multi-thread_ikiwiki.mdwn
index 396037fa7..3838103ff 100644
--- a/doc/todo/multi-thread_ikiwiki.mdwn
+++ b/doc/todo/multi-thread_ikiwiki.mdwn
@@ -10,3 +10,28 @@ Disclaimer: I know nothing of the Perl approach to parallel processing.
> I agree that it would be lovely to be able to use multiple processors to speed up rebuilds on big sites (I have a big site myself), but, taking a quick look at what Perl threads entails, and taking into account what I've seen of the code of IkiWiki, it would take a massive rewrite to make IkiWiki thread-safe - the API would have to be completely rewritten - and then more work again to introduce threading itself. So my unofficial humble opinion is that it's unlikely to be done.
> Which is a pity, and I hope I'm mistaken about it.
> --[[KathrynAndersen]]
+
+> > I have much less experience with the internals of Ikiwiki, let
+> > alone multi-threading Perl, but I agree that making Ikiwiki
+> > thread-safe and modifying it to really take advantage of the
+> > threads is probably beyond the realm of reasonable
+> > expectations. Having said that, I wonder if there aren't ways to
+> > make Ikiwiki perform better for these big cases where the only
+> > option is to wait for it to grind through everything. Something
+> > along the lines of doing all of the aggregation and dependency
+> > heavy stuff early on, and then doing all of the page rendering
+> > stuff at the end quasi-asynchronously? Or am I way off in the
+> > deep end?
+> >
+> > From a practical perspective, it seems like these massive rebuild
+> > situations represent a really small subset of ikiwiki builds. Most
+> > sites are pretty small, and most sites need full rebuilds very,
+> > very infrequently. In that scope, 10-minute rebuilds don't seem
+> > that bad. In terms of performance challenges, it's the one page
+> > with 3-5 dependencies that takes (say) 10 seconds to rebuild that's
+> > a larger challenge for Ikiwiki as a whole. At the same time, I'd
+> > be willing to bet that the performance benefit of using fast disks
+> > (i.e. SSDs) on these really big repositories could just about
+> > match the benefit of most of the threading/async work.
+> >
+> >
+> > --[[tychoish]]
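
The two-phase idea discussed above - do the serial aggregation and dependency work first, then render the independent pages concurrently at the end - can be sketched in miniature. This is not ikiwiki's code or API; `scan_dependencies` and `render_page` are hypothetical stand-ins, and the sketch uses Python's thread pool purely to illustrate the phase split:

```python
# Illustrative sketch only (not ikiwiki code): a serial dependency
# scan followed by concurrent rendering of pages that do not depend
# on one another.
from concurrent.futures import ThreadPoolExecutor


def scan_dependencies(pages):
    # Hypothetical stand-in for the "aggregation and dependency heavy"
    # phase: maps each page to the pages it depends on. Here every
    # page is independent, so the map is empty for each entry.
    return {page: [] for page in pages}


def render_page(page):
    # Hypothetical stand-in for rendering one source page to HTML.
    return "<html><!-- rendered: %s --></html>" % page


def build(pages):
    deps = scan_dependencies(pages)            # phase 1: serial
    independent = [p for p, d in deps.items() if not d]
    with ThreadPoolExecutor() as pool:         # phase 2: concurrent
        rendered = list(pool.map(render_page, independent))
    return dict(zip(independent, rendered))


if __name__ == "__main__":
    out = build(["index.mdwn", "blog.mdwn", "todo.mdwn"])
    print(len(out))  # → 3
```

Pages with unsatisfied dependencies would have to be held back and rendered once their prerequisites finish; that ordering problem, not the rendering itself, is where most of the real complexity would live.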