author     Joey Hess <joey@kitenet.net>  2010-09-15 16:24:50 -0400
committer  Joey Hess <joey@kitenet.net>  2010-09-15 16:24:50 -0400
commit     884835ce1cd3b1ba24d6f6bb19786d04e6b8ae90 (patch)
tree       d98d0960e513edb31f6cd9fa1fe15b04f61a6303 /doc/bugs
parent     0e89f374a6728e76b512d00224eb9a9455ed7939 (diff)
cutpaste: Fix bug that occurred in some cases involving inlines when text was pasted on a page before being cut.
Diffstat (limited to 'doc/bugs')
-rw-r--r--  doc/bugs/Error:_no_text_was_copied_in_this_page_--_missing_page_dependencies.mdwn  | 20
1 file changed, 20 insertions, 0 deletions
diff --git a/doc/bugs/Error:_no_text_was_copied_in_this_page_--_missing_page_dependencies.mdwn b/doc/bugs/Error:_no_text_was_copied_in_this_page_--_missing_page_dependencies.mdwn
index 356f9155a..4535cf35d 100644
--- a/doc/bugs/Error:_no_text_was_copied_in_this_page_--_missing_page_dependencies.mdwn
+++ b/doc/bugs/Error:_no_text_was_copied_in_this_page_--_missing_page_dependencies.mdwn
@@ -24,3 +24,23 @@ This error shows up only for *news.html*, but not in *news/2010-07-31* or for
the aggregation in *index.html* or its RSS and atom files.
--[[tschwinge]]
+
+> So the cutpaste plugin, in order to support pastes
+> that come before the corresponding cut on the page,
+> relies on the scan hook being called for the page
+> before it is preprocessed.
+>
+> In the case of an inline, this doesn't happen if
+> the page in question has not changed.
+>
+> Really, though, it's not just inline: it's potentially anything
+> that preprocesses content, and none of those things guarantees
+> that scan gets re-run on it first.
+>
+> I think cutpaste is going beyond the intended use of scan hooks,
+> which is to gather link information, not to do arbitrary data
+> collection. Requiring that scan be run repeatedly could be a lot
+> more work.
+>
+> Using `%pagestate` to store the cut content when scanning would be
+> one way to fix this bug. It would mean storing potentially big chunks
+> of page content in the indexdb. --[[Joey]]
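
As a rough illustration of the `%pagestate` approach described above, here is a minimal sketch of a cutpaste-style plugin that records cut text in `%pagestate` during the scan pass and looks it up at paste time. It assumes ikiwiki's standard plugin API (`hook`, `error`, `gettext`, `%pagestate`, and the `scan => 1` option on preprocess hooks); the plugin name, directive handling, and error message are illustrative, not necessarily what the actual fix in this commit looks like.

    #!/usr/bin/perl
    # Sketch only: store cut text in %pagestate so a paste directive can
    # find it even when only the scan pass (not preprocessing) has run,
    # or when the page was last scanned on an earlier refresh.
    package IkiWiki::Plugin::cutpaste_sketch;

    use warnings;
    use strict;
    use IkiWiki 3.00;

    sub import {
        # scan => 1 asks ikiwiki to also call this hook during the
        # preliminary scan pass, before any page is preprocessed.
        hook(type => "preprocess", id => "cut",
            call => \&preprocess_cut, scan => 1);
        hook(type => "preprocess", id => "paste",
            call => \&preprocess_paste);
    }

    sub preprocess_cut (@) {
        my %params=@_;

        error gettext("cut: id and text parameters are required")
            unless defined $params{id} && defined $params{text};

        # %pagestate is saved in the indexdb, so the cut text survives
        # refreshes where this page is not rescanned -- at the cost of
        # storing chunks of page content there, as noted above.
        $pagestate{$params{page}}{cutpaste_sketch}{$params{id}} =
            $params{text};

        return "";    # the cut directive itself renders as nothing
    }

    sub preprocess_paste (@) {
        my %params=@_;

        my $text = $pagestate{$params{page}}{cutpaste_sketch}{$params{id}};
        error gettext("no text was cut with this id on this page")
            unless defined $text;

        return $text;
    }

    1;

A complete implementation would also need to drop stale entries when a page is rebuilt (ikiwiki's `needsbuild` hook is the usual place for that), since `%pagestate` persists across runs; that persistence is what makes the paste-before-cut and inline cases work, and also what creates the indexdb size concern noted above.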