That one has bitten me for some time; here is a minimal test case.  There is
also an equivalent (I suppose) problem when using another plugin, but I hope
this one is enough to track the bug down.

    $ tar -xj < [bug-dep_order.tar.bz2](http://nic-nac-project.de/~schwinge/ikiwiki/bug-dep_order.tar.bz2)
    $ cd bug-dep_order/
    $ ./render_locally
    [...]
    $ find "$PWD".rendered/ -print0 | xargs -0 grep 'no text was copied'
    $ [no output]
    $ touch news/2010-07-31.mdwn 
    $ ./render_locally 
    refreshing wiki..
    scanning news/2010-07-31.mdwn
    building news/2010-07-31.mdwn
    building news.mdwn, which depends on news/2010-07-31
    building index.mdwn, which depends on news/2010-07-31
    done
    $ find "$PWD".rendered/ -print0 | xargs -0 grep 'no text was copied'
    /home/thomas/tmp/hurd-web/bug-dep_order.rendered/news.html:<p>[[!paste <span class="error">Error: no text was copied in this page</span>]]</p>
    /home/thomas/tmp/hurd-web/bug-dep_order.rendered/news.html:<p>[[!paste <span class="error">Error: no text was copied in this page</span>]]</p>

This error shows up only in *news.html*, but not in *news/2010-07-31* or in
the aggregation in *index.html* or its RSS and Atom files.

--[[tschwinge]]
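
(For context, the cutpaste directives involved look something like the
following; the id is made up for illustration.  A `\[[!paste]]` may appear
*before* the corresponding `\[[!cut]]` on the same page, which is what makes
the scan ordering discussed below matter.)

    \[[!paste id=blurb]]

    \[[!cut id=blurb text="""
    Text defined here, but rendered at the paste location above.
    """]]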

> So the cutpaste plugin, in order to support pastes
> that come before the corresponding cut in the page,
> relies on the scan hook being called for the page
> before it is preprocessed.
> 
> In the case of an inline, this doesn't happen if
> the page in question has not changed.
> 
> Really, though, it's not just inline: it's potentially anything
> that preprocesses content, and none of those things guarantees that
> scan gets re-run on the page first.
> 
> I think cutpaste is going beyond the intended use of scan hooks,
> which is to gather link information, not to do arbitrary data collection.
> Requiring that scan be run repeatedly could be a lot more work.
> 
> Using `%pagestate` to store the cut content when scanning would be
> one way to fix this bug (see the sketch below). It would mean storing
> potentially big chunks of page content in the indexdb. [[done]] --[[Joey]]
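
A minimal sketch of what that `%pagestate` approach could look like
(illustrative only, not necessarily the committed fix; it relies on the
documented plugin API: `hook()` with `scan => 1`, `%pagestate`, and
`error()`/`gettext()`):

    #!/usr/bin/perl
    # Sketch: cutpaste storing cut text in %pagestate instead of a
    # per-process hash, so it survives runs where scan is not re-done.
    package IkiWiki::Plugin::cutpaste;

    use warnings;
    use strict;
    use IkiWiki 3.00;

    sub import {
        # scan => 1 makes the cut directive also run at scan time
        hook(type => "preprocess", id => "cut",
            call => \&preprocess_cut, scan => 1);
        hook(type => "preprocess", id => "paste",
            call => \&preprocess_paste);
    }

    sub preprocess_cut (@) {
        my %params=@_;
        # Store the cut text in the indexdb, keyed by page and id, so
        # a later refresh can paste it even when scan is not re-run.
        $pagestate{$params{page}}{cutpaste}{$params{id}}=$params{text};
        return "" if defined wantarray;
    }

    sub preprocess_paste (@) {
        my %params=@_;
        if (! exists $pagestate{$params{page}}{cutpaste}{$params{id}}) {
            error gettext("no text was copied in this page");
        }
        return $pagestate{$params{page}}{cutpaste}{$params{id}};
    }

    1

With the cut text persisted in `%pagestate`, a paste no longer depends on
scan having run in the same process; the trade-off, as noted above, is the
extra page content kept in the indexdb.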