I'm trying to set up a [planet of my users' blogs](http://help.schmonz.com/planet/). I've enabled the aggregate, meta, and tag plugins (but not htmltidy, that thing has a gajillion dependencies). `aggregateinternal` is 1. The cron job is running and I've also enabled the webtrigger. My usage is like so:

    \[[!inline pages="internal(planet/*) show=0"]]

    \[[!aggregate
    name="Amitai's blog"
    url="http://www.schmonz.com/"
    dir="planet/schmonz-blog"
    feedurl="http://www.schmonz.com/atom/"
    expirecount="2"
    tag="schmonz"
    ]]

    \[[!aggregate
    name="Amitai's photos"
    url="http://photos.schmonz.com/"
    dir="planet/schmonz-photos"
    feedurl="http://photos.schmonz.com/main.php?g2_view=rss.SimpleRender&g2_itemId=7"
    expirecount="2"
    tag="schmonz"
    ]]
(and a few more `aggregate` directives like these)
Two things aren't working as I'd expect:
1. `expirecount` doesn't take effect on the first run, but on the second. (This is minor, just a bit confusing at first.)
2. Where are the article bodies for e.g. David's and Nathan's blogs? The bodies aren't showing up in the `._aggregated` files for those feeds, but the bodies for my own blog do, which explains the planet problem, but I don't understand the underlying aggregation problem. (Those feeds include article bodies, and show up normally in my usual feed reader rss2email.) How can I debug this further? --[[schmonz]]
> I only looked at David's, but its rss feed is not escaping the html
> inside the rss `description` tags, which is illegal for rss 2.0. These
> unknown tags then get ignored, including their content, and all that's
> left is whitespace. Escaping the html to `&lt;` and `&gt;` fixes the
> problem. You can see the feed validator complain about it here:
> <http://feedvalidator.org/check.cgi?url=http%3A%2F%2Fwww.davidj.org%2Frss.xml>
>
> It's sorta unfortunate that [[!cpan XML::Feed]] doesn't just assume the
> un-escaped html is part of the description field. Probably other feed
> parsers are more lenient. --[[Joey]]
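
(For reference, a minimal sketch of the escaping being described, using HTML::Entities; the sample markup below is made up for illustration:)

    use HTML::Entities;

    my $body = '<p>Hello, <em>world</em></p>';
    # RSS 2.0 expects markup inside <description> to be escaped; left raw,
    # parsers treat the tags as unknown XML elements and drop their content.
    my $escaped = encode_entities($body, '<>&');
    # $escaped is now: &lt;p&gt;Hello, &lt;em&gt;world&lt;/em&gt;&lt;/p&gt;
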
>> Thanks for the quick response (and the `expirecount` fix); I've forwarded it to David so he can fix his feed. Nathan's Atom feed validates -- it's generated by the same CMS as mine -- so I'm still at a loss on that one. --[[schmonz]]
>>> Nathan's feed contains only summary elements, with no content elements.
>>> This is legal according to the Atom spec, so I've fixed ikiwiki to use
>>> the summary if no content is available. --[[Joey]]
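
(A rough sketch of that summary fallback, using the XML::Feed API that also appears in the test session further down; the feed URL and surrounding loop are illustrative, not the actual ikiwiki change:)

    use XML::Feed;
    use URI;

    my $feed = XML::Feed->parse(URI->new('http://example.com/atom.xml'))
        or die XML::Feed->errstr;
    for my $entry ($feed->entries) {
        # An Atom entry may legally carry only <summary>, so fall back to it
        # when there is no usable <content> body.
        my $content = $entry->content;
        my $summary = $entry->summary;
        my $body = ($content && defined $content->body && length $content->body)
            ? $content->body
            : (($summary && defined $summary->body) ? $summary->body : '');
        print $entry->title, ": ", length($body), " bytes of body\n";
    }
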
>>>> After applying your diffs, blowing away my cached aggregated stuff, and running the aggregate cron job by hand, the resulting planet still doesn't have Nathan's summaries... and the two posts from each feed that aren't being expired aren't the two newest ones (not sure what the pattern is there). Have I done something wrong? --[[schmonz]]
>>>>> I think that both issues are now fixed. Thanks for testing.
>>>>> --[[Joey]]
>>>>>> I can confirm, they're fixed on my end. --[[schmonz]]
New bug: new posts aren't getting displayed (or cached for aggregation). After fixing his feed, David posted a new item today, and the aggregator is convinced there's nothing to do, whether run by cronjob or webtrigger. I verified that it wasn't another problem with his feed by adding another of my ikiwiki's feeds to the planet, running the aggregator, posting a new item, and running the aggregator again: no new item. --[[schmonz]]
> Even if you start it more frequently, aggregation will only occur every
> `updateinterval` minutes (default 15), maximum. Does this explain what
> you're seeing? --[[Joey]]
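
(Illustrating the timing only: something along the lines of the check below decides whether a feed is due; the field names are assumed here, not copied from aggregate.pm:)

    # A feed is only re-fetched once its updateinterval (minutes, default 15)
    # has elapsed since the last check, no matter how often the cron job or
    # webtrigger runs the aggregator.
    sub feed_needs_update {
        my $feed = shift;
        my $interval = ($feed->{updateinterval} || 15) * 60;
        return time - ($feed->{lastupdate} || 0) >= $interval;
    }
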
>> Crap, right, and my test update has since made it into the planet. His post still hasn't. So it must be something with David's feed again? A quick test with XML::Feed looks like it's parsing just fine: --[[schmonz]]

    $ perl
    use XML::Feed;
    my $feed = XML::Feed->parse(URI->new('http://www.davidj.org/rss.xml')) or die XML::Feed->errstr;
    print $feed->title, "\n";
    for my $entry ($feed->entries) {
        print $entry->title, ": ", $entry->issued, "\n";
    }
    ^D
    davidj.org
    Amway Stories - Refrigerator Pictures: 2008-09-19T00:12:27
    Amway Stories - Coffee: 2008-09-13T10:08:17
    Google Alphabet Update: 2008-09-11T22:55:37
    Writing for writing's sake: 2008-09-09T23:39:05
    Google Chrome: 2008-09-02T23:12:26
    Mister Casual: 2008-07-25T09:01:17
    Parental Conversations: 2008-07-24T10:44:44
    Place Of George Orwell: 2008-06-03T22:11:07
    The Raw Beauty Of A National Duolian: 2008-05-31T12:41:06
> I had no problem getting the "Refrigerator Pictures" post to aggregate
> here, though without a copy of the old feed I can't be 100% sure I've
> reproduced your ikiwiki's state. --[[Joey]]
>> Okay, I blew away the cached entries and aggregator state files and reran the aggregator and all appears well again. If the problem recurs I'll be sure to post here. :-) --[[schmonz]]
>>> On the off chance that you retained a copy of the old state, I'd not
>>> mind having a copy to investigate. --[[Joey]]
>>>> Didn't think of that, will keep a copy if there's a next time. -- [[schmonz]]
-----
In a corporate environment where feeds are generally behind
authentication, I need to prime the aggregator's `LWP::UserAgent`
with some cookies. What I've done is write a custom plugin to populate
`$config{cookies}` with an `HTTP::Cookies` object, plus this diff:

    --- /var/tmp/pkg/lib/perl5/vendor_perl/5.10.0/IkiWiki/Plugin/aggregate.pm  2010-06-24 13:03:33.000000000 -0400
    +++ aggregate.pm    2010-06-24 13:04:09.000000000 -0400
    @@ -488,7 +488,11 @@
             }
             $feed->{feedurl}=pop @urls;
         }
    -    my $res=URI::Fetch->fetch($feed->{feedurl});
    +    my $res=URI::Fetch->fetch($feed->{feedurl},
    +        UserAgent => LWP::UserAgent->new(
    +            cookie_jar => $config{cookies},
    +        ),
    +    );
         if (! $res) {
             $feed->{message}=URI::Fetch->errstr;
             $feed->{error}=1;
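
(For completeness, a minimal sketch of the kind of local plugin mentioned above that fills in `$config{cookies}`; the package name, hook, and cookie file path are placeholders rather than the actual plugin:)

    #!/usr/bin/perl
    # Hypothetical companion plugin: primes $config{cookies} with an
    # HTTP::Cookies jar so the patched aggregate.pm can send cookies.
    package IkiWiki::Plugin::primecookies;

    use warnings;
    use strict;
    use IkiWiki 3.00;
    use HTTP::Cookies;

    sub import {
        hook(type => "checkconfig", id => "primecookies", call => \&checkconfig);
    }

    sub checkconfig {
        # Load a Netscape-format cookie file; priming the jar by visiting the
        # "magic URLs" described further down would happen here instead.
        $config{cookies} ||= HTTP::Cookies->new(
            file => "$ENV{HOME}/.cookies.txt",
            autosave => 0,
        );
    }

    1
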
It works, but I have to remember to apply the diff whenever I update
ikiwiki. Can you provide a more elegant means of allowing cookies and/or
the user agent to be programmatically manipulated? --[[schmonz]]
> Ping -- is the above patch perhaps acceptable (or near-acceptable)? -- [[schmonz]]
>> Pong.. I'd be happier with a more 100% solution that let cookies be used
>> w/o needing to write a custom plugin to do it. --[[Joey]]
>>> According to LWP::UserAgent, for the common case, a complete
>>> and valid configuration for `$config{cookies}` would be `{ file =>
>>> "$ENV{HOME}/.cookies.txt" }`. In the more common case of not needing
>>> to prime one's cookies, `cookie_jar` can be `undef` (that's the
>>> default). In my less common case, the cookies are generated by
>>> visiting a couple magic URLs, which would be trivial to turn into
>>> config options, except that these particular URLs rely on SPNEGO
>>> and so LWP::Authen::Negotiate has to be loaded. So I think adding
>>> `$config{cookies}` (and using it in the aggregate plugin) should
>>> be safe, might help people in typical cases, and won't prevent
>>> further enhancements for less typical cases. --[[schmonz]]
>>>> Ok, done. Called it cookiejar. --[[Joey]]
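
(With that option in place, an ikiwiki setup file would carry something along these lines; the path is just the example file from above, not a default:)

    # in the wiki's .setup file (Perl syntax)
    cookiejar => { file => "$ENV{HOME}/.cookies.txt" },
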