links. Since this needs the just-released XML::Feed 0.3, as well as a not-yet-released XML::RSS, it will fall back to the old method if no xml:base info is available.
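
A hedged sketch of the resolution this enables; URI's new_abs does the
actual work (the base and link values here are illustrative):

    use URI;
    # Resolve a feed entry's relative link against the entry's xml:base.
    my $base = "http://example.com/blog/";    # would come from xml:base
    my $link = "../posts/hello.html";         # relative link in the entry
    my $abs  = URI->new_abs($link, $base);
    print "$abs\n";    # http://example.com/posts/hello.html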
The machine-parseable date needs to include a timezone.
Also, simplified the interface for date display.
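
As a rough illustration, a machine-parseable date with an explicit
timezone offset can be produced with POSIX strftime (assuming the
platform's strftime supports %z):

    use POSIX qw(strftime);
    # ISO 8601-style timestamp including the numeric timezone offset.
    print strftime("%Y-%m-%dT%H:%M:%S%z", localtime(time)), "\n";
    # e.g. 2008-07-15T09:30:00+0200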
in the future.
newpagefile.
Note that newpagefile is not used here (or in recentchanges) because
the internal-use pages they generate are transient and unlikely to
benefit from each being put in their own subdir.
I saw this in the wild, apparently a page was not present on disk, but was
in the aggregate db, and not marked as expired either. Not sure how that
happened, but such pages should get marked as expired since they have an
effectively zero ctime.
The expiry code does need to make sure to sort in ctime order, even if
expiring by count, so it expires the right ones.
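
A minimal sketch of that ordering requirement, using an illustrative
guid list rather than the plugin's real data structures:

    # Newest entries have the largest ctime.
    my @guids = (
        { page => "a", ctime => 100 },
        { page => "b", ctime =>  50 },
        { page => "c", ctime => 200 },
    );
    my $expirecount = 2;    # keep at most this many
    # Sort newest-first so everything beyond the count is oldest.
    my @sorted = sort { $b->{ctime} <=> $a->{ctime} } @guids;
    $_->{expired} = 1 foreach @sorted[$expirecount .. $#sorted];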
elements.
needs to wait for the pages to be rendered though)
too many plugins.. brain exploding..
They were a bit confusing, since they did not actually set the default, and
example values are sufficient.
This handles deleting empty directories too.
Conflicts:
IkiWiki/Plugin/aggregate.pm
Usage:
1. Update all pagespecs that use aggregated pages to use internal()
2. ikiwiki-transition aggregateinternal $srcdir $htmlext
   (where $srcdir and $htmlext are the srcdir and htmlext options in
   your .setup file; see the example below)
3. Add aggregateinternal to your .setup file
4. Rebuild the wiki
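
For example, with a hypothetical srcdir of /home/user/wiki and an
htmlext of html, step 2 would be:

    ikiwiki-transition aggregateinternal /home/user/wiki html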
This addresses <http://ikiwiki.info/todo/aggregate_to_internal_pages/>
in a simple way. With this approach, a flag day is required, on which all
users of aggregated pages start to inline them using the internal() pagespec;
after that, the aggregateinternal option can safely be switched on in the
setup file (and the old aggregated pages can be deleted by hand).
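
For instance, an inline of aggregated posts would switch along these
lines (the page names are hypothetical):

    before: [[!inline pages="news/feeds/*"]]
    after:  [[!inline pages="internal(news/feeds/*)"]]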
Allows specifying the template file that is used to
create the HTML pages.
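
Presumably driven from the aggregate directive along these lines (feed
details hypothetical; aggregatepost is the stock template name):

    [[!aggregate name="example feed" feedurl="http://example.com/feed.rss"
       template="aggregatepost"]]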
explicitly pass 0 (FB_DEFAULT) as the second parameter. Apparently Perl 5.8 needs this to avoid crashing on malformed UTF-8, despite its docs saying it is the default.
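
A sketch of the call in question, using Encode's decode_utf8 (0 is
Encode::FB_DEFAULT, which substitutes malformed bytes instead of
croaking):

    use Encode qw(decode_utf8);
    my $raw = "caf\xc3\xa9 \xff";    # valid UTF-8, then a malformed byte
    # Passing 0 explicitly keeps perl 5.8 from crashing on the bad byte.
    my $text = decode_utf8($raw, 0);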
stuck on shared hosting without cron. (Sheesh.) Enabled via the `aggregate_webtrigger` configuration option.
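
If I'm remembering the plugin's documentation correctly, the trigger is
just a CGI request of this shape (hostname hypothetical):

    http://example.com/ikiwiki.cgi?do=aggregate_webtrigger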
lacking one.
It is used in several subs, not all of which load it on demand, so this seems simpler.
Now aggregation will not lock the wiki. Any changes made during aggregation are
merged in with the changed state accumulated while aggregating. A separate
lock file prevents multiple concurrent aggregators. Garbage collection
of orphaned guids is much improved. loadstate() is only called once
per process, so tricky support for reloading wiki state is not needed.
(Tested fairly thoroughly.)
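
A minimal sketch of the separate-lockfile idea, keeping concurrent
aggregators out without taking the main wiki lock (the lock path is
illustrative):

    use Fcntl qw(:flock);
    open(my $lock, '>', '.ikiwiki/aggregatelock') or die "open: $!";
    # Non-blocking attempt: if another aggregator holds the lock, bail.
    if (! flock($lock, LOCK_EX | LOCK_NB)) {
        exit 0;
    }
    # ... fetch feeds and merge state here ...
    close($lock);    # lock released; wiki edits were never blocked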
approach.
the aggregating page to be rebuilt. Fix this.
misc fixes
removed
gettext choked on a Unicode apostrophe in the aggregate plugin, which
appeared in a new error message in commit
4f872b563300e4a277cac3d7ea2a999bcf75d1ff. Replace it with an ASCII
apostrophe.
the code, since that process can change internal state as needed, and
it will automatically be cleaned up for the parent process, which proceeds
to render the changes.
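
A minimal sketch of that fork-and-discard pattern; both subs are
stand-ins for the real work:

    sub do_aggregation { print "child: aggregating feeds\n" }     # stand-in
    sub render_changes { print "parent: rendering changes\n" }    # stand-in

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        do_aggregation();    # may mutate internal state freely
        exit 0;              # child state is discarded here
    }
    waitpid($pid, 0);        # wait for aggregation to finish
    render_changes();        # parent proceeds with clean state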
directives.
* aggregate: Yet another state saving fix (sigh).
* aggregate: Add hack to support feeds with invalidly escaped html entities.
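
One common shape for that last hack (a generic sketch, not necessarily
the plugin's exact approach): neutralize ampersands that don't begin a
well-formed entity before the feed reaches the XML parser.

    # Rewrite bare "&" that don't start a recognizable entity.
    my $content = 'Fish & Chips &amp; more &#38; still';
    $content =~ s/&(?!(?:\w+|#\d+|#x[0-9a-fA-F]+);)/&amp;/g;
    print "$content\n";    # Fish &amp; Chips &amp; more &#38; still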
the needsbuild hook. This resulted in feeds not being removed when pages
were updated, and probably other bugs.
* aggregate: Avoid uninitialised value warning when removing a feed that
has an expired guid.
links required meta to be run during scan, which complicated its data
storage, since it had to clear data stored during the scan pass to avoid
duplicating it during the normal preprocessing pass.
* If you used "meta link", you should switch to either "meta openid" (for
openid delegations), or tags (for internal, invisible links). I assume
that nobody really used "meta link" for external, non-openid links, since
the htmlscrubber ate those. (Tell me differently and I'll consider bringing
back that support.)
* meta: Improved data storage.
* meta: Drop the hackish filter hook that was used to clear
stored data before preprocessing; this hack was ugly and broken (cf.
liw's disappearing openids).
* aggregate: Convert filter hook to a needsbuild hook.
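
For reference, a needsbuild hook in ikiwiki's plugin API looks roughly
like this; the plugin name and state key are hypothetical, while
hook(), %pagestate and %pagesources are the real interfaces:

    package IkiWiki::Plugin::example;    # hypothetical plugin

    use warnings;
    use strict;
    use IkiWiki 2.00;

    sub import {
        hook(type => "needsbuild", id => "example", call => \&needsbuild);
    }

    # Receives the list of source files about to be rebuilt; drop any
    # stored state for those pages so it can be regenerated cleanly.
    sub needsbuild {
        my $needsbuild = shift;
        foreach my $page (keys %pagestate) {
            if (exists $pagestate{$page}{example}
                && grep { $_ eq $pagesources{$page} } @$needsbuild) {
                delete $pagestate{$page}{example};
            }
        }
    }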
page name to be expired and reused for several distinct guids. When this
happened, the expiry code counted each past guid that had used that page
name as a currently existing page, and thus expired too many pages.
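
The fix amounts to counting each page name only once when deciding how
many pages currently exist; a sketch with illustrative data:

    my @guids = (
        { page => "post_1" },
        { page => "post_1" },    # older guid that reused the name
        { page => "post_2" },
    );
    my (%seen, $count);
    foreach my $guid (@guids) {
        next unless defined $guid->{page};
        $count++ unless $seen{$guid->{page}}++;
    }
    print "existing pages: $count\n";    # 2, not 3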