In order to support a translated basewiki and other underlays, we need
support for mo files in underlays.
The code did not allow this before, because if a mo file was in an
underlay, the po plugin might try to update it, and its pot, and write
to the underlay, which is guaranteed either to fail due to permissions
or to be undesirable.
To fix this, my approach is simply to detect when a mo or pot file that
is about to be updated is in an underlay, and skip updating it. This
seems to work well:
- If the mo is out of date in the underlay, it won't get updated; but
  that would probably be due to a problem in the underlay, or, more
  likely, the wiki is being rebuilt and so it *thinks* the mo is out of
  date when it really is not (and it would be a waste of time to
  rebuild it anyway).
- If a page from the basewiki is edited, it is saved to the srcdir,
  which causes generation of an updated mo and pot, also in the srcdir;
  the underlay stops being used for that page, and everything seems
  to work.
Note that I am not including an underlay search directory for pot files.
They *seem* to be unnecessary for the underlay, since the mo files
in there never need to be updated.
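
A minimal sketch of the skip; in_underlay, @underlay_dirs,
@mofiles_to_refresh, and refresh_mo are illustrative names under assumed
config, not the plugin's actual code:

    my @underlay_dirs = ('/usr/share/ikiwiki/basewiki');  # example config

    sub in_underlay {
        my $file = shift;
        # True if $file lives under any configured underlay directory.
        return grep { $file =~ m{^\Q$_\E/} } @underlay_dirs;
    }

    foreach my $mofile (@mofiles_to_refresh) {
        # Never write into an underlay; treat it as read-only.
        next if in_underlay($mofile);
        refresh_mo($mofile);
    }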

It seems to make sense to remove the check for there being slave
languages as part of this, since one might want a wiki that is entirely
in a language other than English.

Recursive calls make Perl whine about prototypes, and the prototype
wasn't adding any value.

These are for use by wikis where the primary language is not English.
On such a wiki, it makes sense to use an underlay as the source for
pages in the native language.

It exports gettext and other functions by default, which conflicts with
IkiWiki's exports.
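
One way to avoid the clash, assuming the module in question is
Locale::gettext (its name is in the collapsed subject line): import it
with an empty list and call it fully qualified. A sketch, not
necessarily the plugin's exact fix:

    # An empty import list stops Locale::gettext from exporting
    # gettext() and friends, which would clash with IkiWiki's own
    # exported gettext.
    use Locale::gettext ();

    # Fully qualified calls still work:
    my $translated = Locale::gettext::gettext("Discussion");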

Conflicts:
	debian/changelog

On various sites I have two IkiWiki instances running from the same
repository: one accessible via http and only accepting openid logins,
and one accessible via authenticated https and only accepting httpauth.
The https version should still pretty-print OpenIDs seen in git history,
even though it does not itself accept OpenID logins.

openiduser previously used a constructor that no longer works in 2.x.
However, all we actually want is the (undocumented) DisplayOfURL function
that is invoked by the display method, so try to use that.
(cherry picked from commit c3dd0ff5c7c10743107f203a5b456fdcd1b171df)
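
Since DisplayOfURL is undocumented, it is worth guarding the call; a
minimal sketch (the calling convention shown is an assumption, not the
plugin's exact code):

    use Net::OpenID::VerifiedIdentity;

    my $url = 'http://joey.example.com/';  # the OpenID to pretty-print

    # Fall back to showing the raw URL if the undocumented function
    # disappears or dies in some future release.
    my $display = eval { Net::OpenID::VerifiedIdentity::DisplayOfURL($url) };
    $display = $url if $@ || ! defined $display;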

Besides being wrong to do, this could lead to the wrong item
being expired, as follows: If B is added and at the same time
A is changed, then A's ctime may be set to the current time,
while B's is set to its creation time. Thus the new item, A,
is incorrectly removed as older.
(This interacted especially badly with the bug fixed by
90b4d079605b72bb50d1da41402d994960e10937.)

The aggregate state merge code neglected to merge changes to the md5
field of an item. Therefore, if an item's md5 changed after initial
aggregation, it would be updated, and rewritten, each time thereafter.
This was wasteful and indirectly led to some expire problems.

Conflicts:
	debian/changelog
	debian/control
Signed-off-by: intrigeri <intrigeri@boum.org>

Support several cases, including Mercurial's long user names on the
RecentChanges page, and URLs with spaces being handled by the 404 plugin.

Another benefit is that consistently using gettext("Discussion")
eliminates the need to translate one string.

Also, document the comment directive syntax.
Rationale: comments need to be user-editable so that they can be posted
via git commit etc.
The _comment directive is still supported, for back-compat.

See #530654

Setting up a new highlighter object is slightly expensive, since it
reads and parses the langfile each time. So cache them.
This also speeds up ext2langfile by letting it avoid checking for the
existence of a language file in some cases.
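
The cache is simple memoization keyed by langfile; a sketch, with
new_highlighter standing in for whatever constructor the highlight
binding provides:

    my %highlighters;

    sub gethighlighter {
        my $langfile = shift;
        # Constructing a highlighter re-reads and re-parses the
        # langfile, which is the slow part, so reuse objects.
        $highlighters{$langfile} ||= new_highlighter($langfile);
        return $highlighters{$langfile};
    }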

format: Provide a htmlizefallback hook that other plugins can use to
handle formats that are not suitable for general-purpose htmlize hooks.
highlight: Use the hook to allow formatting of any language/extension,
without it needing to be enabled for standalone source files.
highlight: If the highlight perl binding is not available, fall back
safely to a passthrough mode.
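
A sketch of how the fallback might be wired up, with can_highlight and
highlight_as_html as hypothetical helpers (the hook name is from the
message above; the calling convention shown is an assumption):

    hook(type => "htmlizefallback", id => "highlight",
        call => \&htmlizefallback);

    sub htmlizefallback {
        my ($format, $content) = @_;
        # Returning undef declines, letting format fall through to
        # its normal "unknown format" handling.
        return undef unless can_highlight($format);
        return highlight_as_html($format, $content);
    }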

* debian/control: Add a Suggests for libhighlight-perl, although
that package is not yet created by Debian's highlight source package.
(See #529869)

A directive starting with _ is likewise internal.

Also, sort the list of page types.

Signed-off-by: intrigeri <intrigeri@boum.org>

Conflicts:
	debian/changelog

We build an array of [ plugin name, long name ] pairs, where the long
name is an optional argument to hook(). So a syntax plugin could define
a long "friendly" name, such as "Markdown" instead of mdwn, and we would
then pass this array to formbuilder to populate the drop-down on the
edit page.
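
A sketch of the data flow, assuming a %pagetypes hash mapping type ids
to optional long names ($form and the names here are illustrative):

    # Build [ value, label ] pairs, falling back to the id when a
    # plugin registered no friendly name.
    my @types = map {
        [ $_, defined $pagetypes{$_} ? $pagetypes{$_} : $_ ]
    } sort keys %pagetypes;

    # CGI::FormBuilder accepts [ value, label ] pairs directly.
    $form->field(name => "type", options => \@types);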

Not needed since it returns a list of pages, not a fail/success object.

This is sorta an optimisation, and sorta a bug fix. In one
test case I have available, it can speed a page build up from 3
minutes to 3 seconds.
The root of the problem is that $links{$page} contains arrays of
links, rather than hashes of links. And when a link is found,
it is just pushed onto the array, without checking for dups.
Now, the array is emptied before scanning a page, so there
should not be a lot of opportunity for lots of duplicate links
to pile up in it. But in some cases they can, and if there
are hundreds of duplicate links in the array, then scanning it
for matching links, as match_link and some other code does,
becomes much more expensive than it needs to be.
Perhaps the real right fix would be to change the data structure
to a hash. But the list of links is never accessed like that;
you always want to iterate through it.
I also looked at deduping the list in saveindex, but that does
a lot of unnecessary work, and doesn't completely solve the problem.
So, finally, I decided to add an add_link function that handles deduping,
and make ikiwiki-transition remove the old dup links.
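
The deduping wrapper can be as small as this sketch (matching the
description above, though the real function may differ in detail):

    our %links;

    sub add_link {
        my ($page, $link) = @_;
        # A linear scan is fine here: the per-page list stays short
        # once duplicates are no longer allowed to pile up.
        push @{$links{$page}}, $link
            unless grep { $_ eq $link } @{$links{$page}};
    }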

When finding the pageurl, it was calling bestlink unnecessarily.
Since at this point $page contains the full name of the page that
is being inlined, there is no need to do bestlink's scan
for it.
This is only a minor optimisation, since bestlink is only called
once per displayed, inlined page.
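
In effect the change just drops the bestlink call; a sketch, assuming
urlto's usual ($page, $destpage) signature:

    # Before:
    #   my $pageurl = urlto(bestlink($params{page}, $page), $params{destpage});
    # After: $page is already the full name of the inlined page, so
    # link to it directly and skip bestlink's scan.
    my $pageurl = urlto($page, $params{destpage});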

This reverts commit 2f96c49bd1826ecb213ae025ad456a714aa04863.
I forgot about internal pages. We don't want * matching them!
I left the optimisation in pagecount, where it used to live.
Internal pages probably don't matter when they're just being
counted.