Instead of logging "bad file name %s" and attempting to call the
(string) filename as a subroutine, actually do the intended
sprintf operation.
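A minimal sketch of the bug and the fix, assuming an error() helper like IkiWiki's whose optional second argument is a cleanup coderef ($file here is illustrative):

    # Buggy: $file is taken as the "cleanup" argument, so error()
    # ends up trying to call the filename string as a subroutine.
    error("bad file name %s", $file);

    # Fixed: format the message first, then pass a single string.
    error(sprintf("bad file name %s", $file));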
|
The instance in cgierror() is a potential cross-site scripting attack,
because an attacker could conceivably cause some module to raise an
exception that includes attacker-supplied HTML in its message, for
example via a crafted filename. (OVE-20160505-0012)
The instances in preprocess() are just correctness fixes, not a
cross-site scripting attack, because an attacker could equally well
write the desired HTML themselves; the sanitize hook is what
protects us from cross-site scripting here.
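A minimal sketch of the kind of escaping that closes the cgierror() hole, assuming HTML::Entities is available; this is illustrative, not the exact patch:

    use HTML::Entities qw(encode_entities);

    sub cgierror {
        my $message = shift;
        # Escape any markup in the (possibly attacker-influenced)
        # message before interpolating it into the error page.
        my $safe = encode_entities($message);
        print "Content-type: text/html\n\n";
        print "<p class=\"error\">Error: $safe</p>\n";
    }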
|
This is a corner case spotted while fixing UTF-8 syslogging.
|
Sys::Syslog is not UTF-8-literate.
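One common workaround (an assumption here, not necessarily the change that was made) is to hand Sys::Syslog UTF-8 bytes rather than wide-character strings:

    use Sys::Syslog qw(openlog syslog closelog);
    use Encode qw(encode_utf8);

    my $message = "saved page \x{2603}";   # example wide-character string

    openlog("ikiwiki", "pid", "user");
    # syslog() works on bytes, so encode wide characters to UTF-8
    # octets before logging.
    syslog("info", "%s", encode_utf8($message));
    closelog();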
|
It doesn't do anything yet.
|
This avoids nasty surprises on upgrade if a site is using httpauth,
or passwordauth with an account_creation_password, and relying on
only a select group of users being able to edit the site. We can revisit
this for ikiwiki 4.
|
ikiwiki-hosting needs to do this.
|
IkiWiki::cgiurl() currently produces non-deterministic output, because
the keys of the params hash can come out in a different order each time.
Sorting the keys of params before crafting the string should make the
output deterministic.
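A minimal illustration of the idea (not IkiWiki::cgiurl()'s actual internals, and URI-escaping is omitted for brevity): build the query string from sorted keys so the same parameters always produce the same URL.

    sub build_query {
        my %params = @_;
        # Sorted keys make the output independent of hash ordering.
        return join('&', map { "$_=$params{$_}" } sort keys %params);
    }

    # build_query(do => "edit", page => "index") is always
    # "do=edit&page=index", never "page=index&do=edit".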
|
banned_users
This was needed due to emailauth, but I've also wrapped all IP address
exposure in cloak(), although the function doesn't yet cloak IP addresses.
(One IP address I didn't cloak is the one that appears on the password
reset email template. That is expected to be the user's own IP address,
so ok to show it to them.)
Thanks to smcv for the pointer to
http://xmlns.com/foaf/spec/#term_mbox_sha1sum
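A hypothetical sketch of what a cloak() along the FOAF mbox_sha1sum lines could look like; the real function's behaviour (including its current pass-through of IP addresses) may differ.

    use Digest::SHA qw(sha1_hex);

    sub cloak {
        my $uri = shift;
        # Hash mailto: URIs the way FOAF's mbox_sha1sum does; anything
        # else (including IP addresses, for now) passes through.
        return sha1_hex($uri) if $uri =~ /^mailto:/;
        return $uri;
    }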
|
GNU system and that file exists, or GMT otherwise
|
This makes the documentation read more sensibly, and matches how we
handle underlaydirs and underlaydir.
|
Mobile browsers typically assume that arbitrary web pages are
designed for a "desktop-sized" browser window (around 1000px)
and display that layout, zoomed out, in order to avoid breaking
naive designs that assume nobody will ever look at a website on
a phone or something. People who are actually doing "responsive
design" need to opt-in to mobile browsers rendering it at a
more normal size.
|
According to caniuse.com, a significant fraction of Web users are
still using Internet Explorer versions that do not support HTML5
sectioning elements. However, claiming we're XHTML 1.0 Strict
means we can't use features invented in the last 12 years, even if
they degrade gracefully in older browsers (like the role and placeholder
attributes).
This means our output is no longer valid according to any particular
DTD. Real browsers and other non-validator user-agents have never
cared about DTD compliance anyway, so I don't think this is a real loss.
|
It appears that both the open-source and proprietary rulesets for
ModSecurity default to blacklisting requests that say they are
from libwww-perl, presumably because some script kiddies use libwww-perl
and are too inept to set a User-Agent that is "too big to blacklist",
like Chrome or the iPhone browser or something. This seems doomed to
failure but whatever.
|
This solves several people's issues with the CGI trying to be
too clever when IkiWiki is placed behind a reverse-proxy.
|
In the scan phase, it's too early to match pagespecs or sort pages;
in the render phase, both of those are OK.
It would be possible to add phases later, renumbering them if necessary
to maintain numerical order.
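A sketch of what numbered phases might look like; the constant names and the check are assumptions, not necessarily the real IkiWiki code.

    use constant {
        PHASE_SCAN   => 0,
        PHASE_RENDER => 1,
    };

    our $phase = PHASE_SCAN;

    sub assert_render_phase {
        # Pagespec matching and page sorting are only safe once the
        # render phase has started.
        die "pagespec matched during scan phase"
            if $phase < PHASE_RENDER;
    }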
Also add a regression test for templatebody.
|
Whenever I look at dependency calculation, it takes me a while to get my
head round the concept of influences. If what I've written here is
accurate, maybe the next person to look at this (or my future self)
will need less of a run-up.
|
pagespec_match_list() makes the current page depend on the pagespec
being matched, so if you use [[!trailoptions sort="..."]] to force
a sort order, the trail ends up depending on internal(*) and is
rebuilt whenever anything changes. Add a new sort_pages() and use that
instead.
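Roughly the difference, with page and argument names that are assumptions rather than the exact IkiWiki API:

    # Problematic: matching via pagespec_match_list() records a
    # dependency for the trail page on the pagespec being matched.
    my @via_match = pagespec_match_list($trailpage, "internal(*)",
        sort => $sortspec);

    # Sketch of the fix: sort the already-collected member pages
    # directly, without recording any dependency.
    my @via_sort = sort_pages($sortspec, @members);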
|
directive infinite loop guard.
|
agent string for outbound HTTP requests
Package: ikiwiki
Version: 3.20140125
Severity: wishlist
By default, the LWP::UserAgent used by IkiWiki to perform outbound HTTP
requests sends the string "libwww-perl/<version number>" as the User-Agent
header in HTTP requests. Some blogging platforms have blacklisted this
user agent and won't serve any content to clients using it. With the
IkiWiki configuration option "useragent" it is now possible to define
a custom string that is used as the value of the User-Agent header.
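A sketch of how such an option can be applied when the user agent is constructed, assuming the wiki's %config hash is in scope; the fallback string and URL are just examples.

    use LWP::UserAgent;

    # Use the configured string if set, otherwise keep a default.
    my $ua = LWP::UserAgent->new(
        agent => $config{useragent} // "ikiwiki/3.20140125",
    );
    my $response = $ua->get("http://example.com/feed");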
querying git to find the files that were changed, rather than looking at the work tree. Not enabled by default as it can break some setups where not all files get committed to git.
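A rough sketch of the idea, with revision variables that are assumptions: ask git which files changed between two commits instead of walking the working tree.

    # $oldrev and $newrev are assumed to be known commit ids.
    my @changed = `git diff --name-only $oldrev $newrev`;
    chomp @changed;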
I have benchmarked the pagename() call this avoids at up to 2 seconds
for a loadindex in a large wiki. The total loadindex for that wiki was
6.46s, so this is a significant improvement.
Even on a smaller site, this reduces the refresh time from 1.69 to 1.52
seconds.
The only breakage risk here is that pagename() can change the page name
it calculates due to setup changes. But in the case of a setup change, the
whole site is rebuilt. So the cached page name is not used in that
case.
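A sketch of the caching idea (the index field and variable names are assumptions): trust the page name recorded in the index on load, and only recompute it when the whole site is being rebuilt.

    # On loadindex: reuse the stored page name unless the whole site
    # is being rebuilt (setup changes force a rebuild anyway).
    my $page = (!$config{rebuild} && defined $index{$src}{page})
        ? $index{$src}{page}
        : pagename($src);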
|
used instead. Can speed up refreshes by nearly 50% in some circumstances.
I *think* this is ok, at least it results in close to the same index being
saved as before. The difference is that plugins that have a pagestate of {}
have that recorded this way, while with the tight loop, the key for the
plugin is not copied in that case. I cannot see how this could matter.
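A sketch of the two approaches (variable names are assumptions, not the exact loadindex code):

    # Tight loop: copies key by key, so a plugin whose pagestate is
    # an empty hash leaves no entry behind.
    foreach my $id (keys %{$state}) {
        foreach my $key (keys %{$state->{$id}}) {
            $pagestate{$page}{$id}{$key} = $state->{$id}{$key};
        }
    }

    # Direct assignment: simpler and faster; it also records an empty
    # {} for such plugins, which should not matter.
    $pagestate{$page} = $state;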
|
of spammers.
|
|
|
|
This allows e.g. the meta command to be used to introduce DublinCore
metadata.
|
Try to avoid a situation in which so many ikiwiki cgi wrapper programs are
running, all waiting on some long-running thing like a site rebuild, that
it prevents the web server from doing anything else. The current approach
only avoids this problem for GET requests; if multiple cgi wrappers run GETs on a
site at the same time, one will display a "please wait" page for a
configurable number of seconds, which then redirects to retry. To enable
this protection, set cgi_overload_delay to the number of seconds to wait.
This is not enabled by default.
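For example, in a Perl-style setup file (the ten-second value is arbitrary):

    use IkiWiki::Setup::Standard {
        # ... other settings ...
        cgi_overload_delay => 10,
    };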