| Commit message | Author | Age |
|
regexp blowup.
Complex regular subexpression recursion limit (32766) exceeded at
/home/joey/src/ikiwiki/IkiWiki.pm line 1532.
This doesn't fix the blowup potential itself, it just fixes the typo. :)
A sample page that causes the blowup is attached below for future
reference. The first directive is not terminated. Contributing are the
additional quotes around the following directives, which mean that they can
each be processed as a parameter to the first directive, or as an
individual directive. In resolving this ambiguity, the regexp blows up.
Happily, perl contains the explosion, so I don't think there is an exploit
here.
"[[!shortcut name=wiktionary url=\"https://secure.wikimedia.org/wiktionary/en/"
"[[!shortcut name=debss url=\"http://snapshot.debian.net/package/%s\"]]"
"[[!shortcut name=debwiki url=\"http://wiki.debian.org/%s\"]]"
"[[!shortcut name=fdobug url=\"https://bugs.freedesktop.org/show_bug.cgi?id=%s\" desc=\"freedesktop.org bug #%s\"]]"
"[[!shortcut name=fdolist url=\"http://lists.freedesktop.org/mailman/listinfo/%s\" desc=\"%s@lists.freedesktop.org\"]]"
"[[!shortcut name=cpanrt url=\"https://rt.cpan.org/Ticket/Display.html?id=%s\" desc=\"CPAN RT#%s\"]]"
"[[!shortcut name=novellbug url=\"https://bugzilla.novell.com/show_bug.cgi?id=%s\" desc=\"bug %s\"]]"
"[[!shortcut name=fdolist url=\"http://lists.freedesktop.org/mailman/listinfo/%s\" desc=\"%s@lists.freedesktop.org\"]]"
"[[!shortcut name=gnomebug url=\"http://bugzilla.gnome.org/show_bug.cgi?id=%s\" desc=\"GNOME bug #%s\"]]"
"[[!shortcut name=linuxbug url=\"http://bugzilla.kernel.org/show_bug.cgi?id=%s\" desc=\"Linux bug #%s\"]]"
"[[!shortcut name=gmane url=\"http://dir.gmane.org/gmane.%s\" desc=\"gmane.%s\"]]"
"[[!shortcut name=gmanemsg url=\"http://mid.gmane.org/%s\"]]"
"[[!shortcut name=cpan url=\"http://search.cpan.org/search?mode=dist&query=%s\"]]"
"[[!shortcut name=ctan url=\"http://tug.ctan.org/cgi-bin/ctanPackageInformation.py?id=%s\"]]"
"[[!shortcut name=hoogle url=\"http://haskell.org/hoogle/?q=%s\"]]"
"[[!shortcut name=iki url=\"http://ikiwiki.info/%S/\"]]"
"[[!shortcut name=ljuser url=\"http://%s.livejournal.com/\"]]"
"[[!shortcut name=rfc url=\"http://www.ietf.org/rfc/rfc%s.txt\" desc=\"RFC %s\"]]"
"[[!shortcut name=c2 url=\"http://c2.com/cgi/wiki?%s\"]]"
"[[!shortcut name=meatballwiki url=\"http://www.usemod.com/cgi-bin/mb.pl?%s\"]]"
"[[!shortcut name=emacswiki url=\"http://www.emacswiki.org/cgi-bin/wiki/%s\"]]"
"[[!shortcut name=haskellwiki url=\"http://haskell.org/haskellwiki/%s\"]]"
"[[!shortcut name=dict url=\"http://www.dict.org/bin/Dict?Form=Dict1&Strategy=*&Database=*&Query=%s\"]]"
"[[!shortcut name=imdb url=\"http://imdb.com/find?q=%s\"]]"
"[[!shortcut name=gpg url=\"http://pgpkeys.mit.edu:11371/pks/lookup?op=vindex&exact=on&search=0x%s\"]]"
"[[!shortcut name=perldoc url=\"http://perldoc.perl.org/search.html?q=%s\"]]"
"[[!shortcut name=whois url=\"http://reports.internic.net/cgi/whois?whois_nic=%s&type=domain\"]]"
"[[!shortcut name=cve url=\"http://cve.mitre.org/cgi-bin/cvename.cgi?name=%s\"]]"
"[[!shortcut name=cia url=\"http://cia.vc/stats/project/%s\"]]"
"[[!shortcut name=ciauser url=\"http://cia.vc/stats/user/%s\"]]"
"[[!shortcut name=flickr url=\"http://www.flickr.com/photos/%s\"]]"
"[[!shortcut name=man url=\"http://linux.die.net/man/%s\"]]"
"[[!shortcut name=ohloh url=\"http://www.ohloh.net/projects/%s\"]]"
"[[!shortcut name=cpanrt url=\"https://rt.cpan.org/Ticket/Display.html?id=%s\" desc=\"CPAN RT#%s\"]]"
"[[!shortcut name=novellbug url=\"https://bugzilla.novell.com/show_bug.cgi?id=%s\" desc=\"bug %s\"]]"
|
confuses e.g. cvsweb.
|
second parameter, to allow for plugins that need access to this information earlier than the delete hook.
|
array of things that need to be built. (Backwards compatibility code keeps plugins using the old interface working.)
|
This reverts commit 25447bccae0439ea56da7a788482a4807c7c459d.
|
This is needed for the po plugin vs. e.g. meta titles.
In order to get rid of the ugly "rebuilding all pages to fix meta titles" thing,
Joey suggested making "po, at scan time, re-run the scan hooks, passing them
modified content (either converted from po to mdwn or with the escaped stuff
cheaply de-escaped)". Unfortunately this would not work, as the meta plugin
gathers its data using the preprocess hook in scan mode: it would overwrite,
with buggy data, the correct data we would have forced it to gather in po's scan hook.
We then need a hook that runs *after* the preprocess hook has been run in scan
mode, but *before* any page rendering is started. Hence this one.
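A minimal sketch of how a plugin could register and use such a hook (the
hook name "rescan" and the parameters passed to it are assumptions here,
not confirmed by this log):

    package IkiWiki::Plugin::po;
    use warnings;
    use strict;
    use IkiWiki 3.00;

    sub import {
        # runs after preprocess-in-scan-mode, before any page is rendered
        hook(type => "rescan", id => "po", call => \&rescan);
    }

    sub rescan {
        my %params = @_;
        # re-run the scan hooks on content converted from po to mdwn,
        # so plugins like meta record correct data for translated pages
        IkiWiki::run_hooks(scan => sub {
            shift->(page => $params{page}, content => $params{content});
        });
    }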
|
There was some confusion about whether the filename was
relative to srcdir or not. Some test cases and the bzr
plugin assumed it was relative to the srcdir; most everything else
assumed it was absolute.
Changed it to relative, for consistency with the rest
of the rcs_ functions.
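An illustrative call with the new convention (the variable names are just
examples):

    # $pagesources{$page} is a path relative to srcdir, e.g. "index.mdwn"
    my $ctime = IkiWiki::rcs_getctime($pagesources{$page});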
|
Bugfix in passing: new files were not treated as such when no rcs is used.
|
and fix underlaydir override attack guard when srcdir is non-absolute
|
A short story:
Once there was a unicode string, let's call him Srcdir.
Along came a crufty old File::Find, who went through a tree and pasted each
of the leaves in turn onto Srcdir. But this 90's relic didn't decode the
leaves -- despite some of them using unicode! Poor Srcdir, with these
leaves stuck on him, tainted them with his nice unicode-ness. They didn't
look like leaves at all, but instead garbage.
(In other words, perl's unicode support sucks mightily, and drives
us all to drink and bad storytelling. But we knew that..)
So, srcdir is not normally flagged as unicode, because typically it's pure
ascii. And in that case, things work ok; File::Find finds filenames, which
are not yet decoded to unicode, and appends them to the srcdir, and then
decode_utf8 happily converts the whole thing.
But, if the srcdir does contain utf8 characters, that breaks. Or, if a YAML
setup file is used, YAML::Syck's ImplicitUnicode sets the unicode flag of
*all* strings, even those containing only ascii. In either case, srcdir
has the unicode flag set; a non-decoded filename is appended, and the flag
remains set; and decode_utf8 sees the flag and does *nothing*. The result
is that the filename is not decoded, so looks valid and gets skipped.
File::Find only sticks the directory and filenames together in no_chdir
mode .. but we need that mode for security. In order to retain the
security, and avoid the problem, I made it not pass srcdir to File::Find.
Instead, chdir to the srcdir, and pass ".". Since "." is ascii, the problem
is avoided.
Note that chdir srcdir is safe because we check for symlinks in the srcdir
path.
Note that it takes care to chdir back to the starting location. Because
the user may have specified relative paths and so staying in the srcdir
might break. A relative path could even be specified for an underlay dir, so
it chdirs back after each.
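A minimal sketch of the resulting approach (not the actual find_src_files
code; the names and details here are illustrative):

    use File::Find;
    use Encode qw(decode_utf8);
    use Cwd qw(getcwd);

    sub list_src_files {
        my $srcdir = shift;      # may contain utf8, may be a relative path
        my @files;
        my $origdir = getcwd();  # remember where we started
        chdir($srcdir) || die "chdir $srcdir: $!";
        find({
            no_chdir => 1,       # needed for the security checks
            wanted => sub {
                # $File::Find::name is "./foo/bar"; since "." is pure
                # ascii, decode_utf8 reliably decodes the whole thing
                my $file = decode_utf8($File::Find::name);
                $file =~ s{^\./}{};
                push @files, $file if -f $_;
            },
        }, '.');
        chdir($origdir);         # relative underlay paths keep working
        return @files;
    }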
|
A short story:
Once there was a unicode string, let's call him Srcdir.
Along came a crufty old File::Find, who went through a tree and pasted each
of the leaves in turn onto Srcdir. But this 90's relic didn't decode the
leaves -- despite some of them using unicode! Poor Srcdir, with these
leaves stuck on him, tainted them with his nice unicode-ness. They didn't
look like leaves at all, but instead garbage.
In other words, perl's unicode support sucks mightily, and drives
us all to drink and bad storytelling. But we knew that..
So, srcdir is not normally flagged as unicode, because typically it's pure
ascii. And in that case, things work ok; File::Find finds filenames, which
are not yet decoded to unicode, and appends them to the srcdir, and then
decode_utf8 happily converts the whole thing.
But, if the srcdir does contain utf8 characters, that breaks. Or, if a YAML
setup file is used, YAML::Syck's ImplicitUnicode sets the unicode flag of
*all* strings, even those containing only ascii. In either case, srcdir
has the unicode flag set; a non-decoded filename is appended, and
decode_utf8 sees the flag and does *nothing*. The result is that the
filename is not decoded, so looks valid and gets skipped.
File::Find only sticks the directory and filenames together in no_chdir
mode .. but we need that mode for security. In order to retain the
security, and avoid the problem, I made it not pass srcdir to File::Find.
Instead, chdir to the srcdir, and pass ".". Since "." is ascii, the problem
is avoided.
Note that it takes care to chdir back to the starting location. Because
the user may have specified relative paths and so staying in the srcdir
might break. A relative path could even be specified for an underlay dir, so
it chdirs back after each.
|
pages that had contained them.
The problem is that, by the time rendering calls render_dependent, %pagesources
has had deleted files removed from it, so match_comment's lookup of
files in there to see if they had the _comment extension failed.
I had to introduce a hash that temporarily holds filenames of deleted pages
to fix this.
Note that unlike comment(), internal() had avoided this pitfall by being
defined to match both internal and non-internal pages.
|
Necessary so search can remove its indexes for internal pages.
But also, it seems it was an omission not to pass the deleted
pages before.
|
indexed for searching.
|
Probably only the search plugin uses it, so this seemed safe.
|
Plugins will also be able to use this to tell if the template
is being used to generate a wiki page, when misctemplate starts
also using page.tmpl.
|
links to the action bar without modifying the template further.
(COMMENTSLINK and DISCUSSIONLINK could be folded into this, but are kept
separate for now to avoid breaking modified templates.)
|
mode, use time tag.
|
* Ikiwiki can be configured to generate html5 instead of the default xhtml
1.0. The html5 output mode is experimental, not yet fully standards
compliant, and will be subject to rapid change.
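For example, switching this on in a setup file would look something like
the following (assuming the option keeps the name used in this entry):

    # experimental: generate html5 instead of xhtml 1.0
    html5 => 1,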
|
Needed to handle the move of the .js files into ikiwiki/, but also this is
a longstanding bug.
Old pagemtime is not remembered in rebuild mode, and changing that would
need a lot of changes. So instead, loop on pagectime, which is remembered.
Change to remembering old pagesources info in rebuild mode. This seems safe
enough.
|
Registered by passing "" as page name to add_depends.
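A minimal illustration (the pagespec here is made up):

    # register a dependency that applies to the whole wiki,
    # not to any particular page
    add_depends("", "templates/page.tmpl");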
|
Rather than wasting resources recording that every page depends on
page.tmpl, add a special case. The special case currently rebuilds non-page
files too when page.tmpl changes, but that's minor.
|
This entailed changing template_params; it no longer takes the template
filename as its first parameter.
Add template_depends to the API and replace calls to template() with
template_depends() in appropriate places, where a dependency should be
added on the template.
Other plugins don't use template(), so will need further work.
Also, includes are disabled for security. Enabling includes only when using
templates from the templatedir would be nice, but would add a lot of
complexity to the implementation.
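A minimal sketch of the intended usage (the template name and parameters
are illustrative):

    # old style: no dependency recorded on the template file
    my $oldtmpl = template("inlinepage.tmpl", blind_cache => 1);

    # new style: also records that $page depends on the template,
    # so the page gets rebuilt when the template file changes
    my $newtmpl = template_depends("inlinepage.tmpl", $page, blind_cache => 1);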
|
Conflicts:
IkiWiki/Plugin/tag.pm
|
loadindex does not bother populating oldtypedlinks if there is no link
type. However, the code in link_types_changed assumed that if oldtypedlinks
is not defined, and typedlinks is, they must differ.
|
This way, if an autofile is registered for a file that already exists,
it is remembered that it was tried, and it doesn't get recreated when
removed.
|
This fixes the problem that it did not remember if an autofile was deleted,
unless a plugin happened to register the autofile at the same time.
With the new code, we just never recreate an autofile more than once.
The only downside is that the list of autofiles is never pruned either,
and I don't really see a way to prune it.
|
Splitting out this function bothered me. It is conceptually similar to
file_pruned, and yet also very specific to exactly the security needs of
find_src_files.
I liked that it got rid of duplicate code in the latter function. So
instead, I put a helper sub in that function, which I think allows refactoring
things more cleanly and with less boilerplate.
As to the needs of gen_autofile, I'm not convinced this needs to handle
the same set of problems that verify_src_file did. So I sat down and
wrote a custom validator for autofiles, which turned out to need just
three things: make sure the candidate filename is not something
that would be pruned; untaint the candidate filename; and make sure that
srcdir doesn't already have something with its name. (Plus, of course,
all the other checks that were already in gen_autofile.)
(In passing, also fixed a bunch of bugs I had introduced in this branch.)
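A minimal sketch of a check along those lines (illustrative only; the real
checks live in gen_autofile and use ikiwiki's own helpers):

    sub autofile_ok {
        my ($srcdir, $file) = @_;
        # 1. refuse anything the wiki would prune anyway
        return 0 if IkiWiki::file_pruned($file);
        # 2. untaint the candidate filename
        my ($untainted) = $file =~ /^([-[:alnum:]_.\/]+)$/ or return 0;
        # 3. refuse it if srcdir already has something by that name
        return 0 if -e "$srcdir/$untainted";
        return $untainted;
    }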
|
This is more accurate when the removed file is not a page.
|
Ok, this is longer, but features less scary action at a distance.
|
Filenames need to be decoded, as File::Find does not provide them in
decoded form, but other callers of verify_src_file will be using utf8.
|
Made add_autofile take a generator function, and just register the
autofile, for later possible creation. The testing is moved into Render,
which allows cleaning up some stuff.
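A minimal sketch of registering an autofile with a generator (file name,
plugin id and content are illustrative):

    add_autofile("tags/example.mdwn", "tag", sub {
        # only called later, if ikiwiki decides the file should be created
        writefile("tags/example.mdwn", $config{srcdir},
            "Automatically created tag page.");
    });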
|