To allow for having branches in multiple git repositories.

As most pages vary based on the Accept header.

The licenses table, along with the package_metadata table, had duplicate
values. This could happen as the unique constraints on those tables didn't
properly account for the nullable fields.

The duplicates in those tables also affected the license_sets, packages and
package_derivations tables in a similar way. Finally, the
guix_revision_package_derivations table was also affected.

This commit adds a migration to fix the data, as well as the constraints. The
code to populate the licenses and package_metadata tables is also updated.

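The underlying SQL behaviour can be sketched as follows. This uses SQLite as a stand-in for PostgreSQL, and the table and column names are illustrative rather than the service's actual schema: in SQL, NULL never compares equal to NULL, so a plain UNIQUE constraint over a nullable column doesn't prevent duplicate rows.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE licenses (name TEXT NOT NULL, uri TEXT, UNIQUE (name, uri))")

# Both inserts succeed, because the NULL uri values compare as distinct.
con.execute("INSERT INTO licenses VALUES ('gpl3+', NULL)")
con.execute("INSERT INTO licenses VALUES ('gpl3+', NULL)")
duplicates = con.execute("SELECT count(*) FROM licenses").fetchone()[0]

# One way to repair the constraint: a unique index over an expression
# that maps NULL to a sentinel, so the rows above would now collide.
con.execute("DELETE FROM licenses")
con.execute(
    "CREATE UNIQUE INDEX licenses_key ON licenses (name, coalesce(uri, ''))")
con.execute("INSERT INTO licenses VALUES ('gpl3+', NULL)")
try:
    con.execute("INSERT INTO licenses VALUES ('gpl3+', NULL)")
    constraint_enforced = False
except sqlite3.IntegrityError:
    constraint_enforced = True
```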
To allow easily comparing revisions.

To enable adding a link to cgit to the comparison page.

As that's probably more useful than recent revisions and jobs.

As I'm thinking about using this on the index page.

This is common to both view-revision and unknown-revision.

As this was duplicated in the functions for viewing known and unknown
revisions.

Make the link between repositories and branches clearer, replacing the
/branches and /branch pages with /repository/ and /repository/*/branch/*
pages.

This allows manually running jobs that have failed.

Specify 'GET rather than GET, to actually match the method symbol, rather
than binding whatever method was used to GET.

This means it'll connect over the socket, like the application.

Reserve some capacity to process revisions which are the tip of a branch.
This should reduce the time between new revisions appearing and being
processed.

So that they aren't retried again and again.

There are some revisions of Guix which take forever to process (or at least
days). To avoid jobs running forever, kill them once they've been running for
a while (by default, 24 hours).

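The timeout mechanism can be sketched like this, a minimal Python stand-in for the real Guile implementation. A short timeout replaces the 24-hour default, and the child command is just a placeholder job:

```python
import subprocess
import sys

def run_job_with_timeout(argv, timeout_seconds):
    """Run a job subprocess; kill it if it exceeds the time limit."""
    try:
        # subprocess.run kills the child itself when the timeout expires.
        subprocess.run(argv, timeout=timeout_seconds)
        return "finished"
    except subprocess.TimeoutExpired:
        return "killed"

# A stand-in "job" that would run for a minute; 0.5s replaces 24 hours.
result = run_job_with_timeout(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    timeout_seconds=0.5)
```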
This should speed up processing new revisions, reduce the latency between
finding out about new revisions and processing them, and help manage memory
usage, by processing each job in a process that then exits.

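The process-per-job idea can be sketched with Python's multiprocessing module (illustrative only; the real service is written in Guile). The point of the design is that any memory a job allocates is returned to the operating system when its process exits, rather than accumulating in one long-lived worker:

```python
from multiprocessing import Process

def process_job(job_id):
    # Memory allocated here is reclaimed when this process exits.
    data = [0] * 100_000
    assert len(data) == 100_000

# Run each job in its own short-lived process, one after another.
exit_codes = []
for job_id in [1, 2, 3]:
    p = Process(target=process_job, args=(job_id,))
    p.start()
    p.join()
    exit_codes.append(p.exitcode)
```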
This allows easily processing an individual job by id. This may be useful
manually, but also when processing jobs in parallel, as forking doesn't work
well with the libpq library used by squee.

This is working towards running the jobs in parallel. Each job looks at the
records in the database and adds missing ones. If other jobs, running in
different transactions, insert the same missing records at the same time,
this could cause an error.

Therefore, to avoid this problem, lock before inserting the data. This will
allow the jobs to be processed in parallel, and it shouldn't have too much of
an effect on performance, as the slow part happens outside of the
transaction.

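The check-then-insert race and the lock that serialises it can be sketched as follows. SQLite plus a thread lock stand in for PostgreSQL table locking, and the table name is illustrative; the structure (take the lock, check for the record, insert only if missing) is the technique the commit describes:

```python
import sqlite3
import threading

# Stand-in for PostgreSQL's table lock: serialise check-then-insert.
insert_lock = threading.Lock()
con = sqlite3.connect(":memory:", check_same_thread=False)
con.execute("CREATE TABLE licenses (name TEXT)")

def ensure_license(name):
    with insert_lock:                      # lock before inserting
        exists = con.execute(
            "SELECT 1 FROM licenses WHERE name = ?", (name,)).fetchone()
        if exists is None:
            con.execute("INSERT INTO licenses (name) VALUES (?)", (name,))

# Eight "jobs" racing to insert the same missing record.
threads = [threading.Thread(target=ensure_license, args=("gpl3+",))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
rows = con.execute("SELECT count(*) FROM licenses").fetchone()[0]
```

Without the lock, two jobs could both see the record as missing and both insert it (or error on a unique constraint); with it, exactly one insert happens.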
This is in preparation for running jobs in parallel. The channels code in
Guix uses a cached copy of the Git repository. Multiple jobs can't
concurrently access this without causing issues, so use an advisory lock to
ensure that only one job is using the repository at a time.

Use symbol-hash to convert a symbol to the number for the lock. I'm hoping
this is OK, and it seems to be stable.

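PostgreSQL advisory locks are keyed by a number, so a stable mapping from a name to an integer is all that's needed. A sketch of that idea, with zlib.crc32 standing in for Guile's symbol-hash (Python's own hash() of a string changes between runs, so it wouldn't do), and a hypothetical lock name:

```python
import zlib

def advisory_lock_key(name: str) -> int:
    # Stable across runs and processes, which is what matters for a
    # lock number shared between independent database connections.
    return zlib.crc32(name.encode("utf-8"))

key = advisory_lock_key("git-repository")  # hypothetical lock name
```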
This happened for a package with #f as the licenses. That's incorrect, but
try to handle it without erroring.

This helps when working out which connection to the database is doing what.

It doesn't work as intended unless the module is also specified, so do that.

If the job started and was then restarted, the row will already exist, so
don't error on a conflict.

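The idempotent insert can be sketched in SQL. SQLite (3.24+) accepts the same ON CONFLICT clause as PostgreSQL, and the table name here is illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE job_logs (job_id INTEGER PRIMARY KEY, contents TEXT)")

# First start of the job inserts the row.
con.execute("INSERT INTO job_logs VALUES (1, '')")

# The job is restarted: the same insert runs again, but instead of
# raising a unique-constraint error, the conflict is ignored.
con.execute(
    "INSERT INTO job_logs VALUES (1, '') ON CONFLICT (job_id) DO NOTHING")
rows = con.execute("SELECT count(*) FROM job_logs").fetchone()[0]
```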
To better separate the code that needs to happen after a lock has been
acquired, which allows loading revisions concurrently without concurrent
insertion issues.

Previously, the query for the jobs page was really slow, as it checked the
load_new_guix_revision_job_log_parts table for each job, doing a sequential
scan through the potentially large table.

Adding an index didn't seem to help, as the query planner would believe the
query could return loads of rows, when actually all that needed checking was
whether a single row existed with a given job_id.

To avoid adding the index to the load_new_guix_revision_job_log_parts table
and fighting with the query planner, this commit changes the
load_new_guix_revision_job_logs table to include a blank entry for jobs which
are currently being processed. This is inserted at the start of the job, and
then updated at the end to combine and replace all the parts.

This all means that the jobs page should render quickly now.

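The scheme can be sketched as follows, with SQLite and shortened, illustrative table names. A blank row inserted at job start makes "does this job have a log?" a fast primary-key lookup; at job end the parts are combined into that row and deleted:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE log_parts (job_id INTEGER, n INTEGER, contents TEXT)")
con.execute(
    "CREATE TABLE logs (job_id INTEGER PRIMARY KEY, contents TEXT)")

# Job starts: insert the blank entry.
con.execute("INSERT INTO logs VALUES (1, '')")

# While running, output accumulates as parts.
con.executemany("INSERT INTO log_parts VALUES (?, ?, ?)",
                [(1, 1, "fetching revision\n"),
                 (1, 2, "loading packages\n")])

# Job ends: combine the parts into the log entry, then delete them.
con.execute("""
    UPDATE logs SET contents =
      (SELECT group_concat(contents, '')
         FROM (SELECT contents FROM log_parts WHERE job_id = 1 ORDER BY n))
     WHERE job_id = 1""")
con.execute("DELETE FROM log_parts WHERE job_id = 1")
log, = con.execute(
    "SELECT contents FROM logs WHERE job_id = 1").fetchone()
```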
As the page can be quite long, jumping to the top and bottom is really
useful.

Replace the Guile-side HTML escaping with a less complete, but hopefully
faster, PostgreSQL-side HTML escaping approach.

Also, allow reading just part of the log (by default, the last 1,000,000
characters), as this should render quickly.

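Both changes can be sketched in SQL. SQLite stands in for PostgreSQL here and the schema is illustrative; the idea is to escape the log on the database side with nested replace() calls (doing '&' first so text isn't double-escaped) and to fetch only the tail of the log rather than the whole thing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE logs (job_id INTEGER PRIMARY KEY, contents TEXT)")
con.execute("INSERT INTO logs VALUES (1, ?)",
            ("x" * 40 + "<error & exit>",))

limit = 20  # stands in for the real default of 1,000,000 characters
escaped_tail, = con.execute("""
    SELECT replace(replace(replace(substr(contents, ?),
                                   '&', '&amp;'),
                           '<', '&lt;'),
                   '>', '&gt;')
      FROM logs WHERE job_id = 1""", (-limit,)).fetchone()
```

Passing a negative start to substr() selects characters from the end of the string, so only the last `limit` characters are escaped and returned.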
Which does the same thing as parse-result-limit, which may have an overly
specific name.

So that this is logged.

This part of the soft-port seems to be called, but I don't know why, and
trying to close the output port causes issues.

Which shows the output for that job.

So that it can easily be shown through the web interface. There are two
tables being used: one temporarily stores the output as it's produced while
the job is running, and the other stores the whole log once the job has
finished.

Try to isolate the code that inserts into the database, so that the relevant
tables can be locked during this time.