Commit messages

* This matches the previous behaviour without using the platform data.
* This means there's less reliance on the hardcoded lists of systems and
  targets and mappings between them.
* For the table schema change.
* This means you can query for derivations where builds exist or don't exist
  on a given build server. I think this will come in useful when submitting
  builds from a Guix Data Service instance.
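  A minimal sketch of such a query, using illustrative (not actual) table and
  column names:

      -- Derivations with no builds on build server 1:
      SELECT derivations.id
      FROM derivations
      WHERE NOT EXISTS (
        SELECT 1 FROM builds
        WHERE builds.derivation_id = derivations.id
          AND builds.build_server_id = 1
      );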
* And create a proper git_branches table in the process. I'm hoping this will
  help with slow deletions from the package_derivations_by_guix_revision_range
  table in the case where there are lots of branches, since it'll separate the
  data for one branch from another. These migrations will remove the existing
  data, so rebuild-package-derivations-table will currently need to be run
  manually to regenerate it.
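  A sketch of the kind of table described, with illustrative column names
  rather than the actual schema:

      CREATE TABLE git_branches (
        id integer PRIMARY KEY,
        git_repository_id integer NOT NULL,
        name text NOT NULL,
        UNIQUE (git_repository_id, name)
      );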
* Thanks to Tobias for reporting.
* As I'm seeing the inferior process crash with [1] just after fetching the
  derivation lint warnings. This change appears to help, although it's
  probably just a workaround. When there are more packages/derivations, the
  caches might need clearing while fetching the derivation lint warnings, or
  this will need to be split across multiple processes.

  [1]: Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS
* These cached store connections have caches associated with them that take
  up lots of memory, leading to the inferior crashing. This change seems to
  help.
* To the end of the main revision processing transaction. Currently, I think
  there are issues when this query does update some builds, as those rows in
  the build table remain locked until the end of the transaction. This then
  causes build event submission to hang. Moving this part of the revision
  loading process to the end of the transaction should help to mitigate this.
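  A sketch of the locking behaviour in question (table and column names are
  illustrative):

      BEGIN;
      -- ...the bulk of the revision loading work happens here...

      -- This takes row-level locks on the matching builds rows, held until
      -- COMMIT; a concurrent transaction updating one of these rows (e.g.
      -- to record a build event) blocks until then. Running this UPDATE
      -- last shortens that window.
      UPDATE builds SET processed = true
      WHERE id = ANY ('{1,2,3}'::integer[]);
      COMMIT;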
* When there's a target, render the heading neatly, and include the target
  parameter in the URLs.
* This means that the lock can be acquired after closing the inferior,
  freeing the large amount of memory that the inferior process is probably
  using.
* Since the hardcoded list in the load-new-guix-revision code has been
  updated.
* As the cross targets take quite some time.
* This might help reduce memory usage a little.
* Previously, duplicates could creep through if the duplicate wasn't
  exported, and was only found as a replacement. Now they're filtered out.
  This isn't ideal, as duplicates aren't always mistakes and it would still
  be useful to capture these packages, but having multiple entries for the
  same name+version causes the comparison functionality to break.
* Use the a-version and b-version variables, rather than calling the
  functions again.
* It's 100% translated according to
  <https://translate.fedoraproject.org/projects/guix/guix/nl/>.

  * guix-data-service/model/package-metadata.scm (locales): Add nl_NL.utf-8.

  Signed-off-by: Christopher Baines <mail@cbaines.net>
* Switch from using a recursive query to doing a breadth-first search through
  the graph of derivations, as I think PostgreSQL wasn't doing a great job of
  planning the recursive queries (it would overestimate the rows involved,
  and prefer sequential scans for the derivation_outputs table).
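  To illustrate the shape of the change (table and column names are
  illustrative, not the actual queries):

      -- Before: one recursive query, planned as a whole, where the row
      -- estimates for the recursive part can be badly wrong:
      WITH RECURSIVE refs(id) AS (
          SELECT 123
        UNION
          SELECT inputs.input_derivation_id
          FROM derivation_inputs inputs
          JOIN refs ON inputs.derivation_id = refs.id
      )
      SELECT id FROM refs;

      -- After: the application loops, fetching one level of the graph per
      -- query (tracking already-seen ids itself), so each query is small
      -- and straightforward to plan:
      SELECT DISTINCT inputs.input_derivation_id
      FROM derivation_inputs inputs
      WHERE inputs.derivation_id = ANY ('{123}'::integer[]);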
* Use larger batches and more efficient duplicate deletion.
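  One efficient way to delete duplicates in PostgreSQL, sketched here with
  illustrative names (not the actual query):

      -- Keep one physical row per name+version pair:
      DELETE FROM entries a
      USING entries b
      WHERE a.name = b.name
        AND a.version = b.version
        AND a.ctid > b.ctid;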
* Which is useful when deleting duplicates from large lists.
* So that the job completes. The sequence can be deleted later.
* As I think some operations (like the database backup) can block the DROP
  SEQUENCE part, so at least with this approach the main transaction should
  commit, and the sequence is eventually dropped afterwards.
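  A sketch of the pattern (the sequence name is illustrative):

      BEGIN;
      -- ...main revision loading work...
      COMMIT;

      -- Done separately afterwards: DROP SEQUENCE needs an ACCESS EXCLUSIVE
      -- lock, and even the ACCESS SHARE locks held by a running pg_dump
      -- will block it, so it mustn't hold up the transaction above.
      DROP SEQUENCE IF EXISTS tmp_derivation_ids;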
* This code is a bit tricky, since it should be compatible with old and new
  Guix revisions. I think these changes stop computing package derivations
  for invalid systems, while hopefully not breaking anything.
* I think the main change required is just to stop accessing the now missing
  current-fiber parameter.
* As this is clearer.
* Remove the brackets from the values, since this makes the set of values
  more consistent, and don't display the "no additional fields" value on the
  page.
* Since this speeds up the response if you don't need the nar information.
* For the latest processed revision; this is useful for looking up which
  revision is the latest one to have been processed.
* Which should reduce the peak memory usage.
* As that better reflects what it does.
* Previously it would compute a long list of strings, potentially more than
  100,000 elements long, then split this list up and insert it in chunks.
  Only then could the memory be freed. This new approach builds the strings
  in batches for the insertion query, then moves on to the next batch. This
  should mean that more memory can be freed and reused along the way.
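  A sketch of the batching pattern (table and values are illustrative): each
  batch becomes one multi-row INSERT, so the strings built for a batch can be
  garbage collected before the next batch is constructed:

      INSERT INTO package_derivations (package_id, derivation_id, system)
      VALUES (1, 101, 'x86_64-linux'),
             (2, 102, 'x86_64-linux');
      -- ...then build and run the statement for the next batch...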
* This is helpful when the jobs fail through Guile running out of memory, for
  example.
* The return value of sleep is unreliable (see Guile bug #53139), so use a
  signal handler instead.