| Commit message | Author | Age |
The guix-daemon WAL is still growing excessively, so avoid doing anything with
the long-running store connection except registering temporary roots.
As I think the temporary roots on the long-running store connection should be
sufficient.
As this isn't for correctness reasons, but for resource usage. I'm hoping to
manage this differently.
To help the QA data service prioritise branches over patches.
Since this seems to happen for i586-gnu on core-updates currently, and I can't
seem to reproduce the issue locally or work out what's wrong.
As there's an issue with current core-updates that I'm struggling to track
down.
As I think this is currently quite slow.
Make parallel use of inferiors when computing channel instance derivations,
and when extracting information about a revision. This should allow for some
horizontal scalability, reducing the impact of additional systems for which
derivations need computing.

This commit also fixes an apparent issue with package replacements, as
previously the wrong id was used, and this hid some issues around
deduplication.
I'm sure this was present before, but maybe lost during some refactoring.
To help find the right glibc-locales to use.
As the default is a void port.
As it's not actually used.
Just have one fiber at the moment, but this will enable using fibers for
parallelism in the future.

Fibers seemed to cause problems with the logging setup, which was a bit odd in
the first place, so move logging to the parent process, which is better anyway.
Move in the direction of being able to run multiple inferior REPLs, and use
some vectors rather than lists in places (maybe this is more efficient).
This is mostly a workaround for the occasional problems with the guix-commits
mailing list, as it can break, and then the data service doesn't learn about
new revisions until the problem is fixed.

I think it's still a generally good feature though: it allows deploying the
data service without it consuming emails to learn about new revisions, and
it's a step towards integrating some way of notifying the data service to
poll.
As previously it only applied to system tests.
As the ordering from Guix seems to be non-deterministic.
There's an issue where sometimes, for i686-linux and armhf-linux, only a few
package derivations are computed.

This commit tries to simplify the code, and adds some conditional logging for
the guix package, which might help reveal what's going on.
This helps with replacements, as the original package is usually higher up in
the file.
Make sure to log any errors, and also use a more efficient approach that
sends less data to the inferior.
To help with debugging.
This might be generally useful, but I've been looking at it as it offers a way
to try and improve query performance when you want to select all the
derivations related to the packages for a revision.
The data looks like this (for a specified system and target):
┌───────┬───────┐
│ level │ count │
├───────┼───────┤
│ 15 │ 2 │
│ 14 │ 3 │
│ 13 │ 3 │
│ 12 │ 3 │
│ 11 │ 14 │
│ 10 │ 25 │
│ 9 │ 44 │
│ 8 │ 91 │
│ 7 │ 1084 │
│ 6 │ 311 │
│ 5 │ 432 │
│ 4 │ 515 │
│ 3 │ 548 │
│ 2 │ 2201 │
│ 1 │ 21162 │
│ 0 │ 22310 │
└───────┴───────┘
Level 0 reflects the number of packages. Level 1 is similar, as you have all
the derivations for the package origins. The remaining levels contain fewer
entries, since they're mostly just derivations involved in bootstrapping.

When using a recursive CTE to collect all the derivations, PostgreSQL assumes
that each derivation has the same number of inputs, and this leads to a
large overestimation of the number of derivations per revision. This in turn
can lead to PostgreSQL picking a slower way of running the query.

When it's known how many new derivations you should see at each level, it's
possible to inform PostgreSQL of this by using LIMITs at various points in the
query. This reassures the query planner that it's not going to be handling
lots of rows and helps it make better decisions about how to execute the
query.
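A minimal sketch of that shape of query, assuming a hypothetical
package_derivations table, illustrative column names, and illustrative LIMIT
values; the real query, table layout, and limits may differ, and whether a
LIMIT is accepted directly in a recursive branch should be checked against the
PostgreSQL version in use:

```sql
-- Illustrative sketch only: walk derivation_inputs starting from a
-- revision's package derivations, capping each branch with a LIMIT so
-- the planner expects a bounded number of rows.  package_derivations,
-- the column names, and the 25000 values are assumptions.
WITH RECURSIVE all_derivations(derivation_id) AS (
    (SELECT derivation_id
       FROM package_derivations
      WHERE revision_id = $1
      LIMIT 25000)    -- level 0: roughly the number of packages
  UNION
    (SELECT derivation_outputs.derivation_id
       FROM all_derivations
      INNER JOIN derivation_inputs
         ON derivation_inputs.derivation_id = all_derivations.derivation_id
      INNER JOIN derivation_outputs
         ON derivation_outputs.id = derivation_inputs.derivation_output_id
      LIMIT 25000)    -- no level introduces more rows than this
)
SELECT derivation_id FROM all_derivations;
```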
This is a bit ugly, but might speed up computing derivations for system tests.
Generating system test derivations is difficult, since you generally need to
do potentially expensive builds for the system you're generating the system
tests for. You might not want to disable grafts, for instance, because you
might be trying to test whatever the test is testing in the context of grafts
being enabled.

I'm looking at skipping the system tests on data.guix.gnu.org, because they're
not used and are quite expensive to compute.
For derivation_inputs.
This should help with query performance, as the recursive queries using
derivation_inputs and derivation_outputs are particularly sensitive to the
n_distinct values for these tables.
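As a sketch, such an override in PostgreSQL looks like the following; the
column chosen and the value are illustrative assumptions, not the actual
settings used:

```sql
-- Tell the planner that around 5% of derivation_inputs rows have a
-- distinct derivation_id (negative values are a fraction of the row
-- count, positive values an absolute count), then re-analyze the
-- table so the setting takes effect.  The -0.05 value is an assumption.
ALTER TABLE derivation_inputs
  ALTER COLUMN derivation_id SET (n_distinct = -0.05);
ANALYZE derivation_inputs;
```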
And remove the chunking of derivation lint warnings.

The derivation linter computes the derivation for each package's supported
systems, but there are two problems with this approach. By doing this for each
package in turn, it forces inefficient use of caches, since most of the
cached data is only relevant to a single system. More importantly though,
because the work of checking one package depends on its supported systems,
it's unpredictable how much work will happen, and this will tend to increase
as more packages support more systems.

I think especially because of this last point, it's not worth attempting to
keep running the derivation linter at the moment, because it doesn't seem
sustainable. I can't see a way to run it that's future-proof and won't break
at some point when packages in Guix support more systems.
To try and bring the peak memory usage down.
To avoid long-running queries.
To better understand the memory usage when this is happening.