Generating system test derivations is difficult, since you generally need to
do potentially expensive builds for the system you're generating the system
tests for. You might not want to disable grafts, for instance, because you
might be trying to test whatever the test is testing in the context of grafts
being enabled.
I'm looking at skipping the system tests on data.guix.gnu.org, because they're
not used and are quite expensive to compute.
As data.qa.guix.gnu.org has lots of branches and 100,000+ metrics, this is
causing Prometheus to time out when fetching the metrics.
I'm not sure there's much value in these metrics, so cut them out for now.
Since this is now quite dynamic, it's useful to have a metric for it.
This will allow for instrumenting low level database functionality, before
anything starts using the database.
For derivation_inputs.
This should help with query performance, as the recursive queries using
derivation_inputs and derivation_outputs are particularly sensitive to the
n_distinct values for these tables.
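Pinning `n_distinct` is done with PostgreSQL's `ALTER TABLE ... ALTER COLUMN
... SET (n_distinct = ...)` option. A minimal sketch of building such a
statement follows; the column name `derivation_id` and the value `-0.05` are
illustrative assumptions, not necessarily what this commit sets.

```python
def set_n_distinct_statement(table, column, n_distinct):
    """Build the PostgreSQL statement that pins the planner's n_distinct
    estimate for a column, overriding what ANALYZE would otherwise sample.
    Table/column names here are illustrative assumptions."""
    return (
        f"ALTER TABLE {table} ALTER COLUMN {column} "
        f"SET (n_distinct = {n_distinct})"
    )

# A negative value is interpreted by PostgreSQL as a fraction of the row
# count, which keeps the estimate meaningful as the table grows.
print(set_n_distinct_statement("derivation_inputs", "derivation_id", -0.05))
```

Running `ANALYZE` afterwards makes the planner pick up the pinned value.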
In the case where multiple data-deleting processes end up running at the same
time.
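The usual PostgreSQL mechanism for this is an advisory lock, so a second
deleter backs off instead of racing the first. A minimal sketch follows; the
lock key and the `FakeCursor` stand-in are assumptions for illustration, not
the service's actual code.

```python
DELETION_LOCK_ID = 42  # arbitrary application-chosen lock key (assumption)

def try_acquire_deletion_lock(cursor, lock_id=DELETION_LOCK_ID):
    """Attempt to take a session-level advisory lock; returns False
    immediately (rather than blocking) if another process holds it."""
    cursor.execute("SELECT pg_try_advisory_lock(%s)", (lock_id,))
    return cursor.fetchone()[0]

class FakeCursor:
    """Stand-in for a DB-API cursor, so the sketch runs without PostgreSQL."""
    def __init__(self, result):
        self.result = result
        self.executed = []
    def execute(self, sql, params=None):
        self.executed.append((sql, params))
    def fetchone(self):
        return (self.result,)

cursor = FakeCursor(True)
if try_acquire_deletion_lock(cursor):
    print("lock held, safe to delete data")
```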
And remove the chunking of derivation lint warnings.
The derivation linter computes the derivation for each package's supported
systems, but there are two problems with this approach. Doing this for each
package in turn forces inefficient use of caches, since most of the cached
data is only relevant to a single system. More importantly though, because the
work of checking one package depends on its supported systems, it's
unpredictable how much work will happen, and this will tend to increase as
more packages support more systems.
Especially because of this last point, I don't think it's worth attempting to
keep running the derivation linter at the moment, because it doesn't seem
sustainable. I can't see a way to run it that's future-proof and won't break
at some point in the future when packages in Guix support more systems.
In the /compare response.
This should enable qa.guix.gnu.org to detect when the base revision for a
comparison is unknown.
To try and bring the peak memory usage down.
This will make it easier to tell when a scheduled build is yet to start, and
can't start due to a missing dependency.
As this is unused.
And drop the chunk size.
As scheduling a build might unblock others.
This means that an output is considered not to be blocking if it has a
scheduled build, just as if it has a succeeded build. Also, scheduling builds
will unblock blocked builds.
This is helpful as it reduces the noise around blocking builds.
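The rule above can be sketched in a few lines: an output stops counting as
blocking once it has either a succeeded or a scheduled build. The function and
the output names below are illustrative assumptions, not the module's actual
code.

```python
# Statuses that mean an output should no longer count as blocking;
# per this change, "scheduled" is treated like "succeeded".
NON_BLOCKING_STATUSES = {"succeeded", "scheduled"}

def blocking_outputs(required_outputs, builds_by_output):
    """Return the subset of required outputs that are still blocking,
    i.e. have no succeeded or scheduled build."""
    return {
        output
        for output in required_outputs
        if not any(status in NON_BLOCKING_STATUSES
                   for status in builds_by_output.get(output, []))
    }

builds = {
    "output-a": ["failed", "scheduled"],  # unblocked by the scheduled build
    "output-b": ["failed"],               # still blocking
}
print(blocking_outputs(set(builds), builds))
```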
In various places in the blocked-builds module.
So that the queries don't get cancelled by the statement timeout.
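In PostgreSQL, the common way to exempt specific maintenance queries is `SET
LOCAL statement_timeout = 0`, which only lasts until the end of the current
transaction. A minimal sketch, with a recording stand-in cursor so it runs
without a database; the context-manager shape is an assumption, not the
service's actual API.

```python
from contextlib import contextmanager

@contextmanager
def no_statement_timeout(cursor):
    """Disable statement_timeout for the current transaction only;
    SET LOCAL reverts automatically at commit or rollback, so other
    queries keep the globally configured timeout."""
    cursor.execute("SET LOCAL statement_timeout = 0")
    yield cursor

class RecordingCursor:
    """Stand-in DB-API cursor that records executed statements."""
    def __init__(self):
        self.executed = []
    def execute(self, sql):
        self.executed.append(sql)

cur = RecordingCursor()
with no_statement_timeout(cur):
    cur.execute("SELECT count(*) FROM derivations")
print(cur.executed)
```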
To make it more efficient.
This also fixes a typo in the partition name.
As it doesn't work in a transaction.
This will hopefully provide a less expensive way of finding out if a scheduled
build is probably blocked by other builds failing or being canceled.
By working this out when the build events are received, it should be more
feasible to include information about whether builds are likely blocked or not
in various places (e.g. revision comparisons).
I think the idle connections associated with idle threads are still taking up
memory, so especially now that you can configure an arbitrary number of
threads (and thus connections), I think it's good to close them regularly.
As the guix-data-service process seems to be using excessive amounts of
memory, this will be useful for tracking it.
And double the default to 16.
As I think with lots of requests, this could become a bottleneck.
To avoid long running queries.
Chunk the values inserted in the query, rather than the derivations involved,
as this is more consistent.
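Chunking the values rather than the derivations can be sketched as follows:
split the rows into fixed-size batches and emit one parameterized INSERT per
batch, so every statement carries a bounded VALUES list. Column names and the
chunk size are illustrative assumptions.

```python
def chunk(values, size):
    """Split values into fixed-size chunks (the last may be smaller)."""
    return [values[i:i + size] for i in range(0, len(values), size)]

def insert_statements(table, columns, rows, chunk_size):
    """Build one parameterized INSERT per chunk of rows, so no single
    statement carries an unbounded VALUES list."""
    placeholders = "(" + ", ".join(["%s"] * len(columns)) + ")"
    statements = []
    for batch in chunk(rows, chunk_size):
        values_sql = ", ".join([placeholders] * len(batch))
        sql = (f"INSERT INTO {table} ({', '.join(columns)}) "
               f"VALUES {values_sql}")
        params = [field for row in batch for field in row]
        statements.append((sql, params))
    return statements

stmts = insert_statements("derivation_inputs",
                          ["derivation_id", "derivation_output_id"],
                          [(1, 10), (2, 20), (3, 30)], chunk_size=2)
print(len(stmts))
```

Chunking by values keeps statement size consistent regardless of how many
inputs each individual derivation has.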
This helps to avoid queries getting logged as slow just because of the amount
of data.
As these queries are still slow enough to be logged.