Commit messages

As these can cause deadlocks. This will probably cause errors, so some
retrying will need to be added.

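The Guix Data Service is written in Guile, so the following is only an illustrative Python sketch of the retry pattern the message calls for: retry a transaction when the database aborts it with a deadlock, backing off with jitter so the competing transactions don't collide again immediately. `DeadlockError`, the attempt count, and the delay are all hypothetical.

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for the database driver's deadlock error."""

def with_deadlock_retries(operation, attempts=5, base_delay=0.05):
    """Run operation, retrying when the database aborts it with a
    deadlock.  Sleeps with exponential backoff plus jitter between
    attempts; re-raises after the final attempt fails."""
    for attempt in range(attempts):
        try:
            return operation()
        except DeadlockError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

The jitter matters: if both deadlocked transactions retry after an identical fixed delay, they are likely to deadlock again.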
Rather than the derivations system id, as this helps PostgreSQL run the query
faster.

Move the batching to the database, which should reduce memory usage while
removing the limit on the number of fetched narinfos.

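One common way to batch in the database rather than in the client is keyset pagination: fetch a bounded page at a time, resuming from the last id seen, so memory stays constant and no overall limit is needed. This Python/SQLite sketch is only an illustration of that idea (the `narinfos` schema and function name here are hypothetical, not the service's actual code):

```python
import sqlite3

def narinfos_in_batches(conn, batch_size=2):
    """Yield rows from the narinfos table one batch at a time using
    keyset pagination, so the client never holds more than one batch
    in memory and no cap on the total row count is required."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, store_path FROM narinfos"
            " WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch_size),
        ).fetchall()
        if not rows:
            return
        yield rows
        last_id = rows[-1][0]  # resume after the last row seen
```

Keyset pagination also avoids the cost of large OFFSETs, which make the database re-scan skipped rows on every page.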
Currently I'm seeing failures due to guile-gnutls not supporting suspendable
ports (write_wait_fd), so batch the requested outputs to try and avoid this.

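Batching a request list down to fixed-size chunks is a small, generic operation; as a hedged Python sketch (the name `chunk` and the size are illustrative, not from the service's Guile code):

```python
def chunk(items, size):
    """Split items into consecutive lists of at most `size` elements,
    so each request sent to the server stays small."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```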
The request timeout should ensure that the operations don't back up if the
thread pool is overloaded.

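The idea above — fail a request fast instead of letting work queue behind an overloaded pool — can be sketched in Python with `concurrent.futures` (the pool size, timeout, and function names are hypothetical; the actual service uses Guile Fibers):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

pool = ThreadPoolExecutor(max_workers=4)

def handle_with_timeout(work, timeout):
    """Run work on the shared pool, but give up after `timeout`
    seconds so requests fail fast rather than backing up behind an
    overloaded pool."""
    future = pool.submit(work)
    try:
        return future.result(timeout=timeout)
    except TimeoutError:
        future.cancel()  # drop it from the queue if it hasn't started yet
        raise
```

Note that `cancel()` only helps for work still queued; a task already running on a thread keeps running, so the timeout bounds the caller's wait, not the worker's effort.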
So the fetch-result-of-defered-thunk procedure can be removed.

In to two thread pools: a default one, and one reserved for essential
functionality.

There are some pages that use slow queries, so this should help stop those
pages blocking other operations.

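The split described above is a bulkhead pattern: slow report pages share one pool while essential endpoints get their own, so a pile-up of slow queries cannot starve the essentials. A minimal Python sketch, with hypothetical pool sizes and names (the service itself does this with Fibers in Guile):

```python
from concurrent.futures import ThreadPoolExecutor

# Slow, query-heavy pages share the default pool; essential
# functionality gets a separate, reserved pool.
default_pool = ThreadPoolExecutor(max_workers=8)
essential_pool = ThreadPoolExecutor(max_workers=2)

def dispatch(work, essential=False):
    """Submit work to the pool matching its importance."""
    pool = essential_pool if essential else default_pool
    return pool.submit(work)
```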
After the migrations have run.

Use the new approach of looking up the distribution of the derivations, and
building a non-recursive query specifically for this revision. This should
avoid PostgreSQL picking a poor plan for performing the query.

This might be generally useful, but I've been looking at it as it offers a way
to try and improve query performance when you want to select all the
derivations related to the packages for a revision.

The data looks like this (for a specified system and target):

┌───────┬───────┐
│ level │ count │
├───────┼───────┤
│    15 │     2 │
│    14 │     3 │
│    13 │     3 │
│    12 │     3 │
│    11 │    14 │
│    10 │    25 │
│     9 │    44 │
│     8 │    91 │
│     7 │  1084 │
│     6 │   311 │
│     5 │   432 │
│     4 │   515 │
│     3 │   548 │
│     2 │  2201 │
│     1 │ 21162 │
│     0 │ 22310 │
└───────┴───────┘

Level 0 reflects the number of packages. Level 1 is similar, as you have all
the derivations for the package origins. The remaining levels contain fewer
packages, since it's mostly just derivations involved in bootstrapping.

When using a recursive CTE to collect all the derivations, PostgreSQL assumes
that each derivation has the same number of inputs, and this leads to a large
overestimation of the number of derivations per revision. This in turn can
lead to PostgreSQL picking a slower way of running the query.

When it's known how many new derivations you should see at each level, it's
possible to inform PostgreSQL of this by using LIMITs at various points in the
query. This reassures the query planner that it's not going to be handling
lots of rows and helps it make better decisions about how to execute the
query.

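To make the "level" notion above concrete, here is a small Python/SQLite sketch of the kind of recursive CTE involved: it walks a toy `derivation_inputs` graph from a root and counts how many distinct derivations appear at each level. The table name matches the one the commits mention, but the schema and data here are a made-up miniature; the real queries run against PostgreSQL, where (as the message explains) the planner's row estimates for such recursive CTEs are crude, motivating the per-level LIMIT hints.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE derivation_inputs (derivation_id INT, input_id INT);
-- tiny input graph: derivation 1 depends on 2 and 3, both of which
-- depend on 4 (a shared bootstrap-style input)
INSERT INTO derivation_inputs VALUES (1,2),(1,3),(2,4),(3,4);
""")

# Walk the inputs of the root derivation with a recursive CTE and
# count the distinct derivations reached at each level.
rows = conn.execute("""
WITH RECURSIVE inputs(id, level) AS (
    SELECT 1, 0
  UNION ALL
    SELECT di.input_id, inputs.level + 1
    FROM derivation_inputs di JOIN inputs ON di.derivation_id = inputs.id
)
SELECT level, COUNT(DISTINCT id) FROM inputs
GROUP BY level ORDER BY level
""").fetchall()
```

On this toy graph the distribution is level 0 → 1, level 1 → 2, level 2 → 1; note derivation 4 is reached twice but counted once, which is exactly the kind of de-duplication that makes the planner's "same fan-out at every level" assumption overestimate row counts.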
I think this might help with queries that don't use the build_server_id.

This seems to generate better plans.

This is a bit ugly, but might speed up computing derivations for system tests.

To better prevent two processes running at the same time.

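A standard way to guarantee only one process runs at a time is an exclusive, non-blocking advisory lock on a well-known file. This Python sketch illustrates the pattern with `fcntl.flock` (Unix-only; the function name and path handling are hypothetical, not the service's actual mechanism):

```python
import fcntl

def try_acquire_lock(path):
    """Take an exclusive, non-blocking advisory lock on `path`.
    Returns the open file (keep it open to hold the lock), or None
    when another process already holds it.  The lock is released
    automatically if the process exits, even on a crash."""
    f = open(path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except OSError:
        f.close()
        return None
```

Unlike a plain pid file, a crashed holder can't leave this lock stuck: the kernel drops it when the file descriptor closes.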
Harmonize "Build change" options between the selection menu and the
documentation.

* guix-data-service/web/compare/html.scm (compare/package-derivations):
Replace "Still broken" with "Still failing" in the "Build change" help text.

Signed-off-by: Christopher Baines <mail@cbaines.net>

As it's frequently useful to know how many packages/builds some change has
affected.

When deleting data for a branch.

The newer Guile Fibers web server will use the chunked transfer encoding when
a procedure is used and the content length is unspecified. This is good for
large responses, but unnecessary here. Also, there's a bug with the charset,
so these changes respond with correctly encoded bytevectors to avoid that.

Newer versions of Guile Fibers will now use chunked encoding when a procedure
is used (and no content length is set). This is good, but not always what is
wanted, and there's also an issue with the port encoding.

This commit switches to responding with a string/bytevector when more
appropriate, plus explicitly setting the port encoding where that's needed.

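The underlying HTTP idea is general: if the body is pre-encoded bytes with an explicit Content-Length, the server has no reason to fall back to chunked transfer encoding, and encoding up front sidesteps any port/charset mismatch. A hedged Python sketch (function and header layout are illustrative, not the Guile Fibers API):

```python
def encoded_response(text):
    """Return (headers, body) with the body pre-encoded as UTF-8
    bytes.  The explicit Content-Length lets the server skip chunked
    transfer encoding, and encoding here avoids relying on the output
    port's charset."""
    body = text.encode("utf-8")
    headers = {
        "Content-Type": "text/html; charset=utf-8",
        "Content-Length": str(len(body)),
    }
    return headers, body
```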
Generating system test derivations is difficult, since you generally need to
do potentially expensive builds for the system you're generating the system
tests for. You might not want to disable grafts, for instance, because you
might be trying to test whatever the test is testing in the context of grafts
being enabled.

I'm looking at skipping the system tests on data.guix.gnu.org, because they're
not used and quite expensive to compute.

As data.qa.guix.gnu.org has lots of branches and 100,000+ metrics, and this is
causing Prometheus to time out fetching the metrics.

I'm not sure there's much value in these metrics, so cut them out for now.

Since this is now quite dynamic, it's useful to have a metric for it.

This will allow for instrumenting low level database functionality, before
anything starts using the database.

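Instrumenting at the lowest database layer means every caller above it is observed automatically. As an illustration only (the metric store, names, and decorator shape are hypothetical; the service does this in Guile), a Python sketch of wrapping a low-level query function to count and time calls:

```python
import time
from collections import defaultdict

# name -> {"calls": n, "total_seconds": t}; a stand-in for a real
# metrics registry such as a Prometheus client.
query_metrics = defaultdict(lambda: {"calls": 0, "total_seconds": 0.0})

def instrumented(name):
    """Wrap a low-level database function so every call is counted
    and timed, even calls made before higher layers are set up."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                m = query_metrics[name]
                m["calls"] += 1
                m["total_seconds"] += time.monotonic() - start
        return inner
    return wrap
```

The `try/finally` ensures failed queries are still counted, which is usually what you want from low-level instrumentation.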
For derivation_inputs.

This should help with query performance, as the recursive queries using
derivation_inputs and derivation_outputs are particularly sensitive to the
n_distinct values for these tables.

In the case where multiple data deleting processes end up running at the same
time.

And remove the chunking of derivation lint warnings.

The derivation linter computes the derivation for each package's supported
systems, but there are two problems with the approach. By doing this for each
package in turn, it forces inefficient uses of caches, since most of the
cached data is only relevant to a single system. More importantly though,
because the work of checking one package is dependent on its supported
systems, it's unpredictable how much work will happen, and this will tend to
increase as more packages support more systems.

I think especially because of this last point, it's not worth attempting to
keep running the derivation linter at the moment, because it doesn't seem
sustainable. I can't see a way to run it that's future-proof and won't break
at some point in the future when packages in Guix support more systems.

To hopefully bring down the memory usage from idle connections.