| Commit message | Author | Age |
|
|
|
|
|
|
| |
This is useful when builds finish quickly, since there could then be more than
2 idle threads, and threads would start stopping. This way, each thread waits
20 seconds before stopping, which should be enough time for new builds to be
fetched.
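A minimal sketch of the idle-timeout behaviour described above, in Guile; `worker-loop`, the queue handling, and the constant name are illustrative stand-ins, not the actual agent code:

```scheme
(use-modules (ice-9 threads)
             (ice-9 q))

(define %idle-timeout 20)  ; seconds an idle thread lingers before stopping

(define (worker-loop mutex condvar job-queue)
  "Run jobs from JOB-QUEUE until it has stayed empty for %idle-timeout
seconds, then return, stopping this thread."
  (lock-mutex mutex)
  (let loop ()
    (cond
     ((not (q-empty? job-queue))
      (let ((job (deq! job-queue)))
        (unlock-mutex mutex)
        (job)                     ; run the build outside the lock
        (lock-mutex mutex)
        (loop)))
     ;; Queue empty: wait up to %idle-timeout seconds for a new job;
     ;; wait-condition-variable returns #f on timeout.
     ((wait-condition-variable condvar mutex
                               (+ (current-time) %idle-timeout))
      (loop))
     (else
      (unlock-mutex mutex)))))    ; timed out, so this thread stops
```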
|
| |
|
| |
|
|
|
|
|
| |
A fixed threshold should make things line up when there are multiple ports in
use.
|
| |
|
|
|
|
|
| |
If the connection closes before all the data has been received; a partial nar
file isn't useful.
|
| |
|
|
|
|
| |
A broken one was committed previously.
|
|
|
|
|
|
|
|
| |
Don't compress and then send; since compression can be slower than sending,
doing both at the same time is probably faster. Add
make-chunked-output-port*, which might be more efficient than the Guile
chunked output port; it disables garbage collection to avoid issues with
GnuTLS, and tries to force the garbage collector to run if garbage is building
up.
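The "compress while sending" idea above can be sketched with a pipe and a second thread: one thread compresses into the pipe's write end while the main thread sends whatever has been compressed so far. This is a sketch, not the actual code, and `compress-port!` is a hypothetical stand-in for a real compressor:

```scheme
(use-modules (ice-9 threads)
             (ice-9 binary-ports))

;; COMPRESS-PORT! is a hypothetical stand-in that reads from INPUT and
;; writes compressed data to OUTPUT.
(define (send-compressed input sock compress-port!)
  (let* ((p         (pipe))
         (read-end  (car p))
         (write-end (cdr p))
         (compressor
          (call-with-new-thread
           (lambda ()
             (compress-port! input write-end)
             (close-port write-end)))))
    ;; Meanwhile, send compressed data as it becomes available, so
    ;; compression and sending overlap rather than run back to back.
    (let loop ()
      (let ((bv (get-bytevector-some read-end)))
        (unless (eof-object? bv)
          (put-bytevector sock bv)
          (loop))))
    (join-thread compressor)))
```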
|
|
|
|
| |
Bring more stuff inside one with-gc-protection block.
|
|
|
|
| |
As this reduces the GC disabling/enabling.
|
| |
|
|
|
|
|
|
|
| |
These procedures actually increment/decrement a counter, so gc-enable might
not enable garbage collection if gc-disable has been called twice in a
row. dynamic-wind should ensure that gc-enable is always called after
gc-disable, even if the thunk raises an exception, for example.
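The pairing described above can be sketched as follows; `with-gc-protection` is the name used elsewhere in these messages, but this definition is illustrative, not the actual one:

```scheme
(define (call-with-gc-protection thunk)
  ;; dynamic-wind guarantees gc-enable runs even if THUNK raises,
  ;; keeping the gc-disable/gc-enable counter balanced.
  (dynamic-wind
    gc-disable
    thunk
    gc-enable))

(define-syntax-rule (with-gc-protection exp ...)
  (call-with-gc-protection (lambda () exp ...)))
```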
|
|
|
|
|
|
|
|
| |
Even if they aren't requeued, the agent should learn about the job again from
the coordinator.
I'm mostly removing this because I'm seeing agents seemingly process the same
job twice at the same time, and I wonder if it's related.
|
| |
|
|
|
|
|
| |
Since the GC-breaking-GnuTLS problem can probably occur for these requests as
well.
|
|
|
|
| |
I think this works better.
|
|
|
|
|
|
|
|
|
| |
Guile's garbage collector interferes with Guile + GnuTLS, which means that
sending files while the garbage collector is active is difficult.
These changes try to work around this by disabling the garbage collector just
as the data is being written, then enabling it again; I think this helps to
work around the issue.
|
|
|
|
|
|
| |
This code was copied from Guile, but this seems like a deficiency: I can't
imagine a case where you'd be processing chunked data and just want to
pretend you've reached the end when you haven't.
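The point above can be illustrated with a sketch of reading one chunk's body; `read-chunk-body` is a stand-in, not Guile's actual code:

```scheme
(use-modules (ice-9 binary-ports)
             (rnrs bytevectors))

(define (read-chunk-body port size)
  ;; Read SIZE bytes of a chunk's body, raising an error on premature
  ;; EOF rather than quietly pretending the stream ended.
  (let ((bv (get-bytevector-n port size)))
    (if (or (eof-object? bv)
            (< (bytevector-length bv) size))
        (error "premature end of chunked data, expected bytes:" size)
        bv)))
```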
|
| |
|
|
|
|
| |
As this helps improve throughput.
|
|
|
|
|
|
| |
If the jobs are really quick, I think the one running thread keeps stopping
and starting, and that stops the agent from starting more threads. I think
this change might help.
|
|
|
|
|
| |
I'm seeing mmap(PROT_NONE) failed crashes, and maybe these metrics will help
in understanding what's going on.
|
| |
|
|
|
|
|
|
|
| |
If and when this happens, some procedures will be moved. This change might
avoid things breaking.
https://issues.guix.gnu.org/45409
|
|
|
|
| |
I've seen the process hang on the Hurd, and I think this might help.
|
|
|
|
|
| |
I think the eval-when might help, given that narinfo-references is syntax
rather than a normal procedure.
|
| |
|
|
|
|
|
| |
With respect to narinfo-references, which moved from (guix scripts substitute)
to (guix narinfo).
|
|
|
|
|
|
|
|
|
|
|
| |
With with-exception-handler being called with #:unwind? #f (implicitly). This
breaks Guile internals used by (backtrace) [1], meaning you get a different
exception/backtrace when Guile itself breaks.
This should avoid the "string->number: Wrong type argument in position
1 (expecting string): #f" exception I've been haunted by for the last year.
1: https://debbugs.gnu.org/cgi/bugreport.cgi?bug=46009
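The difference can be sketched as follows: with #:unwind? #f (the default), the handler runs in the dynamic context of the raise, where (backtrace) can trip over Guile's internals; with #:unwind? #t the stack is unwound first. This is an illustrative sketch, not the actual change:

```scheme
(use-modules (ice-9 exceptions))

(with-exception-handler
    (lambda (exn)
      ;; With #:unwind? #t the stack has already been unwound when this
      ;; handler runs, so the original exception survives intact.
      (format (current-error-port) "caught: ~a\n"
              (exception-message exn))
      'handled)
  (lambda ()
    (raise-exception
     (make-exception-with-message "something broke")))
  #:unwind? #t)
```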
|
|
|
|
|
| |
Wait slightly longer between starting new threads, and give more time for new
jobs to arrive for running threads.
|
|
|
|
|
| |
Various changes, hopefully improvements. Inactive threads should stop
promptly, and new threads should start promptly when new builds arrive.
|
|
|
|
| |
So that the number of threads can decrease without new jobs arriving.
|
| |
|
|
|
|
| |
In the agent, as I doubt the new connection caching in Guix is thread safe.
|
|
|
|
| |
Introduced in be5a75ebb5988b87b2392e2113f6590f353dd6cd.
|
|
|
|
|
|
| |
This means that if the agent is only processing 2 builds at a time, it'll only
fetch up to two builds, rather than whatever maximum it would fetch. This
avoids fetching builds unnecessarily.
|
|
|
|
|
|
|
|
|
|
| |
Allow rate limiting the starting of new worker threads in the agent.
Currently, if the number of running jobs is limited by system load, lots of
jobs start, the load goes up, then the jobs gradually finish, and once the
load decreases, lots of jobs start again, and the cycle repeats.
Rate limiting the starting of new threads might help to soften the effect of
all the jobs starting at once.
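A hypothetical sketch of the rate limiting described above: enforce a minimum interval between worker thread starts, so load-limited jobs ramp up gradually. The names and the interval are illustrative, not the actual agent code:

```scheme
(use-modules (ice-9 threads))

(define %min-seconds-between-thread-starts 2)  ; illustrative value
(define %last-thread-start-time 0)

(define (maybe-start-thread! thunk)
  ;; Only start a new worker thread if enough time has passed since the
  ;; last one was started; otherwise do nothing and let a later call try.
  (let ((now (current-time)))
    (when (>= (- now %last-thread-start-time)
              %min-seconds-between-thread-starts)
      (set! %last-thread-start-time now)
      (call-with-new-thread thunk))))
```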
|
|
|
|
|
|
| |
This means that rather than having threads just sleep, the number of running
threads can change. One good effect of this is that bounds can easily be put
on the number of threads (like a lower bound).
|
| |
|
| |
|
| |
|
|
|
|
| |
I think exceptions are happening, but the existing logging isn't working.
|
| |
|
|
|
|
| |
Include the substitute servers that should be providing the substitute.
|
| |
|
|
|
|
| |
As I think this sometimes hangs.
|
|
|
|
|
| |
The previous code was less than ideal; this is simpler and avoids messy
state.
|
| |
|
| |
|