| Commit message | Author | Age |
| |
From looking at what curl does, it seems that chunked requests should end in
"0\r\n\r\n". The requests being sent before ended in just "0\r\n". This
happened to work with the server, because it wasn't expecting the final
"\r\n", and would crash if it was included, reading it as the start of the
next request.
To work around this, adjust both the sending and receiving of the requests.
Send the "\r\n" after the chunked data when making requests, and use a patched
version of make-chunked-input-port that reads two more bytes after it has
finished reading the last chunk.
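A sketch of the framing this describes, following RFC 7230 section 4.1: each chunk is its hex size, CRLF, the data, CRLF, and the message ends with a zero-size chunk followed by one more CRLF. Omitting that final CRLF produces exactly the truncated "0\r\n" ending mentioned above.

```python
# Sketch of HTTP/1.1 chunked transfer coding (RFC 7230, section 4.1).
# Each chunk is "<hex size>\r\n<data>\r\n"; the body ends with the
# zero-size chunk "0\r\n" followed by a final "\r\n".

def encode_chunked(chunks):
    out = b""
    for chunk in chunks:
        out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    # Without this trailing "\r\n", a strict peer will treat the next
    # bytes on the connection as part of this message.
    out += b"0\r\n\r\n"
    return out

body = encode_chunked([b"hello", b"world"])
```

A reader of the stream correspondingly has to consume those two extra bytes after the last chunk, which is the change described for make-chunked-input-port.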
|
|
|
|
| |
This is probably sensible.
|
| |
|
|
|
|
| |
To produce Prometheus-style metrics for the counts of various things.
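The Prometheus text exposition format is line-oriented, so rendering counts is straightforward. A minimal sketch; the metric and label names here are hypothetical, not necessarily what the coordinator exposes:

```python
# Render a mapping of label value -> count in the Prometheus text
# exposition format. Metric/label names are illustrative only.

def render_metrics(counts, metric="builds_total"):
    lines = ["# TYPE %s counter" % metric]
    for label, value in sorted(counts.items()):
        lines.append('%s{status="%s"} %d' % (metric, label, value))
    return "\n".join(lines) + "\n"

output = render_metrics({"succeeded": 10, "failed": 2})
```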
|
| |
|
|
|
|
| |
So that you don't have to just use the daemon's defaults.
|
| |
|
| |
|
|
|
|
|
| |
I don't think the use of when was really working; the procedure was returning
<undefined> when it shouldn't have been.
|
| |
|
|
|
|
| |
So that it includes builds that haven't been processed yet in the results.
|
|
|
|
|
|
|
| |
Because the allocation plan can be replaced with one that's already out of
date, including allocating builds that have already been completed, guard
against this here. This is a compromise to avoid having to block operations
while planning the allocation of builds.
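The guard amounts to skipping plan entries that have finished since the plan was computed. A sketch under assumed names (the real data structures are not shown here):

```python
# Skip stale entries when handing out a build from the allocation plan.
# "plan" and "completed_ids" are hypothetical stand-ins for the
# coordinator's real state.

def next_build_from_plan(plan, completed_ids):
    for build_id in plan:
        if build_id in completed_ids:
            continue  # the plan is out of date for this entry; ignore it
        return build_id
    return None

build = next_build_from_plan([1, 2, 3], completed_ids={1, 2})
```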
|
| |
|
|
|
|
|
| |
This isn't particularly helpful, especially as the agent process now handles
the log file.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Allow specifying build priority, although the allocator currently doesn't use
this. Add --defer-allocation to allow inserting lots of builds without
spending time re-computing the allocation for each one. Add
--ensure-all-related-derivations-have-builds to make it easy to have a
derivation, and all related derivations, built at least once. Add
--ignore-if-build-for-derivation-exists to make it easy to avoid building
derivations again when that isn't the intention.
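The benefit of deferring allocation can be sketched as batching: run the expensive allocation step once after inserting many builds, instead of once per build. The class and method names below are hypothetical:

```python
# Batching build submission: with defer_allocation=True the allocation
# plan is recomputed once at the end rather than after every insert.

class Coordinator:
    def __init__(self):
        self.builds = []
        self.allocations_computed = 0

    def submit_build(self, derivation, defer_allocation=False):
        self.builds.append(derivation)
        if not defer_allocation:
            self.compute_allocation()

    def compute_allocation(self):
        self.allocations_computed += 1  # stand-in for the real planner

c = Coordinator()
for drv in ["a.drv", "b.drv", "c.drv"]:
    c.submit_build(drv, defer_allocation=True)
c.compute_allocation()  # one plan covering all three builds
```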
|
|
|
|
|
| |
Useful for finding out what derivations need to be built to ensure all related
derivations have been built by the build coordinator.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
This will hopefully make it easier to create narinfo files for the outputs. I
think all of this information can be derived from the nar, but I'm not sure
how to do that, so maybe this can eventually be removed.
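Two of the narinfo fields clearly are derivable from the nar itself: its size and its hash. A sketch, using the field names from the narinfo format served by Guix and Nix substitute servers (shown with a hex digest for simplicity; real narinfos use a base32 encoding):

```python
import hashlib

# Derive the narinfo fields that come directly from the nar bytes.
# The helper itself is hypothetical; only the field names are real.

def narinfo_fields(nar_bytes):
    digest = hashlib.sha256(nar_bytes).hexdigest()
    return {
        "NarHash": "sha256:" + digest,
        "NarSize": len(nar_bytes),
    }

fields = narinfo_fields(b"example nar contents")
```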
|
|
|
|
|
|
|
| |
That submits new build jobs to build these missing inputs if appropriate.
This means that you can tell the coordinator to build something, and it will
automatically attempt to build the dependencies if they're missing.
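The core of this is a handler that, on a missing-inputs report, submits a build for each missing input it hasn't already submitted. A minimal sketch with hypothetical names:

```python
# When an agent reports missing inputs, submit builds for them so the
# dependencies eventually become available. Duplicate submissions are
# avoided by tracking what has already been submitted.

def handle_missing_inputs(missing, submitted, submit):
    for derivation in missing:
        if derivation not in submitted:
            submit(derivation)
            submitted.add(derivation)

submitted = set()
jobs = []
handle_missing_inputs(["dep1.drv", "dep2.drv"], submitted, jobs.append)
handle_missing_inputs(["dep1.drv"], submitted, jobs.append)
# dep1.drv is only submitted once
```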
|
|
|
|
|
| |
The put-message operation blocks, which doesn't work for triggering the
allocation process.
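An analogous pattern in Python illustrates the problem: a blocking put is a poor fit for "wake the allocator if it isn't already pending". A size-1 queue with a non-blocking put coalesces triggers instead of blocking the caller:

```python
import queue

# A size-1 queue as an allocation trigger: a second trigger while one
# is already pending is simply dropped, and the caller never blocks.

trigger = queue.Queue(maxsize=1)

def request_allocation():
    try:
        trigger.put_nowait(True)   # wake the allocator
    except queue.Full:
        pass                       # a trigger is already pending

request_allocation()
request_allocation()  # returns immediately instead of blocking
```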
|
|
|
|
| |
The last time the agent checked.
|
|
|
|
| |
This might help with using fibers for other things.
|
|
|
|
|
| |
This seems to avoid all the warnings, and fix the broken merge-generics
behaviour.
|
| |
|
|
|
|
| |
To roll back when they fail.
|
|
|
|
| |
They'll probably help.
|
|
|
|
| |
This allows configurable code to be executed when builds succeed or fail.
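One common shape for such hooks is a table mapping event names to procedures, looked up and run when the event fires. A sketch; the event names are hypothetical:

```python
# A hook table: configurable procedures run when builds succeed or fail.

results = []

hooks = {
    "build-success": lambda build: results.append(("ok", build)),
    "build-failure": lambda build: results.append(("failed", build)),
}

def run_hook(event, build):
    handler = hooks.get(event)
    if handler is not None:   # unconfigured events are simply ignored
        handler(build)

run_hook("build-success", "app.drv")
run_hook("build-failure", "lib.drv")
```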
|
|
|
|
| |
So that it reports issues to the coordinator, rather than just crashing.
|
| |
|
|
|
|
|
|
|
|
|
| |
Move some of the code around, and trigger allocating builds via a thread if an
agent fails to set up for a build, and when a build succeeds or fails.
This is important, as some setup failures can be handled by the build
allocator, and a build finishing may unblock other builds waiting for the
outputs it generates.
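The thread-based trigger can be sketched as firing the allocation off the event handler's own thread so the handler returns promptly. Names are hypothetical:

```python
import threading

# Recompute the allocation in a background thread when a build finishes,
# so the event handler itself doesn't block on the planner.

done = threading.Event()

def allocate_builds():
    done.set()  # stand-in for recomputing the allocation plan

def on_build_finished(build):
    threading.Thread(target=allocate_builds).start()

on_build_finished("app.drv")
done.wait(timeout=5)
```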
|
|
|
|
| |
Previously, it would error trying to insert 0 records.
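The fix amounts to an early return before building the statement, since an INSERT with zero value groups is malformed SQL. A sketch with hypothetical names:

```python
# Guard a bulk INSERT against an empty record list: "INSERT ... VALUES"
# with no value groups is a syntax error, so do nothing instead.

def insert_records(execute, table, records):
    if not records:
        return  # nothing to insert; avoid a malformed statement
    placeholders = ", ".join(["(?)"] * len(records))
    execute("INSERT INTO %s VALUES %s" % (table, placeholders), records)

statements = []
insert_records(lambda sql, args: statements.append(sql), "builds", [])
insert_records(lambda sql, args: statements.append(sql), "builds", ["a", "b"])
```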
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
Perform a build, then query for the next one.
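The agent loop described above can be sketched as: finish the current build before asking the coordinator for the next one. All names here are hypothetical:

```python
# Agent loop: perform a build, then query for the next one, until the
# coordinator has nothing more to hand out.

def agent_loop(fetch_next_build, perform_build):
    build = fetch_next_build()
    while build is not None:
        perform_build(build)
        build = fetch_next_build()  # query only after the build completes

pending = ["a.drv", "b.drv"]
done = []
agent_loop(lambda: pending.pop(0) if pending else None, done.append)
```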
|
|
|
|
|
|
|
| |
To avoid potentially wasting time. Instead, report the missing inputs to the
coordinator as soon as possible. The build may be scheduled on a different
agent, so it might not be necessary to download the inputs which do have
substitutes available.
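On the agent side this means checking every input up front and reporting the missing set before fetching anything. A sketch with hypothetical names:

```python
# Check all inputs before downloading anything; if any are missing,
# report them to the coordinator immediately rather than first spending
# time fetching substitutes for the inputs that are available.

def check_inputs(inputs, available, report_missing):
    missing = [i for i in inputs if i not in available]
    if missing:
        report_missing(missing)  # the build may be rescheduled elsewhere
        return False
    return True

reported = []
ok = check_inputs(["a", "b", "c"], available={"a"}, report_missing=reported.append)
```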
|
| |
|
| |
|
|
|
|
| |
Don't start the build if there are missing inputs.
|
|
|
|
| |
So that it can read the derivation, and store the details in the database.
|