Commit messages
This was a hack to work around reading the entire request/response body into
memory, and is no longer needed.
As this makes it harder to debug issues.
To avoid the process half working.
To help with debugging.
With the "conversion to port encoding failed" (#62590) error, I'm seeing the
"error: when processing" logs, but the backtrace doesn't get logged, maybe
because it's going to the current output port, which might be broken?
Anyway, try sending the backtrace to the current error port, in the hope that
this port is still working.
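A minimal Guile sketch of that idea (hypothetical, not the actual commit diff): capture the stack inside the exception handler and print it to the current error port, on the assumption that the error port is more likely to still be usable than the output port. The procedure name is made up for illustration.

```scheme
;; Hypothetical sketch: send the backtrace to the current error port,
;; which may still work even when the output port is broken.
(define (call-with-backtrace-on-error thunk)
  (with-exception-handler
   (lambda (exn)
     ;; Capture the stack here, inside the handler, so it includes the
     ;; frames that led to the exception.
     (display-backtrace (make-stack #t) (current-error-port))
     (raise-exception exn))
   thunk))
```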
This currently causes an error on the server side and a timeout on the client
side.
The idea with these is to allow the agent to resume waiting for the
coordinator to finish computing the output hash.
If the file doesn't exist.
I'm seeing things like this, which I'm guessing relate to logging failing:
2023-05-14 13:40:44 (ERROR): exception in output hash thread: #<&compound-exception components: (#<&error> #<&origin origin: "put-char"> #<&message message: "conversion to port encoding failed"> #<&irritants irritants: 84> #<&exception-with-kind-and-args kind: encoding-error args: ("put-char" "conversion to port encoding failed" 84 #<output: file 1> #\2)>)>
For debugging purposes.
So if it fails, it doesn't leave things in an inconsistent state.
Even if the connection to the agent has dropped when the upload has completed.
Since this isn't supposed to be set.
As I've seen log-msg here raise exceptions.
As this is useful to observe since it can take a long time for large outputs.
So in case of connection loss, this still happens and the work to compute the
hash isn't wasted.
This should help the coordinator and agents ensure hashes are computed and the
agent finds out when this has happened, even in a situation where the
coordinator is restarted/crashes and the connection between the agents and
coordinator is lost.
I think NGinx might time out reading the response headers for streaming
requests, since the headers probably don't fill the buffer. So force output at
that point.
Also, address the issue with forcing output to the chunked port.
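A hedged sketch of the flush described above, assuming a Guile (web response) style handler; `response` and `client-port` are stand-in names for illustration.

```scheme
;; Hypothetical sketch: flush the port right after the headers are
;; written, so NGinx receives them before the (slow) streaming body.
(write-response response client-port)
(force-output client-port)
```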
As the fibers web server takes care of this.
The header was previously being added twice, so this should fix that.
This will happen when the upload process is interrupted after the data has
been received, but before the hash has been computed, perhaps by the
coordinator restarting.
As the amount of data to upload is known, this is unnecessary complexity and
overhead.
This happens on the server and can be very slow for large outputs, exceeding
the time that NGinx will keep waiting for a response.
To address this, stream progress information back to the client; this should
keep the connection from timing out.
As I think this might want reducing at some point.
As I've found this useful in spotting systems which have problems.
I'm seeing "too many open files" errors, and while I'm not sure if
port-for-each is directly connected, it sounds like it might be worth
instrumenting.
I was mistaken: Guile doesn't handle chunked request bodies.
This reverts commit 07e42953f257b846d44376d998cc7d654214ca17.
The Guile chunked input port now raises an exception when the input is
incomplete.
This means that agents will know whether to submit the outputs of builds, even
if they're restarted.
The data doesn't look particularly useful, and I think the memory problem I
was chasing was down to a broken hook (and poor handling of that).
In case any of these are a factor in the occasional high memory use.
This is useful when interpreting the load information.
As the keys in JSON are strings.
Previously, updating the status was used by the agent just to get back the
list of builds it was already allocated.
Now the status sent is actually stored, along with the 1-minute load average.
In newer versions of Guile Fibers, this would mean that chunked transfer
encoding is used, which is unnecessary for small, quick responses.
When an error occurs while trying to compute the hash, as I hope this
information will help to identify where things have gone wrong.