Commit messages

- This was a hack to work around reading the entire request/response body into memory, and is no longer needed.
- In most places at least.
- And change #:ignore to better reflect ignoring the exception.
- As this makes it harder to debug issues.
- To avoid the process half working.
- To help with debugging.
- With the "conversion to port encoding failed" (#62590) error, I'm seeing the "error: when processing" logs, but the backtrace doesn't get logged, maybe because it's going to the current output port, which might be broken? Anyway, try sending the backtrace to the current error port, in the hope that this port is still working.
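A minimal sketch of the idea, assuming a Guile 3 exception handler wrapped around the request processing (`process-request` is a hypothetical thunk; `display-backtrace` and `make-stack` are the standard Guile procedures):

```scheme
;; Sketch: capture the stack inside the handler (no unwinding yet, so
;; the error context is still live) and write the backtrace to the
;; current error port, which may still work even when the current
;; output port is broken.
(with-exception-handler
 (lambda (exn)
   (display-backtrace (make-stack #t) (current-error-port))
   (raise-exception exn))
 process-request                ; hypothetical request-handling thunk
 #:unwind? #f)
```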
- Just log the line instead.
- This currently causes an error on the server side and a timeout on the client side.
- The idea with these is to allow the agent to resume waiting for the coordinator to finish computing the output hash.
- If the file doesn't exist.
- I'm seeing things like this, which I'm guessing relate to logging failing:

      2023-05-14 13:40:44 (ERROR): exception in output hash thread: #<&compound-exception components: (#<&error> #<&origin origin: "put-char"> #<&message message: "conversion to port encoding failed"> #<&irritants irritants: 84> #<&exception-with-kind-and-args kind: encoding-error args: ("put-char" "conversion to port encoding failed" 84 #<output: file 1> #\2)>)>
- For debugging purposes.
- So if it fails, it doesn't leave things in an inconsistent state.
- Even if the connection to the agent has dropped when the upload has completed.
- Since this isn't supposed to be set.
- This commit should correct the progress reporting on partial uploads.
- As I've seen log-msg here raise exceptions.
- As this is useful to observe, since it can take a long time for large outputs.
- Otherwise it looks like the upload should finish, but hasn't.
- Don't retry status updates many times, since the information will be more out of date each time.
- So in case of connection loss, this still happens and the work to compute the hash isn't wasted.
- This should help the coordinator and agents ensure hashes are computed, and help the agent find out when this has happened, even in a situation where the coordinator is restarted or crashes and the connection between the agents and coordinator is lost.
- I think NGINX might time out reading the response headers for streaming requests, since the headers probably don't fill the buffer. So force output at that point. Also, address the issue with forcing output to the chunked port.
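A sketch of the fix being described; `client-port` is an assumed name for the connection port, but `force-output` is the standard Guile procedure for flushing a buffered output port:

```scheme
;; After writing the response headers, flush the port so the bytes
;; reach NGINX immediately, rather than sitting in the output buffer
;; until enough body data accumulates to fill it.
(write-response response client-port)  ; (web response) header writer
(force-output client-port)
```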
- As the fibers web server takes care of this. It's currently adding the header twice, so this should be fixed.
- Make use of the coordinator trying to avoid the connection timing out. This should improve things when the coordinator is restarted or crashes.
- This will happen when the upload process is interrupted after the data has been received, but before the hash has been computed, perhaps by the coordinator restarting.
- As the amount of data to upload is known, this is unnecessary complexity and overhead.
- This happens on the server and can be very slow for large outputs, exceeding the time that NGINX will keep waiting for a response. To address this, stream progress information back to the client; this should keep the connection from timing out.
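One way this could look, assuming a streaming response port and hypothetical `hash-done?`/`bytes-processed` procedures (names invented for illustration):

```scheme
;; Periodically write a progress line and flush it, so the connection
;; carries traffic while the (potentially slow) hashing runs and NGINX
;; doesn't hit its proxy read timeout.
(let loop ()
  (unless (hash-done?)                              ; hypothetical
    (simple-format port "~a bytes processed\n" (bytes-processed))
    (force-output port)
    (sleep 5)
    (loop)))
```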
- As I think this might want reducing at some point.
- As I've found this useful in spotting systems which have problems.
- Use the dump-port* progress reporter instead.
- I'm seeing "too many open files" errors, and while I'm not sure if port-for-each is directly connected, it sounds like it might be worth instrumenting.
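Guile's `port-for-each` applies a procedure to every open port, so a simple instrumentation sketch is to count them periodically:

```scheme
;; Count the currently open ports; a steadily growing number over time
;; suggests a port leak behind "too many open files" errors.
(define (open-port-count)
  (let ((count 0))
    (port-for-each (lambda (port) (set! count (1+ count))))
    count))
```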
- I was mistaken, Guile doesn't handle chunked request bodies.
  This reverts commit 07e42953f257b846d44376d998cc7d654214ca17.
- The Guile chunked input port now raises an exception when the input is incomplete.