Our old code correctly called bufferevent_flush() on linked
connections to make sure that the other side got an EOF event... but
it didn't call bufferevent_flush() when the connection didn't have
hold_open_until_flushed set. Directory connections don't use
hold_open_until_flushed, so the linked exit connection never got an
EOF, never sent a RELAY_END cell to the client, and the client never
concluded that data had arrived.

The solution is to make the bufferevent_flush() code apply to _all_
closing linked conns whose partner is not already marked for close.

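A minimal sketch of the intended control flow, assuming libevent's
bufferevent_flush() API and Tor-style connection fields; the type and
field names here are illustrative, not the actual patch:

    #include <event2/event.h>        /* EV_WRITE */
    #include <event2/bufferevent.h>  /* bufferevent_flush, BEV_FINISHED */

    /* Illustrative stand-in for the relevant parts of a linked conn. */
    typedef struct connection_t {
      struct connection_t *linked_conn;
      struct bufferevent *bufev;
      unsigned marked_for_close : 1;
    } connection_t;

    /* Old behavior: flush only when hold_open_until_flushed was set, so
     * directory conns (which don't use that flag) never delivered an EOF.
     * New behavior: flush every closing linked conn whose partner is not
     * already marked for close, so the partner always sees the EOF. */
    static void
    flush_closing_linked_conn(connection_t *conn)
    {
      if (conn->linked_conn && !conn->linked_conn->marked_for_close)
        bufferevent_flush(conn->bufev, EV_WRITE, BEV_FINISHED);
    }
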
This reverts part of commit a0c1c2ac012fded493c0d8c49fe57e56373b061f.

If we don't, we will (among other bad things) never update
lastread/lastwritten, and so flood the network with keepalives.

First start of a fix for bug2001, but my test network still isn't
working: the client and the server send each other VERSIONS cells,
but never notice that they got them.

May help with tracking down bug #2022

Conflicts:
	src/common/util.c

When picking bridges (or other nodes without a consensus entry, and
thus no bandwidth weights) we shouldn't just trust the node's
descriptor. So far we believed anything between 0 and 10MB/s, where 0
would mean that a node doesn't get any use from us unless it is our
only one, and 10MB/s would be a quite significant weight. To make this
situation better, we now believe weights in the range from 20kB/s to
100kB/s. This should allow new bridges to get used more quickly, and
means that it will be harder for bridges to see almost all our traffic.

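A minimal sketch of that clamping, using the 20kB/s-100kB/s bounds
from this message; the constant and function names are invented for
illustration, and whether "kB" means 1000 or 1024 bytes here is an
assumption:

    #include <stdint.h>

    /* Hypothetical names; bounds taken from the commit message. */
    #define NO_CONSENSUS_BW_MIN (20*1024)   /* 20 kB/s */
    #define NO_CONSENSUS_BW_MAX (100*1024)  /* 100 kB/s */

    /* Clamp a descriptor-reported bandwidth for a node that has no
     * consensus weight (e.g. a bridge) into a believable range. */
    static uint32_t
    believable_bandwidth(uint32_t descriptor_bw)
    {
      if (descriptor_bw < NO_CONSENSUS_BW_MIN)
        return NO_CONSENSUS_BW_MIN;
      if (descriptor_bw > NO_CONSENSUS_BW_MAX)
        return NO_CONSENSUS_BW_MAX;
      return descriptor_bw;
    }
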
This won't change any behavior, since it will still be rounded back
up to 2 seconds, but should reduce the chances of some extra warns.

In the first 100 circuits, our timeout_ms and close_ms
are the same. So we shouldn't transition circuits to purpose
CIRCUIT_PURPOSE_C_MEASURE_TIMEOUT, since they will just time out again
next time we check.

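A sketch of the resulting guard, under the assumption that measurement
only makes sense once timeout_ms and close_ms have diverged; the
threshold constant and function name are made up:

    /* The 100-circuit threshold is from the commit message. Until we've
     * observed enough circuits, timeout_ms and close_ms are equal, so a
     * circuit re-marked for measurement would just hit the same timeout
     * again on the next check. */
    #define MIN_CIRCS_BEFORE_MEASURING 100

    static int
    should_mark_for_measurement(int circs_observed,
                                double timeout_ms, double close_ms)
    {
      if (circs_observed < MIN_CIRCS_BEFORE_MEASURING)
        return 0;
      return timeout_ms < close_ms;
    }
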
Also, cap the measurement timeout to 2X the max we've seen.

We really should ignore any timeouts that have *no* network activity for their
entire measured lifetime, now that we have the 95th percentile measurement
changes. Usually this is up to a minute, even on fast connections.

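One way to express that filter, as a sketch; the variable names are
assumptions, and "network activity" stands in for whatever liveness
timestamp the code actually tracks:

    #include <time.h>

    /* If nothing was read from the network at any point between
     * launching the circuit and its timeout, the measurement reflects a
     * dead network rather than circuit build time, so discard it. */
    static int
    timeout_is_measurable(time_t circ_launched, time_t last_network_read)
    {
      return last_network_read > circ_launched;
    }
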
If we really want all this complexity for these stages here, we need to handle
it better for people with large timeouts. It should probably go away, though.

Rechecking the timeout condition was foolish, because it is checked on the
same codepath. It was also wrong, because we didn't round.

Also, the liveness check itself should be <, and not <=, because we only have
1-second resolution.

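A sketch of why < is the right comparison at 1-second granularity;
the names are illustrative:

    #include <time.h>

    /* With time_t at 1-second resolution, an event stamped in the same
     * second as the cutoff may have occurred just before or just after
     * it. Only a strictly earlier timestamp is definitely stale, so the
     * liveness check must use <, not <=. */
    static int
    definitely_stale(time_t last_activity, time_t cutoff)
    {
      return last_activity < cutoff;
    }
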
We now differentiate between timeouts and cutoffs by the REASON string and
the PURPOSE string.

Use 4/3 of this timeout value for 4-hop circuits, and use half of it for
cannibalized circuits.

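A sketch of that scaling, with invented names; whether the two
adjustments ever combine is an assumption the message doesn't settle:

    /* Scale the learned build timeout for other circuit shapes: 4-hop
     * circuits get 4/3 of the base value, and cannibalized circuits
     * (already built, only being extended) get half. */
    static double
    scaled_timeout_ms(double base_timeout_ms, int hops, int cannibalized)
    {
      double t = base_timeout_ms;
      if (hops == 4)
        t = t * 4.0 / 3.0;
      if (cannibalized)
        t = t / 2.0;
      return t;
    }
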
Specifically, a circ attempt that we'd launched while the network was
down could time out after we've marked our entrynodes up, marking them
back down again. The fix is to annotate as bad the OR conns that were
around before we did the retry, so if a circuit that's attached to them
times out we don't do anything about it.

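A minimal sketch of that annotation, assuming Tor-style fields; the
struct and flag names here are illustrative, not the actual patch:

    #include <time.h>

    /* Stand-in for the relevant bits of an OR connection. */
    typedef struct or_conn_t {
      time_t created;                      /* when the conn was opened */
      unsigned is_bad_for_new_circs : 1;   /* set at retry time */
    } or_conn_t;

    /* When we retry after the network comes back, mark conns that
     * predate the retry, so a timeout on a circuit attached to one of
     * them won't mark our entry nodes down again. */
    static void
    annotate_if_preexisting(or_conn_t *conn, time_t retry_started_at)
    {
      if (conn->created < retry_started_at)
        conn->is_bad_for_new_circs = 1;
    }
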
Otherwise we'd never set have_minimum_dir_info to false, so the
"optimistic retry" would never trigger.

We used to mark all our known bridges up when they're all down and we
get a new socks request. Now do that when we've set EntryNodes too.

This is needed for IOCP, since telling the IOCP backend about all
your CPUs is a good idea. It'll also come in handy with asn's
multithreaded crypto stuff, and for people who run servers without
reading the manual.

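The message implies a way to count CPUs; a portable sketch under the
assumption that sysconf() and GetSystemInfo() are available on the
respective platforms (the function name is invented):

    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <unistd.h>
    #endif

    /* Best-effort CPU count, falling back to 1 if detection fails. */
    static int
    count_cpus(void)
    {
    #ifdef _WIN32
      SYSTEM_INFO info;
      GetSystemInfo(&info);
      return (int)info.dwNumberOfProcessors;
    #else
      long n = sysconf(_SC_NPROCESSORS_ONLN);
      return n > 0 ? (int)n : 1;
    #endif
    }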