| Commit message | Author | Age |
|\ |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When we first implemented TLS, we assumed in connection_handle_write
that a TOR_TLS_WANT_WRITE from flush_buf_tls meant that nothing had
been written. But when we moved our buffers to a ring buffer
implementation back in 0.1.0.5-rc (!), we broke that invariant: it's
possible that some bytes have been written but not all of them.
That's bad. It means that if we do a sequence of TLS writes that ends
with a WANT_WRITE, we don't notice that we flushed any bytes, and we
don't (I think) decrement buckets.
Fixes bug 7708; bugfix on 0.1.0.5-rc.
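A minimal sketch of the corrected accounting (helper names here are
assumptions for illustration, not the actual Tor code):
    /* Sketch: measure how much actually left the buffer, even when the
     * TLS layer reports that it wants another write. */
    size_t before = buf_datalen(conn->outbuf);
    int result = flush_buf_tls(conn->tls, conn->outbuf, sz,
                               &conn->outbuf_flushlen);
    size_t n_flushed = before - buf_datalen(conn->outbuf);
    if (result == TOR_TLS_WANT_WRITE) {
      if (n_flushed > 0)
        note_bytes_flushed(conn, n_flushed); /* hypothetical: credit the
                                                bytes so rate-limit buckets
                                                get decremented */
      return 0; /* retry when the socket becomes writable again */
    }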
|
| | |
|
| | |
|
| |
| |
| |
| |
| | |
This informational counter is probably now redundant, but we might as well
keep it consistent, I guess.
|
| |
| |
| |
| | |
It had nothing to do with circuit build times.
|
| | |
|
| |
| |
| |
| | |
The other remaining parameters don't really need range checks.
|
| |
| |
| |
| |
| |
| | |
Also document it better.
Mention this refactoring in the comments for the path state machine.
|
| | |
|
| |
| |
| |
| |
| | |
Also, deprecate the torrc options for the scaling values. It's unlikely anyone
but developers will ever tweak them, even if we provided a single ratio value.
|
| | |
|
| | |
|
| | |
|
|\ \ |
|
| | | |
|
| | | |
|
| | |
| | |
| | |
| | | |
This is the non-automated portion of bug 7599.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This is meant to avoid conflict with the built-in log() function in
math.h. It resolves ticket 7599. First reported by dhill.
This was generated with the following perl script:
#!/usr/bin/perl -w -i -p
s/\blog\(LOG_(ERR|WARN|NOTICE|INFO|DEBUG)\s*,\s*/log_\L$1\(/g;
s/\blog\(/tor_log\(/g;
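For illustration only (not part of the commit), the script turns calls like
the first pair below into the second pair:
    /* before */
    log(LOG_WARN, LD_GENERAL, "Clock jumped.");
    log(severity, LD_NET, "Connection closed.");
    /* after */
    log_warn(LD_GENERAL, "Clock jumped.");
    tor_log(severity, LD_NET, "Connection closed.");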
|
|/ /
| |
| |
| |
| | |
Improve the log message when "Bug/attack: unexpected sendme cell
from client" occurs.
|
| | |
|
| | |
|
|\ \ |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This is allowed by the C standard, which permits an implementation to
represent doubles any way it likes, but in practice we have some code
that assumes that memset() clears doubles in structs. Noticed as part
of the 7802 review; see 8081 for more info.
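A minimal, self-contained version of the check (an illustrative sketch,
not the actual Tor unit test):
    #include <assert.h>
    #include <string.h>

    struct sample { int count; double value; };

    int main(void)
    {
      struct sample s;
      memset(&s, 0, sizeof(s));
      /* Holds on IEEE-754 platforms, where all-zero bytes encode +0.0;
       * the C standard by itself does not guarantee it. */
      assert(s.value == 0.0);
      return 0;
    }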
|
| | |
| | |
| | |
| | | |
Cosmetic tweak on 5956; not in any released tor.
|
| | | |
|
|\ \ \
| |/ /
|/| | |
|
| | |
| | |
| | |
| | |
| | |
| | | |
Instead of hardcoding the minimum fraction of possible paths to 0.6, we
take it from the user, and failing that from the consensus, and
failing that we fall back to 0.6.
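A sketch of the resulting lookup order (names and types are assumptions,
not Tor's actual option or consensus API):
    /* Minimum fraction of paths we must be able to build before we
     * consider ourselves usable: torrc option, else consensus, else 0.6. */
    static double
    min_paths_fraction(double torrc_value, int torrc_is_set,
                       double consensus_value, int consensus_is_set)
    {
      if (torrc_is_set)
        return torrc_value;
      if (consensus_is_set)
        return consensus_value;
      return 0.6; /* previous hardcoded default */
    }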
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Previously we did this based on the fraction of descriptors we had.
But really, we should go by what fraction of paths we're able to build
based on weighted bandwidth, since otherwise a directory guard or two
could make us behave quite oddly.
Implementation for feature 5956.
|
| | |
| | |
| | |
| | |
| | | |
This way we get the usable nodes themselves, so we can feed them into
frac_nodes_with_descriptors
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | | |
This is a minimal refactoring to expose the weighted bandwidth
calculations for each node so I can use them to see what fraction of
nodes, weighted by bandwidth, we have descriptors for.
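Conceptually the fraction becomes a bandwidth-weighted ratio rather than a
raw descriptor count; a sketch with hypothetical types (not the refactored
functions themselves):
    typedef struct { double weighted_bw; int have_descriptor; } node_info_t;

    /* Fraction of path-building capacity covered by nodes whose
     * descriptors we already hold, weighted by bandwidth. */
    static double
    frac_weighted_with_descriptors(const node_info_t *nodes, int n)
    {
      double total = 0.0, present = 0.0;
      for (int i = 0; i < n; ++i) {
        total += nodes[i].weighted_bw;
        if (nodes[i].have_descriptor)
          present += nodes[i].weighted_bw;
      }
      return total > 0.0 ? present / total : 0.0;
    }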
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
When we implemented #5823 and removed v2 directory request info, we
never actually changed the unit tests not to expect it.
Fixes bug 8084; bug not in any released version of Tor.
|
|\ \ \ |
|
| | | | |
|
| | | |
| | | |
| | | |
| | | | |
Improve debug logs and fix a state fencepost error.
|
| | | |
| | | |
| | | |
| | | | |
Make a debug log more informative.
|
| | | |
| | | |
| | | |
| | | | |
We need to let them live long enough to perform the test.
|
| | | |
| | | |
| | | |
| | | | |
Move a log message about scaling to after we scale
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
If any circuits were opened during a scaling event, we were scaling attempts
and successes by different amounts. This leads to rounding error.
The fix is to record how many circuits are in a state that hasn't been fully
counted yet, subtract that count before scaling, and add it back afterwards.
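A sketch of the corrected scaling step (field names are illustrative
assumptions):
    /* Circuits that are open but not yet counted as success or failure
     * are excluded before scaling and restored afterwards, so attempts
     * and successes shrink by the same factor. */
    guard->circ_attempts -= guard->circs_open_not_yet_counted;
    guard->circ_attempts /= scale_factor;
    guard->circ_successes /= scale_factor;
    guard->circ_attempts += guard->circs_open_not_yet_counted;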
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Since they use RELAY_EARLY (which can be seen by all hops on the path),
it's not safe to say they actually count as a successful use.
There are also problems with trying to allow them to finish extending due to
the circuit purpose state machine logic. It is way less complicated (and
possibly more semantically coherent) to simply wait until we actually try to
do something with them before claiming we 'used' them.
Also, we shouldn't call timed out circuits 'used' either, for semantic
consistency.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
An adversary could let the first stream request succeed (i.e. the resolve),
but then tag and time out the remainder (via cell dropping), forcing them
onto new circuits.
Rolling back the state will cause us to probe such circuits, which should
lead to probe failures in the event of such tagging, due either to
unrecognized cells coming in while we wait for the probe, or to the cipher
state getting out of sync in the case of dropped cells.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Path use bias measures how often we actually succeed at using the circuits
we try to use. It is a subset of path bias accounting, but it is computed
as a separate statistic because the rate of client circuit use may vary
depending on use case.
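Roughly, it is a separate success ratio over only the circuits we attempted
to use (hypothetical field names):
    /* use_attempts: circuits we tried to attach a stream to;
     * use_successes: those that then carried traffic successfully. */
    double use_rate = guard->use_attempts > 0
      ? (double)guard->use_successes / guard->use_attempts
      : 1.0; /* no data yet: assume unbiased */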
|
|\ \ \ \ |
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
The fix: Instead of clipping huge/negative times, ignore them as
probably invalid.
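That is, roughly (a sketch with a hypothetical bound, not the actual code):
    if (elapsed_ms < 0 || elapsed_ms > MAX_BELIEVABLE_ELAPSED_MS)
      return; /* probably a clock jump or overflow: drop the sample
                 instead of clamping it into the statistics */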
|
| | | | | |
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
This makes removing items from the middle of the queue into an O(1)
operation, which could prove important as we let onionqueues grow
longer.
Doing this actually makes the code slightly smaller, too.
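For instance, an intrusively linked doubly linked list gives O(1) unlink of
an arbitrary entry; a sketch using sys/queue.h-style macros (not necessarily
the exact structures the code uses):
    #include <sys/queue.h>

    typedef struct onion_queue_t {
      TAILQ_ENTRY(onion_queue_t) next;   /* embedded prev/next links */
      /* ... the queued onionskin itself ... */
    } onion_queue_t;

    static TAILQ_HEAD(, onion_queue_t) ol_list =
      TAILQ_HEAD_INITIALIZER(ol_list);

    static void
    onion_queue_remove(onion_queue_t *victim)
    {
      TAILQ_REMOVE(&ol_list, victim, next); /* no list walk needed: O(1) */
    }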
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
The right way to set "MaxOnionsPending" was to adjust it until the
processing delay was appropriate. So instead, let's measure how long
it takes to process onionskins (sampling them once we have a big
number), and then limit the queue based on its expected time to
finish.
This change is extra-necessary for ntor, since there is no longer a
reasonable way to set MaxOnionsPending without knowing what mix of
onionskins you'll get.
This patch also reserves 1/3 of the onionskin spots for ntor
handshakes, on the theory that TAP handshakes shouldn't be allowed to
starve their speedier cousins. We can change this later if need be.
Resolves 7291.
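A sketch of the admission check implied above (constants and names are
illustrative assumptions):
    #define MAX_EXPECTED_QUEUE_WAIT_MS 1750  /* illustrative threshold */

    /* Accept a new onionskin only if the queue's expected drain time,
     * estimated from measured per-onionskin processing cost, stays small.
     * A share of the capacity can be reserved for ntor handshakes so
     * slower TAP handshakes cannot starve them. */
    static int
    have_room_for_onionskin(unsigned n_queued, double est_ms_per_onionskin)
    {
      double expected_wait_ms = n_queued * est_ms_per_onionskin;
      return expected_wait_ms <= MAX_EXPECTED_QUEUE_WAIT_MS;
    }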
|
|\ \ \ \ \ |
|