                        Guide to Hacking Tor

(As of 8 October 2003, this was all accurate.  If you're reading this in
the distant future, stuff may have changed.)

0. Intro and required reading

  Onion Routing is still very much in its development stages. This document
  aims to get you started in the right direction if you want to understand
  the code, add features, fix bugs, etc.

  Read the README file first, so you can get familiar with the basics of
  installing and running an onion router.

  Then, skim some of the introductory materials in tor-design.pdf,
  tor-spec.txt, and the Tor FAQ to learn more about how the Tor protocol
  is supposed to work.  This document will assume you know about Cells,
  Circuits, Streams, Connections, Onion Routers, and Onion Proxies.

1. Code organization

1.1. The modules

  The code is divided into two directories: ./src/common and ./src/or.
  The "common" directory contains general purpose utility functions not
  specific to onion routing.  The "or" directory implements all
  onion-routing and onion-proxy specific functionality.

  Files in ./src/common:

     aes.[ch] -- Implements the AES cipher (with 128-bit keys and blocks),
        and a counter-mode stream cipher on top of AES.  This code is
        taken from the main Rijndael distribution.  (We include this
        because many people are running older versions of OpenSSL without
        AES support.)

     crypto.[ch] -- Wrapper functions to present a consistent interface to
        public-key and symmetric cryptography operations from OpenSSL.

     fakepoll.[ch] -- Used on systems that don't have a poll() system call;
        reimplements poll() using the select() system call.  (A sketch of
        the idea appears after this file list.)

     log.[ch] -- Tor's logging subsystem.

     test.h -- Macros used by unit tests.

     torint.h -- Provides missing [u]int*_t types for environments that
        don't have stdint.h.

     tortls.[ch] -- Wrapper functions to present a consistent interface to
        TLS, SSL, and X.509 functions from OpenSSL.

     util.[ch] -- Miscellaneous portability and convenience functions.
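
  (As a rough sketch of the idea behind fakepoll -- not the actual
  implementation -- poll() can be emulated on top of select() along the
  following lines.  The structure, constants, and function name below are
  illustrative only:)

      #include <sys/select.h>
      #include <sys/time.h>

      /* A minimal pollfd look-alike, for systems without <poll.h>. */
      struct fake_pollfd {
        int fd;
        short events;    /* requested: FP_IN and/or FP_OUT */
        short revents;   /* filled in: which events fired */
      };
      #define FP_IN  0x01
      #define FP_OUT 0x02

      /* Wait up to timeout_ms milliseconds (forever if negative) for
       * events on fds[0..nfds-1].  Returns the number of entries with
       * nonzero revents, 0 on timeout, or -1 on error. */
      static int
      fake_poll(struct fake_pollfd *fds, int nfds, int timeout_ms)
      {
        fd_set rset, wset;
        struct timeval tv;
        int i, maxfd = -1, n, count = 0;

        FD_ZERO(&rset);
        FD_ZERO(&wset);
        for (i = 0; i < nfds; ++i) {
          fds[i].revents = 0;
          if (fds[i].events & FP_IN)
            FD_SET(fds[i].fd, &rset);
          if (fds[i].events & FP_OUT)
            FD_SET(fds[i].fd, &wset);
          if (fds[i].fd > maxfd)
            maxfd = fds[i].fd;
        }
        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;
        n = select(maxfd + 1, &rset, &wset, NULL,
                   timeout_ms < 0 ? NULL : &tv);
        if (n <= 0)
          return n;
        for (i = 0; i < nfds; ++i) {
          if (FD_ISSET(fds[i].fd, &rset))
            fds[i].revents |= FP_IN;
          if (FD_ISSET(fds[i].fd, &wset))
            fds[i].revents |= FP_OUT;
          if (fds[i].revents)
            ++count;
        }
        return count;
      }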

  Files in ./src/or:
  
   [General-purpose modules]

     or.h -- Common header file: include everything, define everything.

     buffers.c -- Implements a generic buffer interface.  Buffers are
        fairly opaque string holders that can be filled from, or flushed
        to, memory, file descriptors, or TLS connections.

        Also implements parsing functions to read HTTP and SOCKS commands
        from buffers.

     tree.h -- A splay tree implementation by Niels Provos.  Used by
        dns.c for DNS caching at exits, and by connection_edge.c for DNS
        caching at clients.

     config.c -- Code to parse and validate the configuration file.

   [Background processing modules]

     cpuworker.c -- Implements a farm of 'CPU worker' processes to perform
        CPU-intensive tasks in the background, so as not to interrupt the
        onion router.  (OR only)

     dns.c -- Implements a farm of 'DNS worker' processes to perform DNS
        lookups for onion routers and cache the results.  [This needs to
        be done in the background because of the lack of a good,
        ubiquitous asynchronous DNS implementation.] (OR only)

   [Directory-related functionality.]

     directory.c -- Code to send and fetch directories and router
        descriptors via HTTP.  Directory servers use dirserv.c to generate
        the results; clients use routers.c to parse them.

     dirserv.c -- Code to manage directory contents and generate
        directories. [Directory server only] 

     routers.c -- Code to parse directories and router descriptors; and to
        generate a router descriptor corresponding to this OR's
        capabilities.  Also presents some high-level interfaces for
        managing an OR or OP's view of the directory.

   [Circuit-related modules.]

     circuit.c -- Code to create circuits, manage circuits, and route
        relay cells along circuits.

     onion.c -- Code to generate and respond to "onion skins".

   [Core protocol implementation.]

     connection.c -- Code used in common by all connection types.  See
        1.2. below for more general information about connections.

     connection_edge.c -- Code used only by edge connections.

     command.c -- Code to handle specific cell types.

     connection_or.c -- Code to implement cell-speaking connections.

   [Toplevel modules.]

     main.c -- Toplevel module.  Initializes keys, handles signals,
        multiplexes between connections, implements main loop, and drives
        scheduled events.

     tor_main.c -- Stub module containing a main() function.  Allows the
        unit test binary to link against main.c.

   [Unit tests]

     test.c -- Contains unit tests for many pieces of the lower level Tor
        modules.

1.2. All about connections

  All sockets in Tor are handled as different types of nonblocking
  'connections'.  (What the Tor spec calls a "Connection", the code refers
  to as a "Cell-speaking" or "OR" connection.)
  
  Connections are implemented by the connection_t struct, defined in or.h.
  Not every kind of connection uses all the fields in connection_t; see 
  the comments in or.h and the assertions in assert_connection_ok() for
  more information.

  Every connection has a type and a state.  Connections never change their
  type, but can go through many state changes in their lifetime.

  The connection types break down as follows:

     [Cell-speaking connections]
       CONN_TYPE_OR -- A bidirectional TLS connection transmitting a
          sequence of cells.  May be from an OR to an OR, or from an OP to
          an OR.

     [Edge connections]
       CONN_TYPE_EXIT -- A TCP connection from an onion router to a
          Stream's destination. [OR only]
       CONN_TYPE_AP -- A SOCKS proxy connection from the end user
          application to the onion proxy.  [OP only]

     [Listeners]
       CONN_TYPE_OR_LISTENER [OR only]
       CONN_TYPE_AP_LISTENER [OP only]
       CONN_TYPE_DIR_LISTENER [Directory server only]
          -- Bound network sockets, waiting for incoming connections.

     [Internal]
       CONN_TYPE_DNSWORKER -- Connection from the main process to a DNS
          worker process. [OR only]

       CONN_TYPE_CPUWORKER -- Connection from the main process to a CPU
          worker process. [OR only]

   Connection states are documented in or.h.

   Every connection has an associated input buffer and an output buffer.
   (Listeners don't use them.)  For non-listener connections, incoming
   data is appended to conn->inbuf, and outgoing data is taken from the
   front of conn->outbuf.  Connections differ primarily in the functions
   called to fill and drain these buffers.
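
   As a much-simplified sketch of that shape (the real connection_t in
   or.h has many more fields, and the buffer type shown here is only
   schematic):

      #include <stdint.h>

      typedef struct connection_t {
        uint8_t type;            /* CONN_TYPE_*; never changes */
        uint8_t state;           /* current point in this type's state machine */
        int s;                   /* the underlying socket */
        struct buf_t *inbuf;     /* incoming data is appended here */
        struct buf_t *outbuf;    /* outgoing data is flushed from here */
        int marked_for_close;    /* close at the end of this main-loop pass */
        /* ... many type-specific fields ... */
      } connection_t;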

1.3. All about circuits.

   A circuit_t structure fills two roles.  First, a circuit_t links two
   connections together: either an edge connection and an OR connection,
   or two OR connections.  (When joined to an OR connection, a circuit_t
   affects only cells sent to a particular circID on that connection.  When
   joined to an edge connection, a circuit_t affects all data.)

   Second, a circuit_t holds the cipher keys and state for sending data
   along a given circuit.  At the OP, it has a sequence of ciphers, each
   of which is shared with a single OR along the circuit.  Separate
   ciphers are used for data going "forward" (away from the OP) and
   "backward" (towards the OP).  At the OR, a circuit has only two stream
   ciphers: one for data going forward, and one for data going backward.
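
   Conceptually, sending a cell "forward" from the OP means applying each
   hop's forward cipher in turn, outermost layer last so that the first
   hop strips its layer first.  A hypothetical sketch (the type and
   function names are illustrative, not the real crypto.c interface):

      #include <stddef.h>

      typedef struct crypt_state_t crypt_state_t;   /* per-hop stream cipher */
      void stream_cipher_crypt(crypt_state_t *c, char *data, size_t len);

      /* Layer the payload of an outgoing relay cell for an n_hops-hop
       * circuit: the last hop's layer goes on first (innermost), the
       * first hop's layer last (outermost). */
      static void
      circuit_layer_forward(crypt_state_t **f_crypto, int n_hops,
                            char *payload, size_t len)
      {
        int i;
        for (i = n_hops - 1; i >= 0; --i)
          stream_cipher_crypt(f_crypto[i], payload, len);
      }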

1.4. Asynchronous IO and the main loop.

   Tor uses the poll(2) system call (or it wraps select(2) to act like
   poll, if poll is not available) to handle nonblocking (asynchronous)
   IO.  If you're not familiar with nonblocking IO, check out the links
   at the end of this document.

   All asynchronous logic is handled in main.c.  The functions
   'connection_add', 'connection_set_poll_socket', and 'connection_remove'
   manage an array of connection_t*, and keep it in sync with the array of
   struct pollfd required by poll(2).  (This array of connection_t* is
   accessible via get_connection_array, but users should generally call
   one of the 'connection_get_by_*' functions in connection.c to look up
   individual connections.)

   To trap read and write events, connections call the functions
   'connection_{is|stop|start}_{reading|writing}'. If you want
   to completely reset the events you're watching for, use
   'connection_watch_events'.

   Every time poll() finishes, main.c calls conn_read and conn_write on
   every connection. These functions dispatch events that have something
   to read to connection_handle_read, and events that have something to
   write to connection_handle_write, respectively.

   When connections need to be closed, they can respond in two ways.  Most
   simply, they can make connection_handle_* return an error (-1),
   which will make conn_{read|write} close them.  But if it's not
   convenient to return -1 (for example, processing one connection causes
   you to realize that a second one should close), then you can also
   mark a connection to close by setting conn->marked_for_close. Marked
   connections will be closed at the end of the current iteration of
   the main loop.
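
   Schematically, one pass of the main loop looks something like this
   (heavily simplified; the helpers named here are described above, but
   the loop body below is only a sketch, not the code in main.c):

      for (;;) {
        /* Run recurring tasks and find out how long poll() may block. */
        timeout = prepare_for_poll();

        poll(poll_array, n_conns, timeout);

        /* Dispatch read and write events on every connection. */
        for (i = 0; i < n_conns; ++i) {
          conn_read(i);     /* may call connection_handle_read() */
          conn_write(i);    /* may call connection_handle_write() */
        }

        /* Close anything that marked itself during this pass.  Iterate
         * backwards, since removal may compact the array. */
        for (i = n_conns - 1; i >= 0; --i)
          if (connection_array[i]->marked_for_close)
            connection_remove(connection_array[i]);
      }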

   The main loop handles several other operations: First, it checks
   whether any signals have been received that require a response (HUP,
   INT, USR1, CHLD).  Second, it calls prepare_for_poll to handle recurring
   tasks and compute the necessary poll timeout.  These recurring tasks
   include periodically fetching the directory, timing out unused
   circuits, incrementing flow control windows and re-enabling connections
   that were blocking for more bandwidth, and maintaining statistics.

   A word about TLS: Using TLS on OR connections complicates matters in
   two ways.
   First, a TLS stream has its own read buffer independent of the
   connection's read buffer.  (TLS needs to read an entire frame from
   the network before it can decrypt any data.  Thus, trying to read 1
   byte from TLS can require that several KB be read from the network
   and decrypted.  The extra data is stored in TLS's decrypt buffer.)
   Because the data hasn't been read by Tor (it's still inside the TLS
   object), a connection sometimes "has stuff to read" even when
   poll() didn't return POLLIN. The tor_tls_get_pending_bytes function is
   used in main.c to detect TLS objects with non-empty internal buffers.
   Second, the TLS stream's events do not correspond directly to network
   events: sometimes, before a TLS stream can read, the network must be
   ready to write -- or vice versa.
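
   With OpenSSL, the primitive for detecting such buffered data is
   SSL_pending().  A minimal sketch of the kind of check involved (the
   wrapper name here is illustrative, not the tortls.c interface):

      #include <openssl/ssl.h>

      /* Return nonzero if the TLS object holds already-decrypted bytes
       * that can be read without touching the network. */
      static int
      tls_bytes_pending(SSL *ssl)
      {
        return SSL_pending(ssl) > 0;
      }

   The main loop treats a connection with pending TLS bytes as readable
   even if poll() did not report POLLIN on its socket.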

1.5. How data flows (An illustration.)

   Suppose an OR receives 256 bytes along an OR connection.  These 256
   bytes turn out to be a data relay cell, which gets decrypted and
   delivered to an edge connection.  Here we give a possible call sequence
   for the delivery of this data.

   (This may quickly become outdated.)

   do_main_loop -- Calls poll(2), receives a POLLIN event on a struct
                 pollfd, then calls:
    conn_read -- Looks up the corresponding connection_t, and calls:
     connection_handle_read -- Calls:
      connection_read_to_buf -- Notices that it has an OR connection so:
       read_to_buf_tls -- Pulls data from the TLS stream onto conn->inbuf.
      connection_process_inbuf -- Notices that it has an OR connection so:
       connection_or_process_inbuf -- Checks whether conn is open, and calls:
        connection_process_cell_from_inbuf -- Notices it has enough data for
                 a cell, then calls:
         connection_fetch_from_buf -- Pulls the cell from the buffer.
         cell_unpack -- Decodes the raw cell into a cell_t
         command_process_cell -- Notices it is a relay cell, so calls:
          command_process_relay_cell -- Looks up the circuit for the cell,
                 makes sure the circuit is live, then passes the cell to:
           circuit_deliver_relay_cell -- Passes the cell to each of: 
            relay_crypt -- Strips a layer of encryption from the cell and
                 notices that the cell is for local delivery.
            connection_edge_process_relay_cell -- extracts the cell's
                 relay command, and makes sure the edge connection is
                 open.  Since it has a DATA cell and an open connection,
                 calls:
             circuit_consider_sending_sendme -- check if the total number
                 of cells received by all streams on this circuit is
                 enough that we should send back an acknowledgement
                 (requesting that more cells be sent to any stream).
             connection_write_to_buf -- To place the data on the outgoing
                 buffer of the correct edge connection, by calling:
              connection_start_writing -- To tell the main poll loop about
                 the pending data.
              write_to_buf -- To actually place the outgoing data on the
                 edge connection.
             connection_consider_sending_sendme -- if the outbuf waiting
                 to flush to the exit connection is not too full, check
                 if the total number of cells received on this stream
                 is enough that we should send back an acknowledgement
                 (requesting that more cells be sent to this stream).

   In a subsequent iteration, main notices that the edge connection is
   ready for writing:

   do_main_loop -- Calls poll(2), receives a POLLOUT event on a struct
                 pollfd, then calls:
    conn_write -- Looks up the corresponding connection_t, and calls:
     connection_handle_write -- This isn't a TLS connection, so calls:
      flush_buf -- Delivers data from the edge connection's outbuf to the
                 network.
      connection_wants_to_flush -- Reports that all data has been flushed.
      connection_finished_flushing -- Notices the connection is an exit,
                 and calls:
       connection_edge_finished_flushing -- The connection is open, so it
                 calls:
        connection_stop_writing -- Tells the main poll loop that this
                 connection has no more data to write.
        connection_consider_sending_sendme -- now that the outbuf
                 is empty, check again if the total number of cells
                 received on this stream is enough that we should send
                 back an acknowledgement (requesting that more cells be
                 sent to this stream).
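
   The two "consider sending sendme" checks above amount to a window test
   of roughly the following shape.  (The constants, type, and helper
   below are illustrative only, not the real Tor identifiers.)

      #define WINDOW_START      500   /* hypothetical initial window */
      #define WINDOW_INCREMENT   50   /* hypothetical acknowledgement batch */

      typedef struct stream_state_t {
        int deliver_window;   /* decremented once per DATA cell received */
        /* ... */
      } stream_state_t;

      void send_relay_sendme(stream_state_t *stream);   /* hypothetical */

      /* After delivering a DATA cell: if enough cells have arrived since
       * the last acknowledgement, send a sendme so the other side keeps
       * transmitting, and widen the window again. */
      static void
      consider_sending_sendme(stream_state_t *stream)
      {
        while (stream->deliver_window <= WINDOW_START - WINDOW_INCREMENT) {
          send_relay_sendme(stream);
          stream->deliver_window += WINDOW_INCREMENT;
        }
      }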


1.6. Routers, descriptors, and directories

   All Tor processes need to keep track of a list of onion routers, for
   several reasons:
       - OPs need to establish connections and circuits to ORs.
       - ORs need to establish connections to other ORs.
       - OPs and ORs need to fetch directories from a directory server.
       - ORs need to upload their descriptors to directory servers.
       - Directory servers need to know which ORs are allowed onto the
         network, what the descriptors are for those ORs, and which of
         those ORs are currently live.

   Thus, every Tor process keeps track of a list of all the ORs it knows
   in a static variable 'directory' in the routers.c module.  This
   variable contains a routerinfo_t object for each known OR. On startup,
   the directory is initialized to a list of known directory servers (via
   router_get_list_from_file()).  Later, the directory is updated via
   router_get_dir_from_string().  (OPs and ORs retrieve fresh directories
   from directory servers; directory servers generate their own.)

   Every OR must periodically regenerate a router descriptor for itself.
   The descriptor and the corresponding routerinfo_t are stored in the
   'desc_routerinfo' and 'descriptor' static variables in routers.c.

   Additionally, a directory server keeps track of a list of the
   router descriptors it knows in a separate list in dirserv.c.  It
   uses this list, checking which OR connections are open, to build
   directories.
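
   For illustration, looking up a known router boils down to a scan of
   that list.  (The struct and function below are simplified sketches,
   not the real definitions in or.h and routers.c.)

      #include <stdint.h>
      #include <string.h>
      #include <stddef.h>

      typedef struct routerinfo_t {
        char *address;       /* hostname or IP of this OR */
        uint16_t or_port;    /* port it accepts OR connections on */
        /* keys, bandwidth, exit policy, ... */
      } routerinfo_t;

      /* Return the first known router matching address:port, or NULL if
       * the directory lists none.  (Illustrative only.) */
      static routerinfo_t *
      router_lookup(routerinfo_t **routers, int n_routers,
                    const char *address, uint16_t port)
      {
        int i;
        for (i = 0; i < n_routers; ++i) {
          if (routers[i]->or_port == port &&
              !strcmp(routers[i]->address, address))
            return routers[i];
        }
        return NULL;
      }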

1.7. Data model
  
  [XXX]

1.8. Flow control

  [XXX]

2. Coding conventions

2.1. Details

  Use tor_malloc, tor_strdup, and tor_gettimeofday instead of their
  generic equivalents.  (They always succeed or exit.)

  Use INLINE instead of 'inline', so that we work properly on Windows.
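
  The point of these wrappers is that callers never need to check for
  failure.  A sketch of the idea (not a copy of the real implementation,
  which uses Tor's own logging):

     #include <stdio.h>
     #include <stdlib.h>

     void *
     tor_malloc(size_t size)
     {
       void *result = malloc(size);
       if (!result) {
         /* Allocation failed: there is nothing sensible to do but exit. */
         fprintf(stderr, "Out of memory; exiting.\n");
         exit(1);
       }
       return result;
     }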

2.2. Calling and naming conventions

  Whenever possible, functions should return -1 on error and 0 on
  success.

  For multi-word identifiers, use lowercase words combined with
  underscores. (e.g., "multi_word_identifier").  Use ALL_CAPS for macros and
  constants.

  Typenames should end with "_t".

  Function names should be prefixed with a module name or object name.  (In
  general, code to manipulate an object should live in a module with the
  same name as the object, so in practice the two conventions coincide.)

  Functions that do things should have imperative-verb names
  (e.g. buffer_clear, buffer_resize); functions that return booleans should
  have predicate names (e.g. buffer_is_empty, buffer_needs_resizing).
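
  For example, prototypes following these conventions might look like this
  (illustrative only, not actual Tor declarations):

     typedef struct buffer_t buffer_t;        /* typenames end in "_t" */

     #define BUFFER_DEFAULT_CAPACITY 4096     /* ALL_CAPS for constants */

     void buffer_clear(buffer_t *buf);        /* imperative verb: does something */
     int  buffer_is_empty(buffer_t *buf);     /* predicate: returns a boolean */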

2.3. What To Optimize

  Don't optimize anything if it's not in the critical path.  Right now,
  the critical path seems to be AES, logging, and the network itself.
  Feel free to do your own profiling to determine otherwise.

2.4. Log conventions

  Log convention: use only these five log severities.

    ERR means something fatal just happened.
    WARN means something bad happened, but we're still running. The
      bad thing is either a bug in the code, an attack, or a buggy
      protocol/implementation on the remote peer's part, etc. The operator
      should examine the bad thing and try to correct it.
    NOTICE means it's something the operator will want to know about.
      (No error or warning messages should be expected during normal OR or
      OP operation. I expect most people to run on -l notice eventually. If
      a library function is currently called such that failure always means
      ERR, then the library function should log WARN and let the caller
      log ERR.)
    INFO means something happened (maybe bad, maybe ok), but there's nothing
      you need to (or can) do about it.
    DEBUG is for everything noisier than INFO.

  [XXX Proposed convention: every message of severity INFO or higher should
  either (A) be intelligible to end-users who don't know the Tor source; or
  (B) somehow inform the end-users that they aren't expected to understand
  the message (perhaps with a string like "internal error").  Option (A) is
  to be preferred to option (B). -NM]

3. References

  About Tor

     See http://freehaven.net/tor/
         http://freehaven.net/tor/cvs/doc/tor-spec.txt
         http://freehaven.net/tor/cvs/doc/tor-design.tex
         http://freehaven.net/tor/cvs/doc/FAQ

  About anonymity

     See http://freehaven.net/anonbib/

  About nonblocking IO

     [XXX insert references]


# ======================================================================
# Old HACKING document; merge into the above, move into tor-design.tex,
# or delete.
# ======================================================================
The pieces.

  Routers. Onion routers, as far as the 'tor' program is concerned,
  are a bunch of data items that are loaded into the router_array when
  the program starts. Periodically it downloads a new set of routers
  from a directory server, and updates the router_array. When a new OR
  connection is started (see below), the relevant information is copied
  from the router struct to the connection struct.

  Connections. A connection is a long-standing tcp socket between
  nodes. A connection is named based on what it's connected to -- an "OR
  connection" has an onion router on the other end, an "OP connection" has
  an onion proxy on the other end, an "exit connection" has a website or
  other server on the other end, and an "AP connection" has an application
  proxy (and thus a user) on the other end.

  Circuits. A circuit is a path over the onion routing
  network. Applications can connect to one end of the circuit, and can
  create exit connections at the other end of the circuit. AP and exit
  connections have only one circuit associated with them (and thus these
  connection types are closed when the circuit is closed), whereas OP and
  OR connections multiplex many circuits at once, and stay standing even
  when there are no circuits running over them.

  Streams. Streams are specific conversations between an AP and an exit.
  Streams are multiplexed over circuits.

  Cells. Some connections, specifically OR and OP connections, speak
  "cells". This means that data over those connections is bundled into
  256-byte packets (8 bytes of header and 248 bytes of payload). Each cell
  has a type, or "command", which indicates what it's for.
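
  A sketch of unpacking such a cell from the wire (the header field widths
  below, beyond the sizes stated above, are illustrative; the authoritative
  layout is in tor-spec.txt):

     #include <stdint.h>
     #include <string.h>

     #define CELL_LEN          256
     #define CELL_HEADER_LEN     8
     #define CELL_PAYLOAD_LEN  248

     typedef struct cell_t {
       uint32_t circ_id;     /* which circuit the cell belongs to */
       uint8_t  command;     /* what kind of cell it is */
       uint8_t  payload[CELL_PAYLOAD_LEN];
     } cell_t;

     /* Decode a raw CELL_LEN-byte buffer into a cell_t.  (Illustrative,
      * not the real cell_unpack().) */
     static void
     cell_unpack_sketch(cell_t *dest, const uint8_t *src)
     {
       dest->circ_id = ((uint32_t)src[0] << 24) | ((uint32_t)src[1] << 16) |
                       ((uint32_t)src[2] << 8)  |  (uint32_t)src[3];
       dest->command = src[4];
       /* Bytes 5..7: the remaining header fields, per the spec. */
       memcpy(dest->payload, src + CELL_HEADER_LEN, CELL_PAYLOAD_LEN);
     }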

Robustness features.

[XXX no longer up to date]
 Bandwidth throttling. Each cell-speaking connection has a maximum
  bandwidth it can use, as specified in the routers.or file. Bandwidth
  throttling can occur on both the sender side and the receiving side. If
  the LinkPadding option is on, the sending side sends cells at regularly
  spaced intervals (e.g., a connection with a bandwidth of 25600B/s would
  queue a cell every 10ms). The receiving side protects against misbehaving
  servers that send cells more frequently, by using a simple token bucket:

  Each connection has a token bucket with a specified capacity. Tokens are
  added to the bucket each second (when the bucket is full, new tokens
  are discarded.) Each token represents permission to receive one byte
  from the network --- to receive a byte, the connection must remove a
  token from the bucket. Thus if the bucket is empty, that connection must
  wait until more tokens arrive. The number of tokens we add enforces a
  long-term average rate of incoming bytes, yet we still permit short-term
  bursts above the allowed bandwidth. Currently bucket sizes are set to
  ten seconds worth of traffic.

  The bandwidth throttling uses TCP to push back when we stop reading.
  We extend it with token buckets to allow more flexibility for traffic
  bursts.
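
  A sketch of that receive-side bucket (the names below are illustrative):

     typedef struct token_bucket_t {
       int rate;       /* bytes per second permitted on average */
       int capacity;   /* maximum tokens, e.g. 10*rate for ten seconds of burst */
       int tokens;     /* bytes we may still read right now */
     } token_bucket_t;

     /* Called once per second: add a second's worth of tokens, discarding
      * any overflow past the bucket's capacity. */
     static void
     bucket_refill(token_bucket_t *b)
     {
       b->tokens += b->rate;
       if (b->tokens > b->capacity)
         b->tokens = b->capacity;
     }

     /* Ask how many bytes we may read before the connection must wait. */
     static int
     bucket_max_read(const token_bucket_t *b)
     {
       return b->tokens > 0 ? b->tokens : 0;
     }

     /* After reading n bytes from the network, spend the tokens. */
     static void
     bucket_spend(token_bucket_t *b, int n)
     {
       b->tokens -= n;
     }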

 Data congestion control. Even with the above bandwidth throttling,
  we still need to worry about congestion, either accidental or intentional.
  If a lot of people make circuits into the same node, and they all come out
  through the same connection, then that connection may become saturated
  (be unable to send out data cells as quickly as it wants to). An adversary
  can make a 'put' request through the onion routing network to a webserver
  he owns, and then refuse to read any of the bytes at the webserver end
  of the circuit. These bottlenecks can propagate back through the entire
  network, mucking up everything.

  (See the tor-spec.txt document for details of how congestion control
  works.)

  In practice, all the nodes in the circuit maintain a receive window
  close to maximum except the exit node, which stays around 0, periodically
  receiving a sendme and reading more data cells from the webserver.
  In this way we can use pretty much all of the available bandwidth for
  data, but gracefully back off when faced with multiple circuits (a new
  sendme arrives only after some cells have traversed the entire network),
  stalled network connections, or attacks.

  We don't need to reimplement full TCP windows, with sequence numbers,
  the ability to drop cells when we're full, etc., because the TCP streams
  already guarantee in-order delivery of each cell. Rather than trying
  to build some sort of tcp-on-tcp scheme, we implement this minimal data
  congestion control; so far it's enough.

 Router twins. In many cases when we ask for a router with a given
  address and port, we really mean a router who knows a given key. Router
  twins are two or more routers that share the same private key. We thus
  give routers extra flexibility in choosing the next hop in the circuit: if
  some of the twins are down or slow, the router can choose one of the more
  available ones.

  Currently the code tries for the primary router first, and if it's down,
  chooses the first available twin.