Channels/DHT Design


Snow Channels Distributed Hash Table (DHT) Design

Requirements

  • Allow users to store and retrieve peer lists keyed by a hash. The hashes will be hashes of content files or hashes of channels, but that doesn’t matter to the DHT; to the DHT they are just hashes.
  • The DHT should work with anywhere from 1 to 10 million unreliable nodes.
  • The DHT should allow for a ratio of at least 500000:1 records to nodes.
  • Since the nodes are unreliable, the DHT should provide some reasonable measure of fault tolerance.
  • The DHT should re-replicate data at least a little (maybe)
  • The DHT should support privacy: it should not be clear to intermediate nodes which destination hash a user is looking up. The originator node (acting on behalf of the user) and the destination node (which might actually have data on the hash) need to know; other nodes should not.

Design

  • Each node has a public/private key pair. The hash of the public key gives the node ID.
  • Node IDs and content hashes are evaluated on a single global hash ring.
  • When a node starts, it consults existing peer lists (including a static seed hostname) and connects to the following nodes (see the sketch after this list):
    * M nodes evenly distributed around the hash ring, offset by a modulus of this node’s ID. These are the “long range” links. With the modulus, each node should be selecting different cross-ring peers.
    * N nodes evenly distributed in the closest wedge. These are the “regional” links. The wedge is the area closer to this node than the nearest long range links.
    * P nodes that are absolutely closest to this node, among the nodes it knows about. These are the neighbor links.
  • Of course, if the node only has the seed nodes to start with, its connections are not going to be close to this ideal, but whatever; they improve as it learns about more peers.
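As a rough illustration of the ring math above, here is a minimal Java sketch. The class and method names (RingMath, idealLongRangeTargets), the use of a 256-bit SHA-256 ring, and the interpretation of “modulus of this node’s ID” as even spacing offset by the node’s own ID are assumptions for illustration, not taken from the actual Snowblossom code.

import java.math.BigInteger;
import java.security.MessageDigest;

// Hypothetical helper for the hash ring math described above.
public class RingMath
{
  // Treat the ring as the space of 256-bit values (node IDs and content
  // hashes are SHA-256 outputs interpreted as unsigned integers).
  public static final BigInteger RING_SIZE = BigInteger.ONE.shiftLeft(256);

  public static BigInteger hashToRing(byte[] data) throws Exception
  {
    byte[] h = MessageDigest.getInstance("SHA-256").digest(data);
    return new BigInteger(1, h); // interpret as unsigned
  }

  // Distance from 'from' to 'to' going around the ring.
  public static BigInteger ringDistance(BigInteger from, BigInteger to)
  {
    return to.subtract(from).mod(RING_SIZE);
  }

  // Ideal positions for the M long range links: evenly spaced around the
  // ring, offset by this node's own ID so that different nodes select
  // different cross-ring peers.
  public static BigInteger[] idealLongRangeTargets(BigInteger node_id, int m)
  {
    BigInteger[] targets = new BigInteger[m];
    BigInteger step = RING_SIZE.divide(BigInteger.valueOf(m));
    for (int i = 0; i < m; i++)
    {
      targets[i] = node_id.add(step.multiply(BigInteger.valueOf(i))).mod(RING_SIZE);
    }
    return targets;
  }
}

With each node deriving its offsets from its own ID, the long range links of different nodes land at different points on the ring, which spreads cross-ring connectivity rather than having every node chase the same handful of peers.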

The node asks each of its connected peers for their DHT peer lists. This way the node learns about more peers and can adjust the connections above to be closer to the ideal configuration just described. (Learn about a peer slightly closer to where a long range link should land? Switch to that one.)

When a node gets a user request for metadata on a hash, it selects the closest of its peers to the hash in question and contacts that peer, asking for its peer list. Repeat with the returned peer list until we get no closer to the target hash, then ask the node closest to the target hash for the metadata.

Note: for fault tolerance and tolerance of network changes, we might do something like:

for(int i=0; i<4; i++) effective_targets.add( hash(target + i) );

This gives us some number of target hashes for any given metadata hash, so there are multiple places on the ring that might have the data. Do the same thing when saving data, of course.

(Maybe) occasionally prune through saved data and update appropriate closer nodes. For example: I am 7 away from the target hash and there is a new node that is only 2 away, so add an entry to that node.
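Tying the greedy lookup and the multiple-effective-target idea together, a hedged sketch might look like the following. It reuses the hypothetical RingMath helper from the earlier sketch; Peer, fetchPeerList() and fetchMetadata() are placeholders for whatever the real peer-to-peer RPC ends up being, not an existing API.

import java.math.BigInteger;
import java.util.List;

// Sketch of the lookup loop combined with multiple effective targets.
public class LookupSketch
{
  // Placeholder for a remote DHT peer.
  public interface Peer
  {
    BigInteger getRingId();              // hash of the peer's public key
    List<Peer> fetchPeerList();          // ask the peer for its DHT peer list
    byte[] fetchMetadata(byte[] target); // ask the peer for metadata on a hash
  }

  // Derive several effective targets for one metadata hash so the data
  // has multiple homes on the ring (hash of target plus a small counter).
  public static BigInteger[] effectiveTargets(byte[] target, int count) throws Exception
  {
    BigInteger[] out = new BigInteger[count];
    for (int i = 0; i < count; i++)
    {
      byte[] salted = new byte[target.length + 1];
      System.arraycopy(target, 0, salted, 0, target.length);
      salted[target.length] = (byte) i;
      out[i] = RingMath.hashToRing(salted);
    }
    return out;
  }

  // Greedy routing: repeatedly move to the known peer closest to the
  // effective target until no closer peer is found, then ask that peer.
  public static byte[] lookup(List<Peer> startPeers, byte[] target, BigInteger effectiveTarget)
  {
    Peer best = closest(startPeers, effectiveTarget);
    if (best == null) return null;
    while (true)
    {
      Peer next = closest(best.fetchPeerList(), effectiveTarget);
      if ((next == null) || (distance(next, effectiveTarget).compareTo(distance(best, effectiveTarget)) >= 0))
      {
        break; // no closer peer known; 'best' is our destination
      }
      best = next;
    }
    return best.fetchMetadata(target);
  }

  private static BigInteger distance(Peer p, BigInteger target)
  {
    return RingMath.ringDistance(p.getRingId(), target);
  }

  private static Peer closest(List<Peer> peers, BigInteger target)
  {
    Peer best = null;
    for (Peer p : peers)
    {
      if ((best == null) || (distance(p, target).compareTo(distance(best, target)) < 0))
      {
        best = p;
      }
    }
    return best;
  }
}

A store would run the same loop for each effective target and hand the record to the closest node found for each one.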

  • All communication to and from peers is via TLS, with the certificate signed by the expected key pair for that peer. Peer lists always include the public key hash.
  • All updates are signed and timestamped, with a recent snowblossom block hash to prove they weren’t made ahead of time. This way, we can start to ignore/prune old things, and no one can just retransmit old things as new.
  • Node keys, and especially update messages, should be signed with snowblossom time-staked keys to avoid spam.
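As a rough sketch of the checks implied by the last two points, a node receiving an update might verify it along these lines. The SignedUpdate fields, the recent-block-hash set, and the SHA256withECDSA algorithm choice are assumptions for illustration; the actual message format is not specified here, and the time-stake check on the signing key is only noted in a comment.

import java.security.PublicKey;
import java.security.Signature;

// Sketch of accepting a signed, timestamped update that references a
// recent snowblossom block hash.
public class UpdateCheckSketch
{
  // Hypothetical wire format for a DHT update.
  public static class SignedUpdate
  {
    public byte[] payload;         // the metadata/peer entry being stored
    public long timestamp;         // claimed creation time (ms)
    public byte[] recentBlockHash; // a recent snowblossom block hash
    public byte[] signature;       // signature over payload + timestamp + block hash
    public PublicKey signer;       // should also be checked for time stake (omitted here)
  }

  // Accept an update only if it references a block we consider recent,
  // the timestamp is plausible, and the signature verifies.
  public static boolean acceptUpdate(SignedUpdate up,
                                     java.util.Set<String> recentBlockHashes,
                                     long now, long maxAgeMs) throws Exception
  {
    if (!recentBlockHashes.contains(hex(up.recentBlockHash))) return false;
    if (Math.abs(now - up.timestamp) > maxAgeMs) return false;

    Signature sig = Signature.getInstance("SHA256withECDSA");
    sig.initVerify(up.signer);
    sig.update(up.payload);
    sig.update(longToBytes(up.timestamp));
    sig.update(up.recentBlockHash);
    return sig.verify(up.signature);
  }

  private static byte[] longToBytes(long v)
  {
    return java.nio.ByteBuffer.allocate(8).putLong(v).array();
  }

  private static String hex(byte[] b)
  {
    StringBuilder sb = new StringBuilder();
    for (byte x : b) sb.append(String.format("%02x", x));
    return sb.toString();
  }
}

Requiring a recent block hash bounds how far in advance an update could have been prepared, which is what lets nodes prune old records and reject replays of them as new.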