Future-Plans


These things aren't necessarily going to be included; they are just ideas.


See the Road Map for more up-to-date information.


Mining Next Generation - Jean-Luc

This mining plan involves an ordered list of locations/layers where snowfield data can be found. Snowfields will be split into 1gb chunks, named snowblossom.<field_no>.snow.<chunk_number>. In the ordered list of locations, the first location that has a given chunk will be used for that chunk. Decks will be used wherever they are found; they are only needed to build proofs, so their location doesn't matter much.
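
As a rough sketch (not Arktika's actual code), the chunk lookup could look something like the following in Java: the first configured location that has the chunk file wins. The class and method names are made up for illustration; the file naming comes from the scheme above.

import java.io.File;
import java.util.List;

public class ChunkLocator
{
  // Ordered list of locations (layers); earlier entries win.
  private final List<String> snow_paths;

  public ChunkLocator(List<String> snow_paths)
  {
    this.snow_paths = snow_paths;
  }

  /**
   * Return the first location that has the given chunk file,
   * e.g. snowblossom.7.snow.42 for field 7, chunk 42, or null if no location has it.
   */
  public File findChunk(int field_no, int chunk_number)
  {
    String name = String.format("snowblossom.%d.snow.%d", field_no, chunk_number);
    for (String loc : snow_paths)
    {
      File f = new File(loc, name);
      if (f.exists()) return f;
    }
    return null;
  }
}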

One of the locations will be a special value meaning "in memory". The memory will be populated with chunks from the last location in the list.
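
A minimal sketch of how that in-memory layer might be filled from the last location; the class and method names are illustrative only, and a real implementation would decide which chunks to cache rather than simply loading them in order.

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Map;
import java.util.TreeMap;

public class MemoryLayer
{
  private static final long CHUNK_SIZE = 1024L * 1024L * 1024L; // 1gb per chunk

  // chunk_number -> chunk bytes held in java memory
  private final Map<Integer, ByteBuffer> chunks = new TreeMap<>();

  /** Fill the memory budget with chunks read from the last location in the list. */
  public void populate(String last_location, int field_no, long memory_budget) throws IOException
  {
    long used = 0;
    for (int chunk = 0; used + CHUNK_SIZE <= memory_budget; chunk++)
    {
      File f = new File(last_location, String.format("snowblossom.%d.snow.%d", field_no, chunk));
      if (!f.exists()) break; // no more chunks in the field

      ByteBuffer buf = ByteBuffer.allocateDirect((int) CHUNK_SIZE);
      try (RandomAccessFile raf = new RandomAccessFile(f, "r"))
      {
        FileChannel chan = raf.getChannel();
        while (buf.hasRemaining() && chan.read(buf) >= 0) {}
      }
      buf.flip();
      chunks.put(chunk, buf);
      used += CHUNK_SIZE;
    }
  }

  /** Null if the chunk is not held in memory; callers fall back to the next location. */
  public ByteBuffer getChunk(int chunk_number)
  {
    return chunks.get(chunk_number);
  }
}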

Each location will have its own thread pool to process requests waiting on IO from that location.

Each location will have a priority queue ordered by:

  • block_number (highest block first)
  • pow pass number (closest to the end of the pow wins)
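
Expressed as a Java comparator, with one such queue per layer; the PartialWork holder here is illustrative, not a real Arktika class.

import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PartialWork
{
  public final int block_number;   // block this work is for
  public final int pass_number;    // how many pow passes are already done

  public PartialWork(int block_number, int pass_number)
  {
    this.block_number = block_number;
    this.pass_number = pass_number;
  }

  // Highest block first, then the pass closest to the end of the pow wins.
  public static final Comparator<PartialWork> PRIORITY =
    Comparator.comparingInt((PartialWork w) -> w.block_number).reversed()
      .thenComparing(Comparator.comparingInt((PartialWork w) -> w.pass_number).reversed());

  // One queue per location/layer, ordered by the priority above.
  public static PriorityBlockingQueue<PartialWork> newQueue()
  {
    return new PriorityBlockingQueue<>(1024, PRIORITY);
  }
}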

When the work unit monitor gets a work unit with a new block, it clears all the queues for all layers.

A thread working on a layer reads the needed spot, advances the pow calculation, and, if more calculations are needed, puts the work on the appropriate queue. If a queue has too many items, the worst ones (last in priority order) are pruned.
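
Putting those pieces together, a per-layer worker loop might look roughly like the following, building on the PartialWork sketch above. readNeededSpot, advancePow, and queueForNextRead are stand-ins for the real IO and pow steps, and the work unit monitor would separately clear every layer's queue when a new block arrives.

import java.util.concurrent.PriorityBlockingQueue;

/** One worker per slot in the layer's thread pool (illustrative only). */
public abstract class LayerWorker implements Runnable
{
  private final PriorityBlockingQueue<PartialWork> queue; // this layer's queue
  private final int max_queue_size;

  public LayerWorker(PriorityBlockingQueue<PartialWork> queue, int max_queue_size)
  {
    this.queue = queue;
    this.max_queue_size = max_queue_size;
  }

  @Override
  public void run()
  {
    try
    {
      while (true)
      {
        PartialWork work = queue.take();          // highest-priority item for this layer
        byte[] data = readNeededSpot(work);       // IO against this layer's location
        PartialWork next = advancePow(work, data);
        if (next == null) continue;               // pow finished or work is stale
        PriorityBlockingQueue<PartialWork> dest = queueForNextRead(next);
        dest.offer(next);
        prune(dest);
      }
    }
    catch (InterruptedException e) { Thread.currentThread().interrupt(); }
  }

  /** Drop the worst items if the queue has grown too large. */
  private void prune(PriorityBlockingQueue<PartialWork> q)
  {
    while (q.size() > max_queue_size)
    {
      // PriorityBlockingQueue has no "remove worst" operation, so a real
      // implementation would track the tail differently; this scan is just for clarity.
      PartialWork worst = q.stream().max(PartialWork.PRIORITY).orElse(null);
      if (worst == null || !q.remove(worst)) break;
    }
  }

  protected abstract byte[] readNeededSpot(PartialWork work);
  protected abstract PartialWork advancePow(PartialWork work, byte[] data);
  protected abstract PriorityBlockingQueue<PartialWork> queueForNextRead(PartialWork work);
}

Each layer would then start its configured number of these workers in its own thread pool, for example via Executors.newFixedThreadPool.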

This was implemented as Arktika.

Parameters

  • threads: ordered list of thread counts to use for each layer, e.g. threads=256,32,32 (paired entry-by-entry with snow_path; see the sketch after this list)
  • snow_path: ordered list of locations to use for each layer, e.g. snow_path=loc_a,loc_b,loc_c
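
Since the two lists are parallel (entry i of threads applies to entry i of snow_path), a loader might pair them up like this; the class name and validation are illustrative, not Arktika's actual config handling.

import java.util.Properties;

public class LayerConfig
{
  public final String[] snow_paths;   // entries like mem_50gb would get special in-memory handling
  public final int[] thread_counts;

  public LayerConfig(Properties config)
  {
    snow_paths = config.getProperty("snow_path", "").split(",");
    String[] t = config.getProperty("threads", "").split(",");
    if (t.length != snow_paths.length)
    {
      throw new IllegalArgumentException("threads and snow_path must have the same number of entries");
    }
    thread_counts = new int[t.length];
    for (int i = 0; i < t.length; i++)
    {
      thread_counts[i] = Integer.parseInt(t[i].trim());
    }
  }
}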

Examples

Memory precache

Imagine a 64gb machine with an SSD.

snow_path=mem_50gb,/var/ssd
threads=256,32

Or on Linux, using /dev/shm rather than Java memory:
snow_path=/dev/shm/snow,/var/ssd
threads=256,32

Two equal SSDs

Put half of the chunks on ssd_a and the other half on ssd_b.

snow_path=/var/ssd_a,/var/ssd_b
threads=64,64

Memory, small SSD, then HDD

snow_path=/dev/shm/snow,/var/ssd,/var/hdd
threads=256,64,64

Time shifted multisig

Although Snowblossom already implements FSFA, it would be nice to have further protection. In an environment where private keys can be derived from any visible public keys, a user would be counting on their transaction being adopted into the mempool and on nodes rejecting any double spends.

The idea here is to announce a transaction in some way without yet revealing the public keys and signatures. That part would be easy: a transaction could be broadcast without public keys or signatures, but then the network wouldn't be able to tell whether it was a legitimate transaction likely to be signed and confirmed, or spam.

So the idea is to have a multisig address and initially sign the transaction with just a subset of the needed signatures. That transaction is then provisionally accepted: the UTXOs are marked as used and the transaction goes into a block as an incomplete transaction. Some time later, the sender signs the transaction the rest of the way, and the transaction gets included all the way.

This way, we eliminate the race of depending on FSFA in a quantum-broken world. There are some problems; for example, with Snowblossom's current address spec structure we can't claim an address without revealing all the public keys. This is easily solvable by changing the hashing format of the spec structure. It would also complicate a bunch of UTXO code, but that isn't too terrible. We would also need a threshold to expire an incomplete transaction and open the UTXOs up for use again if it isn't fully signed within some defined number of blocks.
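
As a hedged sketch of what that lifecycle might look like (none of these names, states, or values exist in Snowblossom today), an incomplete transaction would sit in a provisional state until either the remaining signatures arrive or the expiration threshold passes and its UTXOs are released.

/**
 * Illustrative lifecycle for a time-shifted multisig transaction.
 */
public class TimeShiftedTx
{
  public enum State { PROVISIONAL, COMPLETE, EXPIRED }

  public final int required_signatures;   // threshold of the multisig address
  public final int included_block;        // block that accepted the incomplete transaction
  public final int expiration_blocks;     // how long the UTXOs stay reserved

  private int signature_count;
  private State state = State.PROVISIONAL; // UTXOs marked as used, transaction in a block, incomplete

  public TimeShiftedTx(int required_signatures, int included_block, int expiration_blocks, int initial_signatures)
  {
    this.required_signatures = required_signatures;
    this.included_block = included_block;
    this.expiration_blocks = expiration_blocks;
    this.signature_count = initial_signatures;
  }

  /** Later signatures arrive; once the threshold is met the transaction is fully included. */
  public void addSignature()
  {
    if (state != State.PROVISIONAL) return;
    signature_count++;
    if (signature_count >= required_signatures) state = State.COMPLETE;
  }

  /** If not fully signed within the window, expire and release the UTXOs for reuse. */
  public void onNewBlock(int block_height)
  {
    if (state == State.PROVISIONAL && block_height > included_block + expiration_blocks)
    {
      state = State.EXPIRED; // the node would now mark the reserved UTXOs as spendable again
    }
  }

  public State getState() { return state; }
}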