Scalable Byzantine Consensus via Hardware-assisted Secret Sharing

FastBFT: a fast and highly scalable BFT protocol


The authors of this paper present FastBFT, a new, highly scalable BFT protocol that achieves better performance than existing BFT protocols. The protocol was designed by studying the weaknesses of popular protocols such as PBFT, Zyzzyva, BChain and MinBFT, and improving upon them. Several components characterize the protocol, described in the sections below.

The protocol is a hybrid protocol that relies on hardware assistance (so it requires only $2f+1$ replicas to tolerate $f$ Byzantine faults) and follows an optimistic paradigm (requests are executed before agreement is finalized): only $f+1$ active replicas agree on and execute a transaction, while the remaining $f$ passive replicas are kept up to date by the network.

Message aggregation

To reach agreement in classical BFT protocols, $O(n^2)$ messages are required, since every replica $S_i$ multicasts a commit message (in the general case) to all other active replicas in the network. This $O(n^2)$ message complexity is the main bottleneck of PBFT.

During the commit phase of FastBFT, each replica sends its commit message directly to the primary instead of multicasting it to the network. To reduce the overhead of message aggregation, the authors use secret sharing instead of a classical multisignature approach.

To implement secret sharing, an additional pre-processing phase is necessary to set up the secret shares.

Pre-processing

In this phase the primary/leader $S_p$ generates a set of random secrets and publishes a cryptographic hash of each of them (each secret will be bound to a single request in the prepare phase). Then $S_p$ creates $f+1$ shares of each secret (along with a hash of each share) and sends one share to each active replica.
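To make the setup concrete, here is a minimal sketch of the pre-processing step using a simple XOR-based $(f+1)$-out-of-$(f+1)$ scheme; the paper's actual TEE-based construction differs in detail, and all names and sizes below are illustrative.

```python
import os
import hashlib
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, num_shares: int) -> list[bytes]:
    """Split `secret` into XOR shares; all of them are needed to rebuild it."""
    shares = [os.urandom(len(secret)) for _ in range(num_shares - 1)]
    shares.append(reduce(xor_bytes, shares, secret))  # last share closes the XOR
    return shares

f = 1                                             # tolerated faults (illustrative)
secret = os.urandom(32)
secret_hash = hashlib.sha256(secret).hexdigest()  # published to all replicas
shares = split_secret(secret, f + 1)              # one share per active replica
share_hashes = [hashlib.sha256(s).hexdigest() for s in shares]
```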

In the commit phase, every replica reveals its share to the primary, which reconstructs the secret once it has received enough valid shares. The secret is then multicast by $S_p$ to all replicas, which can verify it by checking that the hash previously bound to the request actually matches.
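Continuing the sketch above, reconstruction and verification at commit time amount to XOR-ing the revealed shares and comparing the result against the published hash:

```python
def reconstruct(shares: list[bytes]) -> bytes:
    """Recover the secret by XOR-ing all revealed shares together."""
    return reduce(xor_bytes, shares)

# At the primary, after collecting all f+1 revealed shares:
recovered = reconstruct(shares)

# At every replica, after receiving the multicast secret:
assert hashlib.sha256(recovered).hexdigest() == secret_hash
```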

To ensure that $S_p$ cannot impersonate any other replica $S_i$, the generation of secrets, hashes, shares and bindings happens inside the TEE (Trusted Execution Environment), and each secret is bound to a monotonic counter as well as to a single request.
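As a rough illustration of the binding idea only (the paper's real TEE interface is different; this toy class is entirely hypothetical), the counter value can be hashed together with the request digest and the secret:

```python
class ToyTEE:
    """Hypothetical stand-in for the trusted hardware; not the paper's API."""
    def __init__(self):
        self._counter = 0  # monotonic: only ever incremented, never reused

    def bind(self, secret: bytes, request_digest: bytes) -> tuple[int, str]:
        self._counter += 1
        binding = hashlib.sha256(
            self._counter.to_bytes(8, "big") + request_digest + secret
        ).hexdigest()
        return self._counter, binding
```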

Communication topology

To further improve the exchange of messages, the network is organized (by the primary) into a balanced tree rooted at $S_p$. Instead of receiving all $n$ messages directly from the replicas, the primary takes advantage of the tree topology and receives only a constant number of messages (equal to the branching factor of the tree; 2 in the simplest case of a binary tree).
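A minimal sketch of the aggregation, assuming the XOR-based shares from above: each internal node combines its own share with its children's aggregates, so every node (the primary included) handles only as many messages as it has children.

```python
class Node:
    """Illustrative tree node; in FastBFT each node is a replica and
    aggregation would be driven by actual network messages."""
    def __init__(self, share: bytes, children: list["Node"] | None = None):
        self.share = share
        self.children = children or []

    def aggregate(self) -> bytes:
        agg = self.share
        for child in self.children:
            agg = xor_bytes(agg, child.aggregate())  # fold in each subtree
        return agg
```

With a binary tree, the primary receives two aggregate messages per round instead of $n$, while the total number of messages in the system stays linear in $n$.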

Failure detection

Crashes are detected by timeout, while Byzantine faults are detected by verifying shares: when a node $S_i$ receives a message from one of its children, it checks whether the aggregated share is valid. If a received share is invalid, the node sends a SUSPECT message directly to its parent (hence aborting the transaction); the message eventually reaches the primary, which is in charge of rearranging the tree by moving the suspected faulty node to a leaf.
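A hedged sketch of this check (the message format and the availability of per-subtree hashes are assumptions for illustration, not details from the paper): if hashes of the expected subtree aggregates were also published during pre-processing, an internal node could validate a child's message before forwarding it upward.

```python
def handle_child_message(child_id: int, child_agg: bytes,
                         expected_hash: str) -> dict:
    """Forward a valid aggregate upward, or escalate a SUSPECT message."""
    if hashlib.sha256(child_agg).hexdigest() != expected_hash:
        return {"type": "SUSPECT", "suspect": child_id}
    return {"type": "FORWARD", "payload": child_agg}
```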