Lntenna-Python: Part 3 – Lnproxy

Kevin Durkin for In The Mesh

This article is the third in our series documenting Global Mesh Labs' progress towards developing a bottom-up system for censorship-resistant and private communication, incentivized by Bitcoin Lightning Network micropayments.

In the first part of this series we described a system to trustlessly and privately send a message anywhere in the world from an off-grid computer connected to a local mesh radio network. The system uses a mesh internet gateway to negotiate and pay a Lightning invoice for the Blockstream Transmission service using minimal bandwidth. It relied on on-chain payments via a trustless 'submarine swap' to reduce complexity and bandwidth, but incurred blockchain confirmation delays and on-chain transaction fees for each message sent.

The second part of this project implemented lightningtenna to negotiate payments directly with the Blockstream Transmission service using the Lightning payment channel protocol. This involved optimizing how nodes negotiate payments to work with the reduced bandwidth of long-range mesh networks. We accomplished this goal primarily by stripping out the gossip system used to exchange routing information between nodes.

In this iteration we expand upon the idea of natively making Lightning payments over mesh networks, and begin implementing the scale, efficiency and robustness improvements needed to make this a feasible application of Lightning payments technology. We also implement a Lot49-style system for including encrypted messages with a payment, to incentivize other nodes to participate in message delivery.

If you are less familiar with Bitcoin and the Lightning Network, be sure to read part one and part two first, as they include high-level descriptions of, and links to explainers for, some of the concepts used below.

Architecture

Before we talk about specific improvements, let's first compare which architecture decisions were kept or changed since the lightningtenna design:

We retained the “no-encrypt” C-Lightning patch, which allowed us to read, modify and route the raw, unencrypted Lightning messages.
We retained the “no-gossip” C-Lightning patch, which eliminates Lightning network gossip messages that provide no benefit to our mesh network (you never know which peers might be online and in-range) whilst also saving us precious bandwidth.
We re-architected the standalone executable to run as a C-Lightning plugin. This allows lnproxy to be started and stopped automatically by the C-Lightning instance, whilst also allowing us to "intercept" Lightning RPC calls (and handle or modify them ourselves if necessary) and to add new plugin-specific RPC calls; a minimal sketch follows this list. Now Bitcoin, Lightning and mesh operations can all be controlled via the C-Lightning RPC system.
We introduced tools to connect nodes and send messages in a way that approximates how mesh networks incrementally route data.
We introduced a system for incentivized delivery of encrypted messages. These messages are similar to those of the recently released Whatsat system; however, instead of concealing the encrypted message within the onion, we append it to the update_add_htlc message, whilst keeping it end-to-end (E2E) encrypted.
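
To illustrate the plugin approach mentioned above, here is a minimal sketch of a C-Lightning plugin using the pyln-client Python library, registering a hypothetical plugin-specific RPC call; the method name and behaviour are illustrative rather than lnproxy's actual interface:

```python
from pyln.client import Plugin

plugin = Plugin()


@plugin.init()
def init(options, configuration, plugin):
    # Called by lightningd once the plugin is started; a real plugin
    # would spin up its mesh proxy threads here.
    plugin.log("mesh plugin initialized")


@plugin.method("mesh-send")
def mesh_send(plugin, gid, payload):
    """Hypothetical RPC call: queue a payload for a goTenna GID.
    Callable as `lightning-cli mesh-send <gid> <payload>`."""
    plugin.log("queueing {} bytes for GID {}".format(len(payload), gid))
    return {"queued": True}


plugin.run()
```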

There are four key areas we wanted to improve on from the previous iteration of lightningtenna to take us closer to our goal: i) bandwidth efficiency, ii) incremental routing, iii) fewer mesh round trips and iv) an integrated messaging function. These improvements are described in more detail below.

Bandwidth Efficiency

Onions

The first efficiency gain was realised through the removal and dynamic, on-the-fly regeneration of the Lightning routing onion. This improvement is worth mentioning specifically, as it has knock-on effects on many of the other areas discussed later.

A Lightning RFC-compliant onion routing packet is 1366 bytes in total, containing a version byte, a 33 byte pubkey, 1300 bytes of per-hop payloads and a 32 byte HMAC. As discussed in Part 2, this makes the total size of an update_add_htlc message (including the aforementioned 1366 byte onion packet) about 1450 bytes. Or, in goTenna Mesh terms, 8 mesh messages and at least 1m30s of broadcast transmission time just to start the payment sequence.
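
For reference, the packet layout expressed as constants (names are illustrative, sizes per the spec):

```python
# Onion routing packet layout per the Lightning spec, sizes in bytes.
VERSION_LEN = 1          # version byte
PUBKEY_LEN = 33          # ephemeral session public key
HOP_PAYLOADS_LEN = 1300  # encrypted per-hop payloads
HMAC_LEN = 32            # packet-level HMAC
ONION_PACKET_LEN = VERSION_LEN + PUBKEY_LEN + HOP_PAYLOADS_LEN + HMAC_LEN  # 1366
```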

Removing the onion from this message became a high priority: it would reduce our total payment sequence time from the order of minutes down to seconds, whilst also leaving us room for more messages before goTenna anti-flood measures kicked in.

Removing the onion was easy enough to begin with: you can simply chop the final 1366 bytes off the update_add_htlc message. Having it regenerated by the receiver seemed like it might be more of a challenge. Luckily, a C-Lightning devtool exists for generating onions: Christian Decker's onion.c! After a few trivial patches to adapt the tool to our purposes, we were able to have the receiver of an update_add_htlc message generate a compliant onion and insert it into the onion field to satisfy C-Lightning's internal payment processing system.
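
In code, stripping the onion is essentially a one-line slice; a minimal sketch, assuming the update_add_htlc is already available as raw bytes:

```python
ONION_PACKET_LEN = 1366  # version + pubkey + hop payloads + HMAC


def strip_onion(update_add_htlc: bytes) -> bytes:
    """Drop the trailing onion before broadcasting over the mesh;
    the receiving proxy regenerates a compliant onion locally."""
    return update_add_htlc[:-ONION_PACKET_LEN]
```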

The astute reader might notice that the onion usually contains all the routing and fee data for each hop (along with some extra privacy benefits, which we will cover later). However, we are not targeting the "source routing" model that the Lightning network of today uses, but rather an IP routing-style design, where each hop negotiates only the next hop (in a mesh network, based primarily on availability). We therefore didn't need the routing data from the onion anyway; indeed, it will often be worse than useless.

Onion generation is therefore done only to satisfy C-Lightning that a valid payment message has been received and should be processed accordingly.

Pings

Ping messages are relatively frequent in the Lightning protocol; they are often issued before specific message sequences to test for liveness, e.g. a ping might be issued before a commitment_signed during a payment. On the internet these tiny, unobtrusive messages introduce negligible latency, but on the mesh network this was frequently not the case: a ping-pong sequence would use up a valuable round trip at a critical, high-traffic moment. By reflecting these ping messages internally with the appropriate pong message, we were able to further reduce payment times without otherwise changing the payment protocol.
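
A sketch of the reflection technique, assuming raw Lightning messages are available in cleartext (which our no-encrypt patch provides); names are illustrative:

```python
import struct
from typing import Optional

PING, PONG = 18, 19  # BOLT #1 message type numbers


def reflect_ping(msg: bytes) -> Optional[bytes]:
    """Return the pong that answers an incoming ping, so the proxy can
    reply locally instead of spending a mesh round trip.
    Ping layout: type(2) | num_pong_bytes(2) | byteslen(2) | ignored.
    Pong layout: type(2) | byteslen(2) | ignored."""
    if struct.unpack(">H", msg[:2])[0] != PING:
        return None
    num_pong_bytes = struct.unpack(">H", msg[2:4])[0]
    # The pong carries num_pong_bytes of zeroed "ignored" data.
    return struct.pack(">HH", PONG, num_pong_bytes) + bytes(num_pong_bytes)
```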

Incremental Routing

The onion-based improvements listed above lead us to the challenge of how to route payment and message data when the sender does not precompute the whole route. In lightningtenna the routes were hardcoded for a maximum of two mesh nodes, i.e. a single-hop payment; naturally, we will need longer paths than this. Fundamentally, we need incremental message routing, where each hop is in charge of selecting the next hop.

The end goal for this project is to use the mesh routing system provided natively by goTenna Mesh devices. In the near future we also hope to take advantage of the even more efficient VINE protocol, currently used by goTenna Pro devices. As that is not possible today, we needed to add our own incremental routing implementation to stand in for using the native mesh routing performed by the device.

The current implementation simply initializes each node in the network with a pair of values for every other node in the network:
[lightning node pubkey : goTenna device ID (GID)]
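
In Python terms, such a table might look like the following (pubkeys and GIDs are placeholders):

```python
# Hypothetical static table given to every node at startup:
# Lightning node pubkey (hex) -> goTenna device ID (GID).
KNOWN_NODES = {
    "02aa...": 10000001,  # placeholder entries
    "03bb...": 10000002,
}
```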

Because we have no channel announcements, there is no complete source routing table, or "network graph" in Lightning parlance: neither the sender nor any individual hop in the route knows the entire route to a destination node. Instead, each node takes an opportunistic approach based on the peers it has channels open with. A simplified, illustrative sketch of the routing logic (not lnproxy's literal code) looks something like this:
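
```python
def next_hop_gid(dest_pubkey, received_from, open_channels, known_nodes):
    """Illustrative next-hop selection: with no network graph, each node
    forwards to a directly connected peer and lets it repeat the process."""
    # If we have a direct channel to the destination, deliver to it.
    if dest_pubkey in open_channels:
        return known_nodes[dest_pubkey]
    # Otherwise forward via any open channel other than the inbound one.
    for peer in open_channels:
        if peer != received_from:
            return known_nodes[peer]
    # No viable hop: the HTLC will eventually time out and can be retried.
    return None
```

The naive fallback of "pick any other open channel" is exactly what invites the routing loops discussed below.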

In the future, nodes will be able to discover other nearby nodes that advertise Lightning capability and be able to open channels with them. For now we simply forward payments and messages to peers we already have an open channel with.

This level of routing has proved to work well in our physical tests of up to 4 goTenna devices with various combinations of channels open. It has not yet been tested on larger physical networks nor in larger simulations, but we would expect such crude routing techniques to begin to break down in larger network groups, where they might also start to succumb to "routing loops".

Interestingly, the lack of an onion routing packet technically allows the network to take payment paths of more than 20 hops; 20 hops is the current cap on the number of hops in the Lightning protocol, due to the fixed size of the onion. The theoretical soft upper bound is now set by the C-Lightning (not protocol-level) HTLC commit timeout parameter. Our gossip patch already increased this from 30 seconds to 300 seconds to avoid payments timing out before we could broadcast them via the mesh.

Round Trips

Communication round trips between nodes play a crucial part in mesh network operation. Finding ways to reduce them whilst retaining the trust-minimised guarantees of the Lightning protocol has been a key challenge. We designed an agnostic and robust message batching system to reduce round trips wherever possible: we buffer the data being sent to each peer, and transmit it every 2 seconds, or when the buffer reaches 210 bytes, whichever occurs sooner. With this in place we were able to reduce the startup sequence for channel reestablishment from 3 round trips (6 messages) down to 2 round trips (4 messages). The channel update sequence used to make payments was reduced from 4.5 round trips (9 messages) down to 3 round trips (6 messages).
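
A minimal sketch of such a batching buffer (illustrative names; the real implementation lives inside the proxy's event loop):

```python
import time

FLUSH_INTERVAL = 2.0   # seconds between time-based flushes
FLUSH_THRESHOLD = 210  # bytes: flush as soon as the buffer fills this far


class PeerBuffer:
    """Batch outbound data per peer: transmit every 2 seconds, or
    immediately once 210 bytes accumulate, whichever comes first."""

    def __init__(self, send):
        self.send = send  # callable that transmits bytes over the mesh
        self.buf = b""
        self.last_flush = time.monotonic()

    def queue(self, data: bytes):
        self.buf += data
        if len(self.buf) >= FLUSH_THRESHOLD:
            self.flush()

    def tick(self):
        # Poll this periodically from the event loop.
        if self.buf and time.monotonic() - self.last_flush >= FLUSH_INTERVAL:
            self.flush()

    def flush(self):
        self.send(self.buf)
        self.buf = b""
        self.last_flush = time.monotonic()
```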

The result of this improvement: whilst in lightningtenna it took 1m30s to transmit the update_add_htlc message alone (including the routing onion), followed by a further 5 round trips for the various commitment_signed, revoke_and_ack and update_fulfill_htlc messages, in lnproxy we have been able to reduce the entire payment sequence to 3 round trips, or from a total of 20 mesh messages down to 6. Because these 6 messages are split roughly evenly between the two devices, they do not trigger flow control limits, and payments over a single hop can be made in about 30 seconds.

Our primary focus is on the speed of the payment sequence; channel startup and creation occur relatively infrequently. The lnproxy system can now forward roughly 2 payments per minute between pairs of mesh nodes once a channel and connection are established. This is a huge improvement over lightningtenna and is fast enough to be used in real world payment situations.

Messaging

In our drive towards a Lot49-style system, we have introduced a messaging system that integrates with Lightning, in which Lightning payments to relay nodes can be used to incentivise successful message delivery.

Whilst goTenna Mesh has a native (text) messaging facility (in fact, this is what we are already using to send Lightning traffic), it is not suitable for our messaging system payloads: although goTenna can natively send an E2E encrypted message between devices, the entire message is encrypted, meaning that routing nodes would not be able to access the in-flight payment information. We must instead send an unencrypted goTenna message along with an encrypted message portion.

We opted for a scheme where an E2E encrypted message is appended to the cleartext update_add_htlc message, satisfying the above requirement. When the final receiver decrypts the message, they take the SHA256 hash of the decrypted plaintext to learn the preimage they should use to settle the HTLC; the SHA256 hash of that preimage is the payment_hash of the payment. This means the receiver can decrypt their message and fulfil the HTLC in return for delivery, compensating the routing nodes for their availability and bandwidth.
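
The hash construction is simple enough to show concretely; a sketch, with illustrative names:

```python
import hashlib


def message_keys(plaintext: bytes):
    """Derive the HTLC preimage and payment_hash from a message.
    The sender computes both from the plaintext before encrypting;
    only a recipient who can decrypt the message learns the preimage."""
    preimage = hashlib.sha256(plaintext).digest()
    payment_hash = hashlib.sha256(preimage).digest()
    return preimage, payment_hash
```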

Currently, routing nodes are programmed to each take 1 satoshi as a routing fee, so the sender should include enough fee to cover the maximum number of hops they expect. We are using 10 satoshis (less than $0.001) per message, enough to pay fees for 10 hops. Routing nodes are currently "free" to try to steal fees beyond this 1 satoshi per hop (especially if they know they are the last hop!), but if they are too greedy the "fee pool" could run out before the final hop is reached. In that case they won't get anything, because the message and payment are never routed to the recipient and therefore never settled. This appears to open up griefing attack vectors, but it is not much different to current Lightning hodl invoices or poorly performing nodes: if the route fails, you must wait for the payment to time out and try again.
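
A sketch of the fee arithmetic, from the sender's and a relay's perspective (values as described above, names illustrative):

```python
FEE_PER_HOP_SAT = 1     # fee each routing node is programmed to take
MAX_EXPECTED_HOPS = 10  # the sender's guess at the longest plausible path

# The sender budgets the whole fee pool up front: 10 sats, under $0.001.
total_fee_sat = FEE_PER_HOP_SAT * MAX_EXPECTED_HOPS


def forward_amount(incoming_amount_sat: int) -> int:
    # Each relay deducts its fee before forwarding the HTLC; if the pool
    # is exhausted before the final hop, the payment never settles.
    return incoming_amount_sat - FEE_PER_HOP_SAT
```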

Future

There are a number of improvements planned for future work. First, we plan to implement this protocol on mobile devices. We would then like to introduce the concept of a "Gateway node", which can bridge the gap between the mesh intranet and the wider Lightning network on the internet. This could make it possible, for example, to broadcast a message anywhere in the world by sending it to the Blockstream Transmission service with a sufficient Lightning relay fee.

We believe there is room to further reduce communication round trips, both for Lightning in its current form and following an eltoo-type upgrade to Lightning, once the noinput / anyprevout soft fork is adopted by Bitcoin. In a post-eltoo world it appears possible to reduce payment updates for our mesh system to 0.5 round trips in total: both a message and its HTLC payment can be sent together, with no response required from the next relay node. Likewise, payment fulfillment can be performed with a single message, for a total of 1 round trip to send an incentivized message and receive delivery confirmation.

There may be some ways to achieve a similar round trip profile with Lightning's current HTLC constructions, if you are prepared to sacrifice some security. Whilst Bitcoin and Lightning pride themselves on being secure, this trade-off is worth investigating in the context of micropayments that fall below Bitcoin's current on-chain dust limit.

Conclusions

With some small changes to C-Lightning and the addition of the lnproxy plugin, we can send encrypted messages over the goTenna mesh network and incentivize their delivery with a Lightning-based micropayment. This iteration improves on the lightningtenna design by allowing more than 2 nodes to participate, checking off many low-hanging efficiency improvements and reducing payment sequence times from minutes down to seconds.

We look forward to working with the Lightning community to continue to refine this project and adapt it for different low-bandwidth communication systems. It should be straightforward to generalize LNTenna to work with different mesh radios, amateur radio digital modes and portable satellite uplinks.

With this iteration of the project all of the pieces are in place to begin testing the world’s first mesh communication network incentivized by decentralized Lightning payments.

The code for this project can be found at https://github.com/willcl-ark/lnproxy/tree/gotenna
