[Proposal: XX] Audit for Staking Rewards Insurance Cover Contract

Yes, it is already exposed in the Solidity interface.
You have:

function isCandidate(address candidate) external view returns (bool);
function isSelectedCandidate(address candidate) external view returns (bool);
function points(uint256 round) external view returns (uint256);

But I would still consider adding it directly as a feature of the pallet, as that provides many benefits for developing, auditing, and optimizing the solution; still, it is up to you and the community.

These methods are already available but they don’t help much.

isCandidate(address candidate) → only tells us if it is an active candidate, so it does not catch when a candidate signs zero blocks in a round while being active
points(uint256 round) → returns the points of the current round for all collators, whereas we need the points of the last round for each collator

Even if we had the above, we would still need all delegators per collator, their delegations, and their requests. Note that this is a lot to ask of another contract in a single SC transaction, so the logic would have to be split among several calls and gas would go up. Still, this would be a fair price to pay to avoid oracles (I think).

The oracle the contract uses is based on the Lido oracles, with some enhancements:

  • Oracle membership is determined by the active set itself, which is more secure
  • StakeBaby can veto a report
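
For illustration, here is a rough sketch of how that gating could look in Solidity. The report payload and quorum logic are placeholders, the precompile address should be verified against the docs, and only isSelectedCandidate comes from the interface quoted above; everything else is an assumption for the example:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface ParachainStaking {
    function isSelectedCandidate(address candidate) external view returns (bool);
}

contract OracleGatedCover {
    // Parachain staking precompile address (verify against the Moonbeam docs).
    ParachainStaking constant STAKING =
        ParachainStaking(0x0000000000000000000000000000000000000800);

    address public immutable manager;      // e.g. StakeBaby, may veto reports
    mapping(uint256 => bool) public vetoed; // round => report vetoed

    constructor(address _manager) {
        manager = _manager;
    }

    // Only members of the current active set may submit a report.
    function reportMissedRound(uint256 round, address collator) external {
        require(STAKING.isSelectedCandidate(msg.sender), "not an active collator");
        require(!vetoed[round], "report vetoed");
        // ... accumulate the report, check quorum, then credit cover balances ...
    }

    function vetoRound(uint256 round) external {
        require(msg.sender == manager, "only manager");
        vetoed[round] = true;
    }
}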

That is not totally correct.
points allows you to retrieve the previous round's points too (that is why it takes the round number as an argument).

Here is how I see the logic working:

  1. Collators register on the contract and store GLMR on it
  2. At every round, a call to the contract is made for each registered collator to get its awarded points, by calling points on the previous round.
  3. If the awarded points are under a certain threshold (e.g. under “(total points / total collators) - 10%”), then retrieve all delegators and compensate for the missed blocks (this accounts for being out of the top selected set or for being a bad node and not producing enough blocks). The threshold should ideally use an average of multiple rounds, as the randomness of the author selection makes it possible for a collator to perform more or less in a given round. (You can also set the threshold to just 1 point, meaning it applies only if the collator produces no blocks at all.) A rough sketch of this check follows below.

(retrieving all (or only the top) delegators for a collator is not possible through precompiles yet, I’ll see if we can add it for RT2100. But as you suggested, you can provide a function which takes a batch of delegators and verifies they belong to the collator (and didn’t receive compensation yet for the given round))

(This logic would be a lot easier/cheaper to implement in the pallet directly :stuck_out_tongue: )
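
To make steps 2 and 3 above concrete, here is a rough sketch of the round check in Solidity. It assumes a hypothetical awardedPoints(round, collator) accessor (as discussed in the replies below, the existing points method is not per collator yet) and simplifies the threshold to a single-round average:

pragma solidity ^0.8.0;

// Hypothetical per-collator accessor: the current points(round) method is
// cumulative across collators (see the replies below), so awardedPoints is
// an assumption for this sketch.
interface StakingPoints {
    function points(uint256 round) external view returns (uint256);
    function awardedPoints(uint256 round, address collator) external view returns (uint256); // assumed
}

contract RoundChecker {
    StakingPoints public staking;
    address[] public registeredCollators;

    // Steps 2 and 3: called once per round with the previous round number.
    function checkRound(uint256 previousRound) external {
        uint256 count = registeredCollators.length;
        require(count > 0, "no registered collators");

        uint256 average = staking.points(previousRound) / count;
        uint256 threshold = average - average / 10; // example: 10% under the average

        for (uint256 i = 0; i < count; i++) {
            address collator = registeredCollators[i];
            if (staking.awardedPoints(previousRound, collator) < threshold) {
                _creditDelegatorsOf(collator, previousRound);
            }
        }
    }

    // Delegator enumeration is not available through precompiles yet, so in
    // practice the delegator list would be supplied in verified batches.
    function _creditDelegatorsOf(address collator, uint256 round) internal {}
}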

My bad, I meant to say that awardedPoints does not provide the points for an arbitrary round (it actually returns zero if the requested round is 2 rounds ago or older). This is manageable though.

The problem is that it does not return points per collator, i.e. it does not take the collator address as an argument.

I think your assumption is that it returns the points of the caller. Even if it did work that way, it wouldn’t work because

  • the caller is the contract, not the caller of the contract
  • we cannot really ask collators to sign anything as their keys should be in cold storage
  • if the points report needs to be signed by the collator it reports on, then a collator could avoid paying cover by not calling the method for that round

Regarding delegators - indeed we can provide that as an argument (I keep forgetting this pattern, lol). However, there is no method to check if a delegator delegates to a collator. This would be a very useful addition for sure. A hacky way of doing it is to try to bond more and catch the error, but this requires that the contract is the delegator!
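
To illustrate the hack (not something to rely on in production): the contract, acting as the delegator itself, calls delegatorBondMore and treats a revert as “no delegation”. The exact method name and the precompile address are assumptions here and depend on the runtime/interface version:

pragma solidity ^0.8.0;

// Assumed method name; older interface versions use a snake_case variant.
interface ParachainStaking {
    function delegatorBondMore(address candidate, uint256 more) external;
}

contract DelegationProbe {
    // Parachain staking precompile address (verify against the docs).
    ParachainStaking constant STAKING =
        ParachainStaking(0x0000000000000000000000000000000000000800);

    // Returns true if this contract itself already delegates to `candidate`.
    // Side effect: it actually bonds 1 extra wei when the delegation exists,
    // which is why this is a hack rather than a proper accessor.
    function hasDelegationWith(address candidate) external returns (bool) {
        try STAKING.delegatorBondMore(candidate, 1) {
            return true;  // the call succeeded, so a delegation exists
        } catch {
            return false; // bond_more reverted: no delegation with this candidate
        }
    }
}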

Yes, sorry, points accumulates all the awarded points. We are going to add the awarded points per collator.

To know if a delegator is in the top list (the one supposed to receive rewards) of a collator, you can use isInTopDelegators, which takes a “delegator” and a “collator” address. (This is in RT2100, so it is not accessible currently on our networks.)

The payout however will be difficult, because you need to maintain the list of delegators for each collator and each round. We do that in the staking pallet (to distribute the rewards), and the size of the data has brought a few challenges. It would have to be done over multiple blocks if you use a smart contract.

Regarding points and isInTopDelegators - awesome, this would be very useful when available.

The payout is not an issue in the SC. The claimable cover balance accumulates for every delegator and they can withdraw it themselves, i.e. there are no ERC20 transfers when a collator misses a round, only debits/credits in the relevant internal uint256s. There is a test case that pays 700 delegators, so we are well above the 300 required.
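
For clarity, this is roughly the pattern, with illustrative names rather than the actual contract code:

pragma solidity ^0.8.0;

contract CoverLedger {
    // delegator => claimable cover in wei. When a collator misses a round only
    // these counters change; no transfers happen at that point.
    mapping(address => uint256) public claimable;

    function _creditCover(address delegator, uint256 amount) internal {
        claimable[delegator] += amount;
    }

    // Each delegator pulls their accumulated cover whenever they like.
    function withdrawCover() external {
        uint256 amount = claimable[msg.sender];
        require(amount > 0, "nothing to withdraw");
        claimable[msg.sender] = 0; // effects before interaction
        (bool ok, ) = payable(msg.sender).call{value: amount}("");
        require(ok, "transfer failed");
    }

    // Cover deposits arrive as plain GLMR transfers.
    receive() external payable {}
}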

The last bits I see missing from the precompile are

  • how much a delegator is delegating to a collator (not covered by isInTopDelegators)
  • how much the undelegation request is for, if any

Any plans to add these in the staking precompile?
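
For reference, hypothetical signatures for those two accessors could look like this (nothing below exists in the precompile today):

pragma solidity ^0.8.0;

// Hypothetical signatures only, named for this discussion.
interface ParachainStakingExtras {
    // How much `delegator` is currently delegating to `candidate` (0 if none).
    function delegationAmount(address delegator, address candidate) external view returns (uint256);

    // Amount of the pending undelegation/revoke request, if any (0 if none).
    function delegationRequestAmount(address delegator, address candidate) external view returns (uint256);
}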

To make it clear to others reading this (it was brought to my attention that it is not), the current version of the SC is 100% functional without requiring these methods. The discussion above has been about making it better with methods that don’t exist yet.

I like the idea of including the code in the pallet and getting it audited together with the other runtime changes at no cost to us. That’s actually a brilliant way forward, and we can differentiate Moonbeam further from other EVM chains by including these mechanisms at the protocol level. We also wouldn’t rely on certain methods being exposed via precompile, since the required getters can be added directly in the runtime codebase where needed, without changing a public interface that should stay as stable as possible so as not to introduce version dependencies in third-party apps.

I’d be honored to see this idea moving on to protocol level.

I’d be sad to hear that the grant was rejected on that basis. As a matter of principle, a grantee should not be denied a grant by a grantor because the grantor decided to implement the idea themselves.

I’ve discussed with the team and implementing it in the pallet would be a far better option yes.

Purestake is probably not going to implement it, but will definitely help. The initial developers of this project should be the ones providing a PR to include in the pallet. The amount of code needed to support the idea is small (mostly because the logic for payouts, who has been collating, storing round information and … is already implemented). However, our standards require a significant amount of documentation and tests associated with changes.

Concerning the grant, because Purestake would take charge of the audit for the pallet changes, I don’t see a reason to keep it.
However, if the cost of developing the solution is significant on your side, you can probably repurpose the grant to cover that part, I suppose.

Thank you for your clear answer, Alan. Unfortunately, we don’t have the expertise currently to take on the pallet PR.

We put about 480 man-hours into this project. That would be a considerable sunk cost for us. Therefore, we will go ahead and launch it. Our collator friends have promised to help us financially with the audit.

Also, when the PureStake solution rolls out, we will include the pallet insurance data at stakeglmr.com and stakemovr along with our contract data. If at some point our contract no longer makes sense due to the protocol-level coverage, we will phase it out.

I do say this with a pinch of salt, but I understand that business is business, and we are keen to continue innovating on Moonbeam grounds. In fact, this is only the beginning.


I understand your point of view, but let me first repeat that Purestake currently has no intention of implementing it. This could change if the community considers it an important feature and requests it (which is not the case at the moment).

The Moonbeam network is open source and, while the majority of the development is done by Purestake, other people can (and some already did) contribute to it. Purestake encourages others to do so and is happy to offer help/advice and to include such contributions in our current audit process to ensure quality and safety.

However, I believe that, even if a good amount of work has been done on the solution you propose, it is in the interest of the network and the community to pursue the solution that offers better sustainability and performance.

If you still want to go on with your solution, we will try to help by providing the missing precompile access, but we already know it won’t be easy. Providing accessors to all the delegators, for example, generates a high PoV, which is problematic for the network, so we will have to think of a balanced solution.


I appreciate that.

Most things done at the protocol level would be more performant. The question, then, is not whether we can make it faster, but whether the chain should bother checking for these conditions on every single block. Insurance events are rare and sporadic. That might mean the checking logic is better placed off-chain.

A few things you need to consider for a protocol solution:

Distributing insurance claims to 300 delegators is as heavy as rewards distribution. The selectedCandidates number is bounded but the candidates number is not. So, an edge case is having a huge number of insured candidates getting kicked out.

You must handle scheduling insurance deposit withdrawals (by collators) separately, similarly to how you handle undelegations. You cannot simply use a collator’s reducible balance to cover insurance claims. Insurance has meaning only when it has a duration. If the normal balance is used to cover insurance claims, and it is not locked, then the cover has zero duration because the collator can withdraw their deposit and stop covering claims at any time.

Moreover, the “whenExecutable” for insurance deposit withdrawals needs to be computed based on the total delegations of the collator, and the insurance deposit amount. Intuitively speaking, a bigger deposit covers a longer period and must have a longer withdrawal delay.
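
Purely as an illustration, one way to express that relationship; the names, the per-round cover estimate, and the clamping policy are all made up for the example:

pragma solidity ^0.8.0;

library CoverWithdrawal {
    // Illustration only: the delay grows with how many rounds of cover the
    // deposit represents, clamped between a minimum and a maximum.
    function withdrawalDelayRounds(
        uint256 deposit,
        uint256 coverPerRound, // approx. payout owed to delegators for one missed round
        uint256 minDelay,
        uint256 maxDelay
    ) internal pure returns (uint256) {
        if (coverPerRound == 0) return minDelay;
        uint256 roundsCovered = deposit / coverPerRound;
        if (roundsCovered < minDelay) return minDelay;
        if (roundsCovered > maxDelay) return maxDelay;
        return roundsCovered;
    }
}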

In summary, I think this will result in a fair amount of code and extra work for the chain to do. I am definitely not versed in Substrate though, so I could be dead wrong.


This is not how it works. If you look at the staking pallet, you will see that similar events (like the delegation snapshot, the round rotation, or payouts) are only checked once during the whole round. The team put a lot of effort into ensuring efficiency (for example, making sure we use as little block space in a round as possible) and stability (for example, paying out all the delegators in a single block would reduce the stability of block production or could even stall the network, which is why we split it into one payout per block per collator).

This is true and is why it would be better to include it in the staking logic. It would be included in the reward payout logic, which already reads the delegators list and already performs a transfer to each delegator, so adding an insurance payout to it would not increase the PoV size (FYI: the PoV size is mostly impacted by how much data you read/write that wasn’t already read/written in the block), nor significantly increase the computation time.

You are correct, the insurance should be similar to the “collator bond” in the sense that it would be a deposit (so you can’t withdraw or use it). It could have a delay for withdrawal, similar to the system already in place for staking (e.g. you need to wait X rounds before your request to unstake is enacted), or it could be permanent and unreserved when the candidate leaves.

In my opinion, the staking insurance mechanism and payout are very similar to the current staking reward payout. Both pay the delegators based on the number of blocks produced by a collator (rewards if blocks are produced, insurance if none). Both need to be executed at the same time (during the following rounds). This justifies spending a bit more time and including it in the staking pallet, I believe :slight_smile:


Just wanted to add that a

delegatedAmount(delegator, collator)

method would be extremely useful, not only for the insurance contract.

We are currently working on a pool, and one issue we have is that delegations can be canceled (kicked from the top 350, collator deregisters, etc.). When this happens, the balance is returned to the contract, but the contract has no idea where it came from, which results in double counting. There are ways to reconcile the balances, but they are not ideal.

The above method would also allow SCs to check the compounded returns, which is impossible now.
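
As an illustration of why: with such a getter the pool could compare the on-chain delegation against the principal it tracks itself and attribute the difference to compounding. delegatedAmount here is the hypothetical method suggested above, not an existing one:

pragma solidity ^0.8.0;

// Hypothetical accessor discussed above; not available in the precompile yet.
interface ParachainStakingView {
    function delegatedAmount(address delegator, address candidate) external view returns (uint256);
}

contract PoolAccounting {
    ParachainStakingView public staking;

    // collator => GLMR principal the pool itself has delegated
    mapping(address => uint256) public principal;

    // Compounded returns = what the chain reports for our delegation minus the
    // principal we know we sent. A drop to zero on-chain also reveals a revoked
    // delegation, so the pool can reconcile instead of double counting.
    function compoundedReturns(address collator) external view returns (uint256) {
        uint256 onChain = staking.delegatedAmount(address(this), collator);
        uint256 sent = principal[collator];
        return onChain > sent ? onChain - sent : 0;
    }
}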

Yes, the delegated amount is in the list of methods we want to add. Unfortunately because it requires access to multiple storage items, we can’t simply provide it as it is, so it is likely to be in the next RT (RT2200).

Concerning the pool, there is a nomination pool pallet already implemented in Substrate, which we could simply add to Moonbeam if there is a need for it. IIRC, Polkadot just recently added it to their networks to support more nominations.


I’m confused - does this mean the treasury funds are no longer needed?

I thought the grant was denied. Am I wrong?

Our collator friends decided to step in and help to audit the smart contract because it will help us compete with the whales and provide a service to our delegators. We were able to come up with nearly $5K which is not enough for a full audit, but better than nothing.

If it’s of any help, I have experience in static analysis & exploitation using both automated tools and code review, along with experience developing smart contracts.

We have released the Rewards Cover contract on Moonbeam.

You can see the covered collators at stakeglmr.com and manage cover at
https://stakeglmr.com/dashboard/dash-cover
https://stakeglmr.com/dashboard/dash-cover-members

We chose to release on Moonbeam first to enable community collators and delegator-backed collators to differentiate in the coming active set shuffling. The contract will still make use of the new methods in the staking precompile, when the 2100 runtime is activated.

StakeBaby has no source of Moonbeam income other than the collator we run. We have used all the funds we’ve made with the collator to build stuff for the network and pay for ongoing cloud costs. We appreciate all the support we can get to remain in the active set in these difficult times :slight_smile:

