
Breaking Fluidity for glory and $50K

Today we'll review a bug discovered at the end of last year. I'll cover it from an educational perspective, so that readers can feel they too could have found it, and hopefully spark some motivation.


 

Intro


Fluidity is a new type of crypto product that, simply put, aims to incentivize spending money. Stablecoins are deposited into a contract such as fUSDC (fluid USDC), which mints the sender an equal amount of the wrapped token. The underlying funds are supplied to lending protocols such as Aave or Compound to accrue yield. This yield is distributed to the winners of an off-chain raffle. To get raffle tickets, you simply transfer money (a winning sender receives 80% of the prize, the receiver 20%). There are some nuances regarding how reward sizes are determined and how they deter abuse from circulating funds around, but that's out of scope for this report.


Now that we have an overview, let's get a picture of the contracts. The on-chain component is actually very simple, composed of the following contracts:


Token.sol

The main contract, an ERC20 token. Allows users to mint fUSDC or burn fUSDC to receive USDC. Supports emergency shutdown and owner-defined settings such as mint limits. Has two functions for the distribution of rewards which will be discussed.

WorkerConfig.sol

CompoundLiquidityProvider.sol

AaveV3LiquidityProvider.sol

Dispatching rewards


There are two methods for dispatching rewards to users. The first is batchReward(). It's called by the reward oracle and takes three parameters:

  • Winner[] rewards - list of addresses and winning amounts.

  • uint firstBlock - the rewards cover a time segment starting at this block.

  • uint lastBlock - the rewards cover a time segment ending at this block.

The function is essentially a loop that mints rewards to winning addresses. It marks in storage all blocks that have been part of this reward batch, never to be used again.


The second method is manualReward(). The Fluidity architecture batches a large number of rewards to be sent together using the previous method. It may take time until that batch is sent, so users have the option of calling manualReward() to receive their reward before the batch transaction lands. For the call to be accepted, they must supply a signature, provided by the oracle, proving the claimed reward is legitimate (obtainable via the front end).


function manualReward(
    address contractAddress,
    uint256 chainid,
    address winner,
    uint256 winAmount,
    uint firstBlock,
    uint lastBlock,
    bytes memory sig
) external {

Strategy outline


Great, so we have a decent understanding of the protocol's operation, which is a prerequisite for figuring out how things can go wrong. There are two good ways to move forward from here:

  1. Line-by-line inspection of all the code in scope, achieving a deep level of understanding.

  2. Formulating ideas about all the scenarios the protocol needs to handle correctly and validating that it does.


I preach that code-diving is always worth it:

  1. Understanding the code better unlocks additional ways to challenge its correctness

  2. Even if you find 0 bugs, you're guaranteed to achieve two things:

  • Familiarizing yourself with yet another code base

  • Boosting your code-diving skills, making the next dive a tad faster


At this point, I did a top-to-bottom review of the contracts. Unfortunately, almost all of the code was very simple and linear. The main catalyst of bugs is code complexity, so seeing this was demotivating. However, simple code may foreshadow that not all edge cases have been taken into account.


The elephant in the room


I have talked before about the invisible state machine. Mathematically, all contracts can be reduced to something of this sort: picture, for example, a simple state machine describing the functioning of a calculator. Our job as auditors or bounty hunters can be reduced to:

  1. Mapping (mentally or visually) the target codebase to a state machine in a way that loses as little precision as possible.

  2. Finding an arrow, or in other words a list of inputs to the machine, that leads to a state in which the desired behavior does not match the actual behavior.

In fact, hunters follow this process all the time without being conscious of it. However, every now and then it pays to take a step back and be fully aware of the invisible puzzle.


Uncovering hidden complexity


Enough rambling; back to Fluidity. During the codebase dive, I picked up on a detail: the protocol accounts for the possibility of multiple batchReward() invocations arriving out of order:

// this might not happen if our transactions go through out of order
if (lastBlock > lastRewardedBlock_) lastRewardedBlock_ = lastBlock;

In this line in batchReward(), lastRewardedBlock_ is only updated if it's smaller than the batch's lastBlock. In manualReward(), lastRewardedBlock_ must be lower than all the blocks being rewarded. If the variable were set unconditionally, and rewards E-H were executed before A-D, lastRewardedBlock_ would be lowered and would no longer account for E-H. Users could then get double their rewards (batchReward() + a subsequent manualReward()). However, the developers thought of that case, so all is good.
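To make the guard concrete, here's a minimal Python sketch. This is not the Fluidity code; the variable and function names merely mirror the contract's storage, and the block numbers are made up:

```python
# Minimal sketch (not the Fluidity code): models the conditional update
# of lastRewardedBlock_ when reward batches land out of order.

last_rewarded_block = 0

def batch_reward(first_block: int, last_block: int) -> None:
    # Mirrors: if (lastBlock > lastRewardedBlock_) lastRewardedBlock_ = lastBlock;
    global last_rewarded_block
    if last_block > last_rewarded_block:
        last_rewarded_block = last_block

batch_reward(5, 8)  # batch E-H lands first
batch_reward(1, 4)  # batch A-D arrives late: the guard keeps the watermark at 8
assert last_rewarded_block == 8  # an unconditional write would have lowered it to 4
```

With the watermark intact, a later manualReward() for any block up to 8 still fails the firstBlock > lastRewardedBlock_ check, so the late batch can't open a double-claim window.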


This explains why the state transition is handled correctly for

  1. batchReward()

  2. manualReward()

It's now time to explain how it's handled for:

  1. manualReward()

  2. batchReward()

And also:

  1. manualReward()

  2. manualReward()


The relevant part in manualReward():

// user decided to frontrun
require(
    firstBlock > lastRewardedBlock_,
    "reward already given for part of this range"
);
for (uint i = firstBlock; i <= lastBlock; i++) {
    require(manualRewardedBlocks_[winner][i] == 0, "reward already given for part of this range");
    manualRewardedBlocks_[winner][i] = BLOCK_REWARDED;
}
manualRewardDebt_[winner] += winAmount;

Firstly, we make sure another manualReward() for the same block range would fail by requiring the manualRewardedBlocks_ mapping to be previously 0, and now setting it to BLOCK_REWARDED.


Next, we make sure a future batchReward() will not distribute the rewards to the same user again, by increasing manualRewardDebt_[winner] by the winAmount.


We can see how this is used in batchReward():

for (uint i = 0; i < rewards.length; i++) {
    Winner memory winner = rewards[i];
    if (manualRewardDebt_[winner.winner] != 0) {
        winner.amount -= manualRewardDebt_[winner.winner];
        manualRewardDebt_[winner.winner] = 0;
    }
    require(poolAmount >= winner.amount, "reward pool empty");
    poolAmount = poolAmount - winner.amount;
    rewardFromPool(firstBlock, lastBlock, winner.winner, winner.amount);
}

If it discovers manualRewardDebt_ is not zero for the reward it needs to send out, it subtracts it from the winnings. For example, if the user retrieved a manual reward for A-D, and then A-D is processed for everyone, winner.amount would get zeroed out and no money is sent.
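This happy-path bookkeeping can be modeled with a small Python sketch; the dictionaries, the helper names, and the winner address below are illustrative, not the contract's actual API:

```python
# Illustrative Python model of the debt bookkeeping (names mirror the
# Solidity storage; amounts and the winner address are made up).

manual_reward_debt = {}  # winner -> amount already claimed via manualReward()
minted = {}              # winner -> total tokens the winner has received

def manual_reward(winner: str, amount: int) -> None:
    # User claims early: pay out now, record the amount as debt.
    manual_reward_debt[winner] = manual_reward_debt.get(winner, 0) + amount
    minted[winner] = minted.get(winner, 0) + amount

def batch_reward(winner: str, amount: int) -> None:
    # Oracle batch arrives later: subtract any debt before paying.
    debt = manual_reward_debt.pop(winner, 0)
    amount -= debt  # on-chain this is checked arithmetic
    minted[winner] = minted.get(winner, 0) + amount

manual_reward("alice", 100)  # Alice claims her A-D reward early
batch_reward("alice", 100)   # A-D batch processed: her payout is zeroed out
assert minted["alice"] == 100  # paid exactly once
```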


It seems like the reward functions are structured to be safe in any cross-operation. However, the point mentioned about batched rewards arriving out of order made me realize the state machine is actually more complicated than meets the eye. At any point, there may be several batchReward() transactions in the mempool, for different block ranges. It follows that users can claim rewards manually for multiple block ranges. This means the protocol must ensure the state handles these functions correctly, no matter their order:

  1. batchReward(A-D)

  2. batchReward(E-H)

  3. manualReward(A-D)

  4. manualReward(E-H)

This is actually significantly more complex! We've seen that the protocol maintains the invariant that a user cannot be manually rewarded for blocks earlier than the latest batch reward, so many of these sequences are protected. Two other sequences reduce to linear transitions, which are handled well:


  1. batchReward(A-D)

  2. manualReward(A-D)

  3. batchReward(E-H)

  4. manualReward(E-H)

And:

  1. batchReward(E-H)

  2. manualReward(E-H)

  3. batchReward(A-D)

  4. manualReward(A-D)


However, I found that some sequences did introduce additional complexity. For example:

  1. manualReward(A-D)

  2. batchReward(E-H)

  3. manualReward(E-H)

  4. batchReward(A-D)

Let's review the key protection again:

if (manualRewardDebt_[winner.winner] != 0) {
    winner.amount -= manualRewardDebt_[winner.winner];
    manualRewardDebt_[winner.winner] = 0;
}

Imagine the same user was awarded $X in blocks A-D, and $Y in blocks E-H. manualRewardDebt_ would increase by $X in transaction (1) and then be subtracted from $Y in transaction (2). This is actually dangerous: if $Y < $X, transaction (2) would underflow, reverting the entire batch. In (3), manualRewardDebt_ would increase by $Y (to $X+$Y) and the protocol would send the user's E-H reward. In (4), it would attempt to subtract $X+$Y from $X, guaranteeing an underflow. Essentially, we've broken the state machine, which should serve all these requests without a problem.
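Here's a minimal Python sketch of that fatal sequence, with Solidity 0.8-style checked subtraction modeled as an exception. The amounts X = 100 and Y = 60 are made up for illustration:

```python
# Sketch of the fatal sequence. Checked subtraction (Solidity 0.8) is
# modeled as a "revert" exception; X and Y are illustrative, with Y < X.

class Reverted(Exception):
    pass

debt = 0  # manualRewardDebt_[winner]

def manual_reward(amount: int) -> None:
    global debt
    debt += amount

def batch_reward(amount: int) -> None:
    global debt
    if debt > amount:
        raise Reverted("winner.amount -= debt underflows; whole batch reverts")
    debt = 0

X, Y = 100, 60
manual_reward(X)      # (1) manualReward(A-D): debt = X
try:
    batch_reward(Y)   # (2) batchReward(E-H): Y - X underflows, batch reverts
except Reverted:
    pass
manual_reward(Y)      # (3) manualReward(E-H): debt = X + Y
try:
    batch_reward(X)   # (4) batchReward(A-D): X - (X + Y) underflows too
except Reverted:
    pass
assert debt == X + Y  # both batches now revert forever: rewards are frozen
```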


Deducing impact


The next stage is uncovering the impact of this sequence of transactions. The situation is recoverable as long as there's no more than one manualReward() that has not been paired with a batchReward(). For example, in the ordering (1), (2), (4), (3): although (2) would revert, (4) would zero out manualRewardDebt_, (3) would then raise it by $Y, and (2) could be resubmitted to complete the E-H batch. Essentially, we've avoided ever reaching the fatal state where the debt is $X+$Y.
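A similar minimal Python sketch (checked subtraction modeled as an exception, illustrative amounts) shows why this ordering recovers:

```python
# Sketch of the recoverable ordering (1), (2), (4), (3), then (2) retried.
# Checked subtraction is modeled as an exception; amounts are illustrative.

class Reverted(Exception):
    pass

debt = 0  # manualRewardDebt_[winner]

def manual_reward(amount: int) -> None:
    global debt
    debt += amount

def batch_reward(amount: int) -> None:
    global debt
    if debt > amount:
        raise Reverted("checked subtraction underflows")
    debt = 0

X, Y = 100, 60
manual_reward(X)      # (1) manualReward(A-D): debt = X
try:
    batch_reward(Y)   # (2) batchReward(E-H): reverts since Y < X
except Reverted:
    pass
batch_reward(X)       # (4) batchReward(A-D): zeroes out the debt
manual_reward(Y)      # (3) manualReward(E-H): debt = Y
batch_reward(Y)       # (2) resubmitted: succeeds this time
assert debt == 0      # never reached the fatal X + Y state
```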


Now that we've established the root cause and the conditions which break the reward mechanism, it's possible to specify the attack vector:


Provided that some user has received rewards in more than one batch, AND that more than one of those batches has not yet been processed via batchReward(), an attacker could permanently lock the rewards of everyone in those batches except that common user, by frontrunning the batches with the corresponding manualReward() calls.


Bounty scope


Freezing of batched rewards was, at the time of finding, a HIGH severity impact in the scope table. Let's evaluate the conditions:

  1. More than one pending batchReward() - we have evidence that the protocol has taken this possibility into consideration, therefore it is safely in scope.

  2. Reward address collision across more than one batch - seems extremely likely over a long period of time. Also, the project was still in its infancy, so there were not many users for the raffle to pick from in the first place.

This gave me confidence that the report was eligible for HIGH impact. Importantly, the bounty specified $50K as the reward rather than "up to $50K", meaning the project could not deduct from the reward for low likelihood.


POC


I wanted to POC this attack in Tenderly, as usual; it would have taken 20 minutes. Unfortunately, the Fluidity contracts are deployed behind BeaconProxy proxies, which Tenderly doesn't know how to work with (ABI deduction and debugging).


The next resort was the built-in tests written in Hardhat, but that was a nightmare because none of the setups worked properly.


This basically forced me to write a Foundry test suite for Fluidity from scratch, which took about 3 hours, mostly spent configuring all the deployment addresses and connecting the contracts together. After getting a setup working, I wrote 3 tests of increasing complexity to make sure the setup was working as intended. Only after validating that these tests passed did I finally write the attack-flow test. Doing it this way is important for two reasons:

  1. When the attack test works, you have the confidence it's not a false positive due to a bad setup

  2. When the attack test fails, it's easy to trace what is going wrong based on the previous tests.


Project Response


I have to commend Fluidity for the quick and excellent response. The report was submitted on 29/12 and by 30/12 it was already confirmed by the team. They did not try to argue or downgrade the finding at all, which is quite refreshing :)


The only issue was that they cited a cash-flow shortage and wanted to release the payout over time, despite this not being mentioned in their BBP. Because they were great sports and appreciated the finding, I was happy to compromise, and we went with the following payout schedule:

  1. Jan - $30K USDC

  2. Feb - $5K USDC

  3. March - $5K USDC

  4. April - $5K USDC

  5. May - $5K USDC

With an additional $10K in FLUID comp:

  1. April - $4K FLUID

  2. July - $3K FLUID

  3. Oct - $3K FLUID


Final words


This turned out to be quite long, but hopefully I was able to drive home the point about uncovering hidden complexity in the state machine. Personally, finding this kind of issue in such a simple code base is great motivation: bugs always have been, and always will be, around for us to find, if only we look hard enough.




