Rarely does a day go by without at least one hack affecting the web3 ecosystem. In fact, we've become so accustomed to losses of six figures and up that five-figure losses barely make it to news feeds. This reality is unacceptable and unsustainable for an industry that aims to replace the banking system.
This article aims to take a broad look at the lines of defense available for projects and how they can be optimized to reduce security risks to manageable levels. To make things interesting, we will assume a successful attack has taken place and work backward to identify all security boundaries crossed.
For an attacker, a successful operation has to end with the laundering of the stolen funds. To this end, an attacker will swap into native tokens like ETH to avoid blocklisting, then mix them through on-chain or off-chain mixers. To avoid being tracked, they will usually fund the attack through mixed tokens or highly anonymized wallets (cash-bought). Top blockchain forensics firms and law enforcement may be able to de-anonymize entry or exit wallets. It is important to set up communication channels ahead of time and get a head start when the inevitable manhunt begins.
A sophisticated negotiation plan needs to be put in place to prepare for all eventualities. A good strategy must take into consideration:
A strong monetary incentive to return the funds or to report the perpetrator.
Psychological manipulation of the attacker through carefully constructed clues and/or bluffs.
Setting up OpSec traps for the hacker to trip over.
Established professionals in the negotiation phase can guide projects through these stages. As always, preparation is the key to having a critical time advantage.
At any point, a smart contract can theoretically lose the value it holds plus all funds approved to it. But what if every interaction verified that outflow does not exceed a set threshold, irrespective of the app-specific logic? Limited-loss schemes are incredibly powerful because, ideally, they are independent of the rest of the functionality. This means that in order to drain a smart contract, an attacker must discover unrelated bugs in both the application logic and the circuit-breaker logic, multiplying the odds against success.
Of course, such mechanisms need to be designed in tandem with decentralization principles.
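The threshold idea above can be sketched as a simple rate limiter. Here is a hypothetical Python model (the `CircuitBreaker` name and window parameters are invented for illustration, not any specific protocol's design): each outflow is checked against a per-period cap that knows nothing about the application logic.

```python
class CircuitBreaker:
    """Hypothetical limited-loss scheme: caps total outflow per time window,
    independently of the app-specific logic it protects."""

    def __init__(self, max_outflow_per_period: int, period: int):
        self.max_outflow = max_outflow_per_period
        self.period = period
        self.window_start = 0
        self.outflow_in_window = 0

    def check_outflow(self, amount: int, now: int) -> bool:
        # Start a fresh accounting window once the period has elapsed
        if now - self.window_start >= self.period:
            self.window_start = now
            self.outflow_in_window = 0
        # Trip the breaker: reject (or queue) any outflow that would breach the cap
        if self.outflow_in_window + amount > self.max_outflow:
            return False
        self.outflow_in_window += amount
        return True
```

A real implementation would live on-chain as a modifier or hook on every value-moving function, but the key property is visible even in this sketch: the check shares no state or assumptions with the business logic, so an application bug alone is not enough to drain the contract.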
The generalized MEV ecosystem has seen meteoric advancements in the last 2 years. Advanced white-hat bots continually monitor the mempool and in many cases are able to frontrun attacks and secure the funds. Some white-hat firms customize such bots to improve the detection of attacks on the client's infrastructure. They can also become an off-chain circuit breaker if they are given permission to put the system in lockdown once an attack has been detected.
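The off-chain circuit breaker described above can be approximated by a watchdog that tracks the protocol's balance and flags a lockdown when it drops too fast between observations. A minimal, hypothetical sketch (the `Watchdog` class and its threshold are invented for illustration; a real bot would read balances from a node, inspect the mempool, and submit a pause transaction):

```python
class Watchdog:
    """Hypothetical off-chain circuit breaker: flags a lockdown when the
    observed balance drops by more than max_drop_pct between checks."""

    def __init__(self, max_drop_pct: float):
        self.max_drop_pct = max_drop_pct
        self.prev_balance = None
        self.locked = False

    def observe(self, balance: float) -> bool:
        # A sudden drop beyond the tolerated percentage looks like an exploit
        if self.prev_balance and balance < self.prev_balance * (1 - self.max_drop_pct):
            self.locked = True  # in practice: submit the system's pause() tx
        self.prev_balance = balance
        return self.locked
```

Once `locked` is set it stays set, mirroring the real-world flow where a human team reviews the incident before resuming operations.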
Insurance is a bit of a curveball, as it doesn't actually thwart an attacker in any way, but it still somewhat alleviates the monetary loss for the project. Provided the premium is affordable, it is a great way to buy a good night's sleep for the development team. Note that taking all the other precautions discussed here will dramatically reduce liability costs and make insurers more willing to underwrite the risk.
Bug bounties achieve several goals for a project:
They represent probabilistic around-the-clock coverage for severe bugs, serviced by motivated white-hats who spend time on the project's codebase.
They are a public-facing, self-issued, skin-in-the-game confidence rating.
They are an effective black-hat deterrent.
The economic incentives around BBs make them an expensive choice for projects, but an easy one given the alternative outcomes.
A successful bounty program needs to address these points:
Scope - Any contract that can cause material damage to the protocol or users needs to be covered. Furthermore, the scope must be up-to-date at all times and also cover any upgrades or changes scheduled for the near future.
White-hat relations - Projects need to show respect to the white-hat ecosystem by rewarding generously and in a timely fashion. Hackers spread the word very quickly about which projects are lowballing and which are trustworthy.
Scaling - Both game theory and empirical evidence show bounties need to scale linearly with the total value at risk. Scaling acknowledges the reality that white-hat and black-hat are a false dichotomy, and in fact, black-hats can be "enlightened" (or vice-versa). The takeaway is: projects need to structure their bounties accordingly, and when necessary, cap the TVL to the maximum bounty they can afford.
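The scaling rule above amounts to simple arithmetic: the bounty grows linearly with TVL until it hits the cap the project can afford, and past that point the prudent move is to cap TVL itself. A hypothetical sketch (the 10% rate is purely illustrative; each program picks its own ratio):

```python
def max_bounty(tvl, rate=0.10, cap=None):
    """Bounty scales linearly with the total value at risk; an optional cap
    reflects the maximum the project can actually afford to pay."""
    bounty = int(tvl * rate)
    return min(bounty, cap) if cap is not None else bounty

def max_safe_tvl(cap, rate=0.10):
    """If the cap binds, this is the TVL beyond which the bounty stops scaling,
    i.e. the level at which the project should consider capping TVL itself."""
    return int(cap / rate)
```

For example, at a 10% rate and a $500k affordable cap, the bounty stops tracking value at risk once TVL exceeds $5M, and beyond that point the economic deterrent weakens with every additional dollar deposited.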
A security audit is equivalent to the statement: "A team/person of this skill level has spent this amount of time on these contracts, and only found the following documented issues". This directly teaches us the limitations of audits:
Time-capped - Additional time could have surfaced other, severe findings.
Skill-capped - A better hacker, or one with specialized knowledge, may uncover additional issues.
Volume-capped - A squad of tens of equally skilled black-hat hackers stands a better chance of finding issues (if we accept that the auditing process is not a deterministic state machine).
Snapshot-based - Any changes to the codebase post-audit quickly diminish confidence without additional auditing.
Having said that, auditing is still the single best process projects can undergo to secure their code.
It gives projects access to highly specialized hunters, who are capable of quickly learning a codebase and don't carry the built-in bias developers have toward their own code.
As with BBs, it deters black-hats, who prefer to feast on unaudited code of similar TVL. More reputable audit teams -> better deterrent.
It can be scaled to increase confidence by iterating on multiple audit teams and/or decentralized audit contests.
Testing covers everything from the most basic sanity tests to complex integration tests. Ideally, every distinct state in the program should be tested properly, i.e. verified to transition to all other states correctly. As contracts grow more complex, it is easy for developers to overlook possible state-transition flows. Extensive, well-written fuzz tests will greatly help in uncovering those.
A good test suite should eliminate all but the sneakiest bugs, which occur usually through a combination of several external factors that could not be simulated accurately. Hopefully, those will be caught during auditing.
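The state-transition idea can be illustrated with a toy invariant-checking fuzzer: drive a simplified model of a contract with random actions and assert an accounting invariant after every step. Everything here (the `Vault` model, the invariant) is invented for illustration, not real contract code:

```python
import random

class Vault:
    """Toy contract model: per-user balances plus a running total."""

    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total += amount

    def withdraw(self, user, amount):
        bal = self.balances.get(user, 0)
        if amount > bal:
            raise ValueError("insufficient balance")
        self.balances[user] = bal - amount
        self.total -= amount

def fuzz(rounds=10_000, seed=42):
    """Drive the model with random state transitions, checking the
    accounting invariant after every single step."""
    rng = random.Random(seed)  # seeded so failures are reproducible
    vault = Vault()
    for _ in range(rounds):
        user, amount = rng.choice("abc"), rng.randint(0, 100)
        if rng.random() < 0.5:
            vault.deposit(user, amount)
        else:
            try:
                vault.withdraw(user, amount)
            except ValueError:
                pass  # rejecting an over-withdrawal is correct behavior
        # invariant: per-user accounting must always match the running total
        assert vault.total == sum(vault.balances.values())
    return vault
```

In practice this is what dedicated fuzzing frameworks automate: random sequences of transitions plus invariants that must hold in every reachable state, which is exactly how the overlooked flows get flushed out.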
Development is an integral part of a team's security posture. The repo branch which goes to production should be guarded by a strict PR review policy (at least two reviewers, 100% test coverage, etc.). In parallel to code reviews, developers need to be continuously educated about programming pitfalls, hacking techniques, and the latest exploits. All these will drastically reduce the risk of injecting flaws into the codebase.
Interestingly, many high-profile hacks could have been prevented before development ever began. The greatest catalyst of bugs is complexity, so one of the primary goals when designing a system architecture is keeping complexity to the absolute minimum. This means cutting down on features with diminishing returns (when weighed against the potential threats they introduce). Moving any non-essential logic to the front end is also recommended. Many security firms provide consultation from the architecture design stage and will assist in balancing business needs against security.
Each new line of defense introduced to the security posture is, in essence, an exchange of costs known ahead of time in favor of an order-of-magnitude reduction in the odds of a catastrophic event. It is effectively reducing the volatility of an already ultra-volatile ecosystem.
Last word of advice: the most important trait of a dev team is humility. Rekt is full of teams who thought they were smart contract experts and that their code was airtight. Sure, some projects have done almost everything right and still ended up on the Rekt leaderboards, but such is the way of black swans. At the end of the day, multi-layered defense is our formation of choice. Is it yours?