Dedaub is excited to participate in ETHDenver 2024. During the conference, Dedaub will showcase its advanced security technology solutions. Its team members will discuss the safety of Web3 applications, build partnerships, and share insights to enhance security standards within the Web3 ecosystem.
Visit Dedaub at Booth #251 in Devtopia at ETHDenver 2024!
Dedaub’s booth, #251, is in the vibrant Devtopia space. We invite technology enthusiasts to visit and attend one of the Suite demos, where we’ll explore the cutting-edge capabilities of our static analysis, formal verification, and Monitoring and Alerting services.
In the demo, you will have the opportunity to learn about our tools that utilize formal analysis and statistical learning to examine possible states and paths of Smart Contracts, efficiently identifying vulnerabilities. Additionally, you will see how our customizable agents can provide essential insights into on-chain activities. Check out the Demo calendar on our Dedaub booth playbook.
Moreover, it is an excellent opportunity to interact with our team and discover how we can safeguard your Web3 projects.
Spotlight | Dedaub Talk
One of the main events during Dedaub’s participation at ETHDenver 2024 will be a talk by co-founder Yannis Smaragdakis, a respected authority on blockchain security. The presentation is scheduled for February 29, 2024, at 4:25 PM: “All Your Contract Are Belong to Us: Analyzing All Deployed SCs”
Every time there is a need to analyze a large number of Smart Contracts, Dedaub is the default partner: in war rooms, Ethereum Foundation impact studies, and responses to widespread bugs.
Dedaub has built on its leading EVM decompiler to produce technology for querying all EVM smart contracts ever deployed. The talk will go over cool recent cases:
Ecosystem-level threats: use in major “war rooms,” e.g., ThirdWeb vulnerability.
About @EthereumDenver 2024
ETHDenver 2024, known as the Year of the SporkWhale, will occur in Denver from February 23 to March 3, 2024. It aims to turn the city into a hub for blockchain innovation. ETHDenver is a community-owned innovation festival powered by SporkDAO that offers a variety of activities, including workshops, technical presentations, bootcamps, and networking parties. Learn more.
As a founding collaborator of the Security Alliance (SEAL), Dedaub celebrates SEAL’s public debut, a significant milestone in crypto security. The alliance consists of more than 50 Web3 and cybersecurity organizations. Its goal is to strengthen the security of the cryptocurrency ecosystem. Before its public debut, SEAL connected users, developers, and experts and offered free Web3 simulation exercises.
SEAL’s dedication to setting high-security benchmarks within the crypto ecosystem aligns with our core capabilities. Dedaub is bringing to the table world-leading technologies and expertise in static and dynamic program analysis, reverse engineering, and ethical hacking. In the context of SEAL, we can contribute to developing more robust defense mechanisms against threats and ensure the blockchain ecosystem’s safety.
Dedaub supports the Whitehat Safe Harbor initiative and SEAL’s proactive approach. This empowers ethical hackers to use cutting-edge tools like MEV bots to monitor and safeguard projects easily. The goal is to respond to challenges and incidents like the Nomad bridge hack.
Dedaub is proud to be part of SEAL, driving towards a more secure decentralized future.
SEAL’s Public Debut | The security culture
By its very nature, the crypto market fosters a rigorous security culture. Its foundation on blockchain technology—a bastion of decentralized security—demands constant vigilance and innovation from its participants. It encourages the development of sophisticated security measures designed to protect against a wide range of threats.
Crypto security constantly changes and adapts to meet the challenges of advanced threats. Its strength relies on its community’s knowledge and expertise, including developers, researchers, and users, who work together to protect the infrastructure. Their collective efforts safeguard the system, embodying the core values that make Web3 a unique, resilient, and ever-growing reality.
SEAL’s Public Debut | The security researchers’ playground
Crypto offers an exciting platform for security researchers, including those from web2 backgrounds, due to its complex challenges, high stakes, and the immediate impact of their work. This field merges theoretical knowledge with practical application, creating a rich environment for problem-solving.
Collaborating with SEAL through initiatives like SEAL Drills allows researchers to contribute while expanding their skill set significantly. These drills offer hands-on experience in real-world scenarios, enhancing their technical skills and understanding of blockchain intricacies. SEAL Drills prepare them to face formidable challenges and foster a collaborative learning atmosphere with seasoned experts, making them an ideal space for deploying and honing their security skills.
The collective and hands-on approach is crucial, especially when considering the advanced tools at our disposal, such as MEV bots, and the legal complexities surrounding their use.
SEAL’s Public Debut | The Impact of MEV Bots under the Safe Harbor Agreement
The Whitehat Safe Harbor Agreement that SEAL promotes provides a legal framework for ethical hackers to conduct emergency rescues, primarily using MEV bots. This allows the community to monitor suspicious activities and take protective action (when a protocol is under attack) without facing legal consequences.
The open and decentralized nature of cryptocurrency, which includes public code and lack of gatekeepers, makes it susceptible to hacking attempts. Therefore, it is important that security researchers are incentivized to protect it as much as attackers are motivated to steal funds.
In the past, many developers and security researchers were discouraged from assisting due to legal ambiguity with their employers. SEAL is promoting this initiative after community members observed that more people would help if a legal framework existed.
Dedaub is committed to SEAL’s mission to protect decentralization and urges the community to join the cause.
About Security Alliance (SEAL)
Security Alliance (SEAL), established with the support of blockchain innovators, has quickly become a cornerstone in the advancement of Web3 security. This alliance represents a collaborative effort among premier experts, from audit firms to ethical hackers. It is dedicated to pushing the security boundaries in the Web3 space. As one of its founding members, Dedaub has been at the forefront of this initiative, driven by a mutual commitment to bolster Web3 security.
SEAL operates as a US 501(c)(3) nonprofit organization with the mission to protect the decentralized internet. Bringing together a diverse group of security experts—including auditors, bug bounty hunters, foundation security leaders, security researchers, and ethical hackers—marks a significant step in social coordination across different web3/crypto ecosystem sectors.
The alliance innovates with several key initiatives in the crypto ecosystem’s security framework. SEAL911 and SEAL Drills, for instance, are designed to provide immediate assistance and training against security threats, showcasing SEAL’s proactive approach to community support.
Additionally, the Safe Harbor Agreement for Whitehats, spearheaded by SEAL, emphasizes the alliance’s forward-thinking strategy to prepare for and mitigate future security threats. This agreement lays down a legal framework enabling ethical hackers to contribute to the crypto ecosystem’s security without fearing legal repercussions.
We invite the community to engage and provide feedback on the Whitehat Safe Harbor Agreement proposal hosted on Github. We welcome your insights until Pi Day, March 14, 2024.
Most Dapp developers have heard of and probably use the excellent Multicall contract to bundle their eth_calls and reduce latency for bulk ETL in their applications (we do too, we even have a python library for it: Manifold).
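For readers unfamiliar with the pattern, here is a minimal sketch of eth_call bundling through a Multicall-style aggregate function. This is not Manifold itself; the Multicall3 address, the web3.py v6 API, and the token addresses are assumptions for illustration.

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # assumed node endpoint

# Commonly cited Multicall3 deployment address (an assumption for illustration).
MULTICALL3 = Web3.to_checksum_address("0xca11bde05977b3631167028862be2a173976ca11")
AGGREGATE_ABI = [{
    "name": "aggregate", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "calls", "type": "tuple[]",
                "components": [{"name": "target", "type": "address"},
                               {"name": "callData", "type": "bytes"}]}],
    "outputs": [{"name": "blockNumber", "type": "uint256"},
                {"name": "returnData", "type": "bytes[]"}],
}]
multicall = w3.eth.contract(address=MULTICALL3, abi=AGGREGATE_ABI)

# Example: fetch totalSupply() (selector 0x18160ddd) from two mainnet tokens in
# a single eth_call instead of one request per token.
tokens = ["0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",   # WETH
          "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"]   # USDC
calls = [(Web3.to_checksum_address(t), bytes.fromhex("18160ddd")) for t in tokens]
block_number, return_data = multicall.functions.aggregate(calls).call()
supplies = [int.from_bytes(r, "big") for r in return_data]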
Unfortunately, we cannot use this same trick when getting storage slots, as we discovered when developing our storage explorer, forcing developers to issue an eth_getStorageAt for each slot they want to query. Luckily, Geth has a trick up its sleeve, the “State Override Set”, which, with a little ingenuity, we can leverage to get bulk storage extraction.
Bulk Storage Extraction | Geth Trickery
The “state-override set” parameter of Geth’s eth_call implementation is a powerful but not very well-known feature. (The feature is also present in other Geth-based nodes, which form the base infrastructure for most EVM chains!) This feature enables transaction simulation over a modified blockchain state without any need for a local fork or other machinery!
Using this, we can change the balance or nonce for any address, as well as set the storage or the code for any contract. The latter modification is the important one here, as it allows us to replace the code at the address whose storage we want to query with our own contract that implements arbitrary storage lookups.
Here is the detailed structure of state-override set entries:
FIELD | TYPE | BYTES | OPTIONAL | DESCRIPTION
------|------|-------|----------|------------
balance | Quantity | <32 | Yes | Fake balance to set for the account before executing the call.
nonce | Quantity | <8 | Yes | Fake nonce to set for the account before executing the call.
code | Binary | any | Yes | Fake EVM bytecode to inject into the account before executing the call.
state | Object | any | Yes | Fake key-value mapping to override all slots in the account storage before executing the call.
stateDiff | Object | any | Yes | Fake key-value mapping to override individual slots in the account storage before executing the call.
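To make this concrete, here is a minimal sketch of an eth_call that passes a state-override set as its third parameter. The node URL is an assumption, and the toy bytecode below simply returns the storage slot named by the first calldata word; it is not the actual extractor bytecode from our repo.

import requests

RPC_URL = "http://localhost:8545"  # assumed Geth (or Geth-based) node
TARGET = "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2"  # e.g. WETH; any contract works
# Toy runtime code: PUSH1 0, CALLDATALOAD, SLOAD, PUSH1 0, MSTORE, PUSH1 32, PUSH1 0, RETURN
OVERRIDE_CODE = "0x6000355460005260206000f3"

slot = 0
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_call",
    "params": [
        {"to": TARGET, "data": "0x" + slot.to_bytes(32, "big").hex()},
        "latest",
        # State-override set: replace the code at TARGET for this call only.
        {TARGET: {"code": OVERRIDE_CODE}},
    ],
}
resp = requests.post(RPC_URL, json=payload).json()
print(resp["result"])  # 32-byte hex value of storage slot 0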
Bulk Storage Extraction | Contract Optimizoor
The following handwritten smart contract has been optimized to maximize the number of storage slots we can read in a given transaction. Before diving into the results, I’d like to take a brief aside to walk through this contract, as it’s a good example of an optimized single-use contract with some clever (or at least we think so) shortcuts.
To better understand what’s going on, we can take a look at the high-level code (this was actually generated by our decompiler):
function function_selector() public payable {
    v0 = v1 = 0;
    while (msg.data.length != v0) {
        MEM[v0] = STORAGE[msg.data[v0]];
        v0 += 32;
    }
    return MEM[0: msg.data.length];
}
Walking through the code we can see that we loop through the calldata, reading each word, looking up the corresponding storage location, and writing the result into memory.
The main optimizations are:
removing the need for a dispatch function
re-using the loop counter to track the memory position for writing results
removing ABI encoding by assuming that the input is a contiguous array of words (32-byte elements) and using the calldata length to calculate the number of elements
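As a small illustration of that layout, the calldata can be packed and the result decoded with a few lines of Python (a sketch; the helper names are ours):

# Sketch: pack slot keys as a contiguous array of 32-byte words (no ABI
# encoding, no selector) and decode the returned words in the same order.
def pack_slots(slots):
    """slots: iterable of ints (storage keys) -> calldata hex string."""
    return "0x" + b"".join(s.to_bytes(32, "big") for s in slots).hex()

def unpack_result(result_hex):
    """result_hex: hex string returned by eth_call -> list of 32-byte values."""
    raw = bytes.fromhex(result_hex[2:])
    return [raw[i:i + 32] for i in range(0, len(raw), 32)]

calldata = pack_slots([0, 1, 2])           # read slots 0, 1 and 2
# values = unpack_result(eth_call_result)  # one 32-byte word per requested slot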
If you think you can write a shorter or more optimized bytecode please submit a PR to storage-extractor and @ us on twitter.
Bulk Storage Extraction | Results
THEORETICAL RESULTS
To calculate the maximum number of storage slots we can extract, we need three equations: the execution cost (calculated as the constant cost plus the cost per iteration), the memory expansion cost $$(3x+(x^2/512))$$ (where $$x$$ is the number of 32-byte words), and the calldata cost.
We can break down the cost of the execution as follows:
The start, the range check and the exit will always run at least once
Each storage location will result in 1 range check and 1 lookup
Calculating the calldata cost is slightly more complex as it’s variably priced: zero calldata bytes are priced at 4 gas per byte and non-zero bytes at 16 gas per byte. Therefore, we need to calculate a placeholder for the average price of a word (32 bytes).
zero_byte_gas = 4
non_zero_byte_gas = 16
# We calculate this from the probability that every bit of a byte is 0
prob_rand_byte_is_zero = (0.5**8) # 0.00390625
prob_rand_byte_non_zero = 1 - prob_rand_byte_is_zero # 0.99609375
avg_cost_byte = (non_zero_byte_gas * prob_rand_byte_non_zero) + \
    (zero_byte_gas * prob_rand_byte_is_zero) # (16 * 0.99609375) + (4 * 0.00390625) = 15.953125
Therefore, $$x$$ words of calldata cost, on average, $$15.953125 * 32 * x$$ gas.
We can combine all of these equations and solve for the gas limit to get the maximum number of storage slots that can be read in one call.
Therefore, given a 50 million gas limit (the default eth_call gas cap for Geth), we can read an average of 18514 slots.
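For readers who want to reproduce the arithmetic, here is a back-of-the-envelope sketch. The fixed and per-iteration execution costs below are assumptions (dominated by a cold SLOAD), so the exact count will differ slightly from the figure above.

# Back-of-the-envelope solver: find the largest x (number of slots) such that
#   exec_cost(x) + memory_expansion(x) + calldata_cost(x) <= gas_limit.
# PER_ITERATION and FIXED are assumptions (a cold SLOAD is 2100 gas, plus a few
# gas for CALLDATALOAD/MSTORE/loop bookkeeping); adjust to taste.
GAS_LIMIT = 50_000_000
AVG_WORD_CALLDATA = 15.953125 * 32     # from the averaging above
PER_ITERATION = 2100 + 20              # assumed: cold SLOAD + loop overhead
FIXED = 100                            # assumed constant start/exit cost

def total_cost(x):
    memory_expansion = 3 * x + x * x / 512
    return FIXED + PER_ITERATION * x + memory_expansion + AVG_WORD_CALLDATA * x

x = 0
while total_cost(x + 1) <= GAS_LIMIT:
    x += 1
print(x)  # roughly 18-19 thousand slots under these assumptions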
This number will change based on the actual storage slots being accessed, with most users being able to access more. This is due to the fact that most storage variables are in the initial slots of the contract, with only mapping and dynamic arrays being pushed to random slots (or people using advanced storage layouts such as those used in Diamond proxies).
PRACTICAL RESULTS
To show the impact of this approach, we wrote a Python script which queries a number of storage slots, first using individual and then batched RPC requests for the standard eth_getStorageAt, and comparing both against the optimized eth_call with the state-override set. All the testing code can be found in the storage-extractor repo, along with the bytecode and results.
To control for variable latency, we ran the tests on the same machine as our node, with latency re-added using asyncio.sleep to create a controlled testing environment. To properly understand the results, let’s look at the best-case scenario of 200 concurrent connections.
To properly represent the three methods, we need to set the y-axis to be logarithmic, since standard parallel `eth_getStorageAt`s are too slow. As you can see, even with 200 connections, standard RPC calls are 57 times slower than RPC batching and 103 times slower than `eth_call` with state-override.
We can take a closer look at the difference between batching and call overrides in the next graph. As you can see, call overrides are faster in all scenarios since they require fewer connections; this is most noticeable in the graph in the top left, which highlights the impact of latency on the overall duration.
Conclusion
To wrap up this Dedaub blog post, I’d like to thank the Geth developers for all the hard work they’ve been doing, and the extra thought they put into their RPC to enable us to do funky stuff like this to maximize the performance of our applications.
If you have a cool use of the state-override set please tweet us, and, if you’d like to collaborate, you can submit a PR on the accompanying github repo (storage-extractor).
At Dedaub, we have solid expertise in Smart Contract security, which allows us to contribute significantly to protecting the Web3 ecosystem, and we have recently achieved another milestone in our mission to establish trust and improve safety in the blockchain industry.
We are thrilled to announce the launch of the Dedaub TX Simulator Snap, a tool to transform how users engage with blockchain transactions.
What is the Dedaub TX Simulator Snap?
The Dedaub TX Simulator Snap is a cutting-edge tool that enables users to simulate transactions, evaluate the reliability and credibility of the accounts involved, and determine the financial consequences of their actions. Leveraging the extensive Smart Contract Database of Dedaub in real time, it provides users with up-to-date and comprehensive insights to make informed decisions.
Grant Permissions: The Snap will request the necessary access permissions during installation.
Frequently Asked Questions (FAQs)
HOW DOES THE DEDAUB TRANSACTION SIMULATOR WORK?
The Dedaub Transaction Simulator interfaces with Dedaub’s Smart Contract database, conducting real-time simulations of transactions that mirror the conditions of the specified network.
WHAT ARE THE KEY BENEFITS OF USING THE DEDAUB TX SIMULATOR?
Cost Efficiency: Save on gas fees by avoiding reverted transactions.
Informed Decision-making: Understand the financial implications of transactions before sending them on-chain.
Detailed Analysis: Get a comprehensive overview of asset transfers, state changes, gas consumption, and more.
DOES THE DEDAUB TX SIMULATOR EXECUTE REAL TRANSACTIONS?
The Dedaub Transaction Simulator does not execute transactions on-chain. Instead, it simulates them based on the network’s current state. During the testing phase, it does not carry out any actual transactions.
WHAT NETWORKS DOES THE SIMULATOR SUPPORT?
The currently supported networks are Ethereum Mainnet, Arbitrum, Optimism, Fantom, Avalanche, and Base.
HOW DO I REACH OUT FOR SUPPORT?
For any support inquiries related to the Dedaub Transaction Simulator, please contact our support team at contact@dedaub.com or through our Discord Support Channel.
About Dedaub
Dedaub has a history of over 200 audits for leading Web3 protocols and successful white-hat hacking endeavors that have safeguarded billions in Total Value Locked (TVL). The Ethereum Foundation trusts our team. We integrate academic research with practical hacker experience to offer unparalleled security services. To learn more about our journey and services, please visit https://dedaub.com.
The Arbitrum network experienced significant downtime on December 15 due to problems with its sequencer and feed. The network was down for almost three hours. The major outage began at 10:29 a.m. ET amid a substantial increase in a type of network traffic called Inscriptions. Arbitrum’s layer-2 network had processed over 22.29 million transactions and had a total value locked of $2.3 billion. Despite the success of the network, the current design suffers from a significant chokepoint when posting transactions to L1, causing the sequencer to stall. While advancements such as Arbitrum Nova and Proto-danksharding might alleviate these design issues, this is not the first time Arbitrum has experienced such issues – a bug in the sequencer also halted the network in June 2023.
Arbitrum Sequencer Outage | Background
Arbitrum is a Layer-2 (L2) solution which settles transactions off the Ethereum mainnet. L2s provide lower gas fees and reduce congestion on the primary blockchain (in this case, Ethereum, the L1). The current incarnation of Arbitrum is called Nitro. Arbitrum Nitro processes transactions in two stages: sequencing, where transactions are ordered and committed to this sequence, and deterministic execution, where each transaction undergoes a state transition function. Nitro combines Ethereum emulation software with extensions for cross-chain functionalities and uses an optimistic rollup protocol based on interactive fraud proofs.

The Sequencer is a key component in the Nitro architecture. Its primary role is to order incoming transactions honestly, typically following a first-come, first-served policy. This is a centralized component operated by Offchain Labs. The Sequencer publishes its transaction order both as a real-time feed and to Ethereum, in the calldata of an “Inbox” smart contract. This publication ensures the final and authoritative transaction ordering. Additionally, a Delayed Inbox mechanism exists for L1 Ethereum contracts to submit transactions and as a backup for direct submission in case of Sequencer failure or censorship.
Arbitrum Sequencer Outage | Root cause
In the two hours prior to the outage, more than 90% of Arbitrum traffic consisted of Ethscriptions. Ethscriptions are digital artifacts on EVM chains created using Ethereum calldata. Unlike traditional NFTs managed by smart contracts, Ethscriptions make the blockchain data itself a unique NFT. They are inspired by Bitcoin inscriptions (Ordinals) but function differently. Creating an Ethscription involves selecting an image, converting it to data URI format, then to hexadecimal format, and finally embedding it into a 0 ETH transaction’s Hex data field. Each Ethscription must be unique; duplicate data submissions are ignored. Owners can use Ethscription IDs for proof or transfer of ownership. In practice, the calldata of Ethscriptions looks like the code below:
Calldata example of an Ethscription. This represents a token mint.
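The original post shows raw calldata at this point. As a rough sketch of the encoding steps described above, a token-mint payload of this kind can be built as follows; the JSON fields mimic the common erc-20 inscription convention and are illustrative, not the exact payload from the incident.

# Rough sketch: build the hex "data" field of a 0 ETH transaction carrying an
# Ethscription. The data URI content below mimics a typical token-mint
# inscription and is illustrative only.
data_uri = 'data:,{"p":"erc-20","op":"mint","tick":"example","id":"1","amt":"1000"}'
calldata = "0x" + data_uri.encode("utf-8").hex()
print(calldata)
# The transaction is then sent with value = 0 and this string as its data field;
# duplicate payloads are ignored by Ethscription indexers.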
Since Ethscriptions are very cheap, one can create a large number of them for relatively little cost. Indeed, a staggering 90% of transactions posted on-chain were Ethscriptions. As a result, for a relatively low cost, the amount of transaction entropy that needed to be committed to L1 increased to 80MB/hr, versus the 3MB/hr that was typical before the traffic spike. We calculated this by looking at average on-chain transaction postings for the sequencer.
Now, consider Arbitrum’s batch-posting architecture. Note that in order to commit transaction sequences to L1, the data poster needs to post the increased amount of data over a larger number of transactions. Prior to the outage, the number of transactions posted per hour was around 10 – 20x higher than the December mean.
However, the code responsible for posting these transactions has a built-in limit on the rate at which L1 batches are posted. At the time of the outage, if there were 10 batches still in the L1 mempool, no more batches would be sent to L1, stalling the sequencer. This limit was raised to 20 batches after the outage. This is probably not a good long-term solution, however, as it increases the chances of batches needing to be reposted due to transaction nonce issues.
// Check that posting a new transaction won't exceed maximum pending
// transactions in mempool.
if cfg.MaxMempoolTransactions > 0 {
    unconfirmedNonce, err := p.client.NonceAt(ctx, p.Sender(), nil)
    if err != nil {
        return fmt.Errorf("getting nonce of a dataposter sender: %w", err)
    }
    if nextNonce >= cfg.MaxMempoolTransactions+unconfirmedNonce {
        return fmt.Errorf(
            "... transaction with nonce: %d will exceed max mempool size ...",
            nextNonce, cfg.MaxMempoolTransactions, unconfirmedNonce,
        )
    }
}
return nil
The batch poster is responsible for posting the sequenced transactions to Ethereum as calldata.
Arbitrum Sequencer Outage | Recommendations
There are several indications that the sequencer, and thus the network, has not been tested enough in a realistic setting or in an adversarial environment. Luckily, the upcoming Proto-Danksharding upgrade to Ethereum should also help reduce L1-induced congestion. Irrespective of this, the Arbitrum engineers can consider the following recommendations:
Whether the Arbitrum gas price of L2 calldata is set too low, compared to other kinds of operations. Gas is an anti-DoS mechanism, which is intimately tied to the L1 characteristics. If this increase in L2 calldata causes a proportionally large increase in batch size, then attackers can craft L2 transactions with large calldatas that result in batches that don’t compress well under Brotli compression, causing a DoS attack on the sequencer. Note that Arbitrum Nova should not suffer as much from this issue as the transaction data is not stored on L1, only a hash is.
Whether there is a tight feedback loop between the size of the L1 batches currently in the mempool and L2 gas price. There is an indirect feedback loop, via the gas price on L1 and backlog sizes, but this may not be too tight. In addition, since the sequencer is centralized anyway, anti-DoS measures might be encoded directly into it to reject transactions. (Note: A more decentralized sequencer is being considered for the future, so this last measure wouldn’t work)
Long-term, the engineers should invest more research into making the rollups more efficient to decrease the sizes of batches committed to L1. This may include ZKP rollups at some point.
Additionally, security audits of the sequencer should consider DoS situations, both through simulation/fuzzing and by having auditors think of hostile situations through adversarial thinking based on their deep knowledge of the involved chains.
Finally, the Arbitrum team made a small change to the way transactions are soft-committed. In this change the feed backlog is populated irrespective of whether the sequencer coordinator is running, which carries its own risks but enables dApps running on Arbitrum to be more responsive during certain periods.
Disclaimer: The Arbitrum sequencer is solely operated by Offchain Labs. Thus, most of the information regarding its operational issues (such as logs) is not publicly available, so it is hard to get a complete picture of the issue. Dedaub has not audited Arbitrum or Offchain Labs software. Dedaub has, however, audited other (non-Arbitrum) software and projects running on Arbitrum such as GMX, Chainlink, Rysk & Stella.
Hello everyone, this is Yannis Bollanos, Security Researcher at Dedaub. A few days ago, we published a tweet about the thestandard.io exploit that took place on November 6th, 2023, which you can find here: https://twitter.com/dedaub/status/1734598398055981471.
The positive response from the X audience indicates a strong interest in the topic. As a result, I have decided to expand it into a blog post that can be used as a reference in the future.
Thestandard.io exploit occurred on November 6th, 2023, and according to Crypto.news, approximately 280K EUROs were at risk. Fortunately, most of the funds have been recovered, so this is a hack story with a happy ending.
After the excitement and tension of the moment subside, it is important to reflect on what happened and how we can prevent similar attacks in the future. It’s a great opportunity to re-emphasize that protocols should use defensive checks/assertions at every point their code interacts with a decentralized exchange (DEX).
The @thestandard.io protocol issues coins to users who open over-collateralized positions, helping the protocol’s assets maintain a stable value by adjusting liquidity provision to actual market rates.
In the @thestandard.io attack scenario, a SmartVault contract oversees the management of each user’s position, taking responsibility for adequately verifying the position’s liquidity. Users can issue coins by calling `mint`:
The SmartVault allows the exchanging of deposited collateral tokens through Uniswap’s V3 router (0xe592427a0aece92de3edee1f18e0157c05861564 on Arbitrum). Here is where things get interesting:
With amountOutMinimum set to 0, the swap operation would succeed no matter the extent of the slippage incurred.
There were no other safeguards in place to ensure a fair exchange for the value provided in the contract.
This enabled the owner of the vault contract to initiate a swap on a pool that might have been maliciously ’tilted,’ allowing for an exchange at an arbitrarily different price from the market price.
There are two ways to profit from this:
(1) Utilize a flash loan and purposely sandwich the swap operation between a tilting and an un-tilting swap on the pool. This is a fairly typical attack pattern commonly used in exploits.
OR
(2) Have the swap operation occur on a pool, the liquidity of which (as well as the execution price) is entirely controlled by the attacker. This can be done only on freshly created pools or in pools with near-0 liquidity.
The attacker chose option (2) since a Uniswap V3 pool for PAXG-WBTC didn’t exist then. Here’s how everything is put together to form the attack:
Attack Transaction:
The attacker creates the Uniswap v3 PAXG-WBTC pool
The attacker flash borrows 10 WBTC (and a tiny extra amount to provide as initial liquidity)
The attacker provides 10 WBTC as collateral and mints as many EUROs as possible
The attacker provides liquidity to the PAXG/WBTC pool. WBTC and PAXG are at a 1:1 ratio within the tick range in which liquidity is minted. This is over-valuing PAXG by a lot.
The attacker swaps the deposited WBTC for PAXG, and the swap operation goes through the attacker-controlled pool. The vault is now under-collateralized, in terms of real value: the PAXG it obtained has much less value than the EUROs issued.
The attacker then burns all of his liquidity on Uniswap, and he notably receives ~9.9 WBTC. At this point, the attacker still holds the originally minted EUROs.
The attacker swaps 10k of his EUROs for USDCs. Some USDCs are then employed to obtain the few remaining WBTCs needed to repay the flash loan.
In the end, the attacker walks away with 280k EUROs and ~8.5k USDC.
Fortunately, the attacker has returned ~240k EUROs back to the protocol:
Smart Contract developers should not solely rely on assumptions about on-chain liquidity/asset prices. The code should consistently enforce these assumptions (within a reasonable deviation).
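As a concrete illustration of such a defensive check, the minimum acceptable output can be derived from a trusted reference price instead of being hard-coded to zero. The sketch below is illustrative only; the reference rate and the 1% tolerance are assumptions.

# Sketch: derive amountOutMinimum from a trusted reference price (e.g. a price
# feed or TWAP) instead of passing 0 and accepting any execution price.
def min_amount_out(amount_in, reference_price, max_slippage_bps=100):
    """reference_price: expected output tokens per input token (trusted source).
    max_slippage_bps: maximum tolerated deviation, here 1% (an assumption)."""
    expected_out = amount_in * reference_price
    return int(expected_out * (10_000 - max_slippage_bps) / 10_000)

# e.g. swapping 10 WBTC for PAXG at an assumed reference rate of ~18.5 PAXG/WBTC:
print(min_amount_out(10, 18.5))  # the swap should revert if it returns less than this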
Transaction simulation tools improve developer and user experience when operating decentralized Web3 applications (Smart Contracts running on programmable blockchains).
These tools can lower the risk and guesswork during development, deployment, and subsequent operation of Web3 applications. And they’re particularly useful in hostile security environments such as public blockchains.
Transaction simulation tools allow developers and users to “dry-run” the execution of transactions on the blockchain without committing the state changes of this transaction to the ledger.
For example, an end user can deposit funds in a yield farming vault and understand what proportion of the vault the deposit would be entitled to.
Another example is the simulation of a decentralized autonomous organization (DAO) proposal to evaluate its integrity and functionality, ensuring it’s not malicious before implementation.
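At the protocol level, the simplest form of such a dry run is a plain eth_call against the latest state (Geth-based nodes also expose debug_traceCall for full traces). A minimal sketch, with placeholder addresses:

# Minimal sketch of a "dry run": simulate a call against the latest state
# without broadcasting a transaction. Addresses and calldata are placeholders.
import requests

RPC_URL = "http://localhost:8545"  # assumed node endpoint

call = {
    "from": "0x0000000000000000000000000000000000000001",  # placeholder sender
    "to": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2",     # e.g. WETH
    "data": "0x18160ddd",                                    # totalSupply()
}
payload = {"jsonrpc": "2.0", "id": 1, "method": "eth_call",
           "params": [call, "latest"]}
print(requests.post(RPC_URL, json=payload).json())
# Dedicated simulation services layer richer output (traces, asset transfers,
# address reputation) on top of this basic capability.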
In this article, we will explore the user experience and security issues that users and developers face when interacting with Web3 applications and how transaction simulation tools can help mitigate them.
By the end of this article, you’ll better understand what transaction simulation tools do, how they work, and how they can improve both user and developer experience.
The Need for Transaction Simulation Solutions in Blockchain
Web3 applications, such as DeFi applications, enable novel financial primitives with many more possibilities for end users. However, the complexity and irreversibility of blockchain transactions have led to unexpected fund losses for many users, often due to poorly designed interfaces in these applications.
Loss of funds is not the only issue for Web3 applications. We often face reverted or out-of-gas transactions, wasting funds, which is especially detrimental to our experience when interacting with Web3 applications.
The impact of these challenges is not limited to regular end-users. Developers and Web3 teams face the complex task of ensuring their contracts perform as intended.
Interacting with a blockchain protocol in a complex manner, for instance, through a multisig account, is a highly daunting task. Typically, it can be accomplished by forking the blockchain, but this is time-consuming.
Real-world scenarios underscore how critical transaction simulation solutions are. For instance, on platforms such as Yearn Finance or Uniswap, where complex financial transactions are constant, the ability to simulate transactions is invaluable.
In these cases, simulations allow users to review the outcomes of Smart Contract transactions in a controlled environment, giving teams time to identify and address potential issues before running them on-chain.
Types of Transaction Simulation Solutions Available
The market offers a variety of transaction simulation solutions, each catering to different needs and preferences.
Browser Extensions are popular for their ease of use, integrating with web browsers to offer simulation capabilities alongside wallet interactions.
In-Wallet Simulations integrate with the wallet software, providing a seamless experience for users to simulate transactions within the wallet interface.
Standalone Tools are comprehensive software solutions. These offer advanced features and greater flexibility for complex simulations. Developers and organizations needing detailed analyses and custom simulation scenarios prefer standalone tools.
Advantages of Using Transaction Simulation Tools
ERROR PREVENTION
Error prevention is a crucial advantage of transaction simulation tools, as they enable developers to simulate transactions in a controlled environment.
This process helps identify and correct errors before executing them on the blockchain, significantly reducing the likelihood of costly mistakes such as failed transactions that consume resources without achieving their intended outcomes.
Consequently, these tools greatly enhance blockchain applications’ overall reliability and efficiency.
EDUCATIONAL VALUE
For newcomers to blockchain development, transaction simulation solutions are invaluable educational resources. They provide a hands-on, risk-free platform for understanding the intricacies of blockchain transactions.
They allow developers to experiment with different scenarios, gaining practical insights into the operation of Smart Contracts. This experiential learning accelerates any developer’s expertise in blockchain technology, empowering them to build more sophisticated and secure dApps.
Choosing the Right Transaction Simulation Solution
Selecting an appropriate transaction simulation solution is crucial for blockchain developers. These tools come in various forms, each suited to different needs and environments.
Factors to Consider:
Network Support: Ensure the tool supports all relevant blockchain networks your project interacts with. For instance, if your Smart Contract runs on Ethereum and Polygon, the chosen transaction simulation solution must accommodate both.
Ease of Integration: Assess how seamlessly the tool integrates into your existing development workflow. A smooth integration minimizes disruptions and maintains development flow.
User Experience: Assess the tool’s user interface and usability. A good simulator should offer clear insights into the transaction process, aiding decision-making and error identification.
Type of Tool: Decide between browser extensions and wallet-based simulators. Browser extensions are generally more flexible and accessible to test across various wallets, whereas wallet-based solutions offer a more integrated experience.
EVALUATION CRITERIA:
Reliability and Support: Investigate the tool’s performance history and the provider’s responsiveness to support queries and updates.
Track Record: Consider the provider’s reputation within the blockchain community. Long-standing, positively reviewed tools often indicate reliability and efficacy.
RECOMMENDATIONS:
Opt for solutions that prioritize security and accuracy in transaction simulation.
Avoid tools that are overly complex or do not offer transparent processes, as these can hinder rather than help your development efforts.
Stay informed about the latest developments in transaction simulation technologies to ensure your choice remains relevant and effective.
Selecting the right tool is crucial. It must meet technical requirements and adhere to the highest security and efficiency standards in the blockchain space.
Dedaub Watchdog Transaction Simulator
The Dedaub Watchdog Transaction Simulator allows users to simulate transactions when interacting with complex Smart Contracts before committing to the main chain.
It allows an understanding of all the various actions that would happen without the risk of losing funds. The Dedaub Watchdog transaction simulation provides three approaches, depending on specific use cases:
Through the Dedaub Simulation API, developers can integrate simulation directly into their applications.
Through the read/write/simulate feature on any Smart Contract page in Watchdog.
When used by an end-user, such as in the latter two approaches, the transaction simulation presents relevant information in convenient formats through the Watchdog UI.
One such format is (i) the trace format, which contains all intermediate Smart Contract functions called, new Smart Contracts created, and events fired.
The other is (ii) the funds-transferred format, which shows the amounts of funds transferred, both for the user and for other participants in the transaction.
(Trace format above)
(funds transferred above)
When used by Web3 users, an important use case is checking the legitimacy and reliability of the accounts and Smart Contracts involved in the transaction. By simulating transactions, users can also gain insight into potential outcomes, allowing them to identify risks proactively.
The Dedaub Watchdog Transaction Simulator leverages the Dedaub Smart Contract database. The database offers detailed, real-time information on all deployed Smart Contracts on-chain, providing deep insights into the workings of Smart Contracts.
Conclusion
In conclusion, transaction simulation tools, particularly those exemplified by the Dedaub Watchdog Transaction Simulator, represent an advancement in Web3 application development and user interaction. They provide an extra layer of security and insight, allowing developers and end-users to identify and rectify potential issues in Smart Contract transactions promptly. These tools prevent costly errors and fund losses and serve as educational resources for those new to blockchain technology. With their ability to simulate complex financial transactions in a controlled environment, transaction simulation solutions enhance the efficiency, reliability, and overall user experience of interacting with Web3 applications.
Web3 Monitoring continuously tracks blockchain activities, such as transactions and smart contract interactions, to identify anomalies, ensure security, and maintain operational transparency across decentralized networks. Web3 Monitoring empowers developers and organizations with real-time insights to safeguard their projects.
Why Blockchain Monitoring is Important
The need for security on the blockchain is ever-increasing, and demand for innovative security solutions has surged in recent years. The complexity of hacks and security breaches leaves no room for error, as the blockchain has proven unforgiving by design in punishing any possible lapses. In the last few years, attacks launched from private transaction pools have increased because attackers can bypass traditional defenses and exploit vulnerabilities without detection, exposing the limits of current security approaches and highlighting the need for more proactive measures. As codebases strengthen to counter these security risks, social engineering presents malicious actors with new ways to defraud people, hence the increased need for monitoring activity on the blockchain.
Web3 monitoring involves:
Analyzing activities over a specific timeframe to deliver security insights regarding potential malicious actors.
Establishing baselines of behavior and identifying anomalies based on user preferences and previous interactions.
Sending real-time wallet and token activity notifications to identify significant transfers and other risk indicators.
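As a toy illustration of the last point (this is not Dedaub’s monitoring product; the node endpoint, token, threshold, and webhook URL are all assumptions), a minimal polling loop that flags large ERC-20 transfers could look like this:

# Toy sketch: poll for large ERC-20 Transfer events and push alerts to a webhook.
import time
import requests
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # assumed node
TOKEN = Web3.to_checksum_address("0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2")  # e.g. WETH
# keccak256("Transfer(address,address,uint256)")
TRANSFER_TOPIC = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"
THRESHOLD = 1_000 * 10**18                    # assumed: 1,000 tokens
WEBHOOK = "https://example.com/alerts"        # assumed endpoint

last_block = w3.eth.block_number
while True:
    head = w3.eth.block_number
    if head > last_block:
        logs = w3.eth.get_logs({"fromBlock": last_block + 1, "toBlock": head,
                                "address": TOKEN, "topics": [TRANSFER_TOPIC]})
        for log in logs:
            amount = int.from_bytes(log["data"], "big")  # web3.py v6 returns raw bytes here
            if amount >= THRESHOLD:
                requests.post(WEBHOOK, json={"tx": log["transactionHash"].hex(),
                                             "amount": amount})
        last_block = head
    time.sleep(12)  # roughly one Ethereum block time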
Dedaub’s customizable blockchain monitoring solution embodies all the qualities of a sound Web3 monitoring system: it detects on-chain activities, supports periodic executions, and lets users create custom alerts, backed by an enhanced PostgreSQL database that gives a consistent view of blockchain data and maintains high efficiency in real-time on-chain monitoring.
Web3 Monitoring as a Post-Audit Best Practice
Relying solely on smart contract audits to protect against hacks and security breaches is now considered outdated. While audits reduce the likelihood of attacks, they do not guarantee a secure system in the long run.
One important reason for this is that audits focus on the codebase itself and may only partially cover security issues arising from dependencies or the underlying blockchain architecture. In the blockchain environment, where threats are dynamic and evolving, new sophisticated attack vectors that evade standard checks can emerge, making a contract previously considered secure vulnerable.
Contrary to the popular belief that hacks occur suddenly, most attacks come with indicative signals that are usually present before the attack. By watching for these potential attack flags and signs with real-time monitoring, we finally have a way to cover security gaps and bolster the results of adequately audited smart contracts. Real-time monitoring of on-chain activity (transactions, multi-sig wallet operations, governance proposals, staking, node infrastructure, and financial risks due to market manipulation) can surface malicious incidents before they happen and could prevent about 98% of all security breaches. Monitoring gives risk insights and provides real-time detection of risks based on blockchain and mempool data, allowing for recovery actions before any compromise.
How Dedaub Enhances Real-time Blockchain Monitoring
Dedaub’s real-time smart contract monitoring reinforces post-audit safeguards by identifying suspicious activities and offering fully customizable multichain protection against threats and unforeseen behaviors across Ethereum and other EVM-compatible chains.
The Dedaub Security Suite allows users to set up monitoring bots and queries to track on-chain activities and trigger custom actions through webhooks for free. It also flags unusual transactions and lets users stay alert to specific on-chain events with seamless cross-chain queries to ensure efficient monitoring.
With the monitoring star rating system, query ratings are now possible, allowing users to share their experiences and contribute to an expanding library of insights that helps new and existing users find the best tools to achieve their goals faster and enhance functionality. The enhanced monitoring editor makes the query writing process quicker and easier to understand. It also offers in-query suggestions, together with an advanced error-reporting system, to identify any issues arising from variables. The ability to join on-chain data with off-chain metadata also gives an essential edge in real-time monitoring.
Using multichain monitoring agents offers a network-agnostic solution that simplifies the process of tracking activity across multiple blockchains simultaneously. With the new cross-chain contract lists, managing data from various blockchain networks can be achieved through a single unified list.
Moreover, the advanced RPC fetch functions allow users to incorporate data directly from external REST APIs into their monitoring queries framework, increasing the capability and accuracy of real-time monitoring.
The latest version of Dedaub’s Security Suite introduces cutting-edge blockchain transaction monitoring features, including cross-chain capabilities, public function-based similarity, and an enhanced monitoring editor. These tools empower developers and organizations to maintain robust oversight with advanced security and efficiency. Learn more.
As most programmers would admit, the most annoying bugs are often the “little” ones. Tiny logic errors caused by a few wrong characters in a single line of code, compiling fine and remaining undetected, patiently waiting to crash our program at the worst possible moment. We’ve all written such bugs, spent countless hours debugging them, and uttered the most horrific profanities when we finally discovered that we lost our sleep over a couple of wrong characters.
But losing a night’s sleep over a little bug isn’t the worst of our worries. At least not if one writes software for NASA, whose Mars Climate Orbiter famously burned up in the Martian atmosphere due to a software bug. Well, NASA software is complex; such a catastrophic bug should clearly be complicated, impossible to understand by mere mortals, right? Far from it: the bug that led to the loss of the $125 million Mars Climate Orbiter was a trivial but crucial missing multiplication by 4.45. Europeans aren’t immune to little bugs either; the loss of ESA’s $370 million Ariane V rocket in just 39 seconds was caused by a simple integer overflow error.
Thankfully, for the longest time, one needed to be employed by a space agency to worry about a little bug having such enormous financial consequences. That is until Smart Contracts arrived! Now, programs consisting of a few hundred lines of relatively “simple” code, developed by small teams over a relatively short period of time, are directly responsible for safeguarding various types of multi-million-dollar assets. All it takes is one undetected little bug and we get, not a spectacular rocket explosion, but an equally spectacular crypto hack that makes the Mars Climate Orbiter seem like pocket change.
So, let’s look at an instructive example of such a little bug. Smart Contracts typically use Solidity modifiers to guard their functions, performing crucial security checks.
modifier isOwner() {
    // Make sure we're called by our trusted owner before doing anything.
    require(msg.sender == owner, "Caller is not owner");
    _;
}
Writing such a check is simple, no need to be a NASA engineer to do it. But better double and triple-check it because the consequences of the tiniest of bugs in that line are enormous.
error CallerNotOwner(); // gas efficient and easy to recognize

modifier isOwner() {
    // I wish this were valid code, but it isn't.
    require(msg.sender == owner, CallerNotOwner());
    _;
}
Not a big deal, you’ll say, require is just a combination of a check and a revert; we can rewrite it and perform the two steps manually.
modifier isOwner() {
    // This works fine
    if (msg.sender != owner)
        revert CallerNotOwner();
    _;
}
Mission accomplished, but you might have noticed a small detail. In the code above msg.sender == owner was replaced by its negation: msg.sender != owner. This is because require expects a condition that should hold, while its if/revert replacement expects a condition that should not. So, in general, we should replace require(condition, ...) with if (!condition) revert ...;
This negation of the Boolean expression is exactly the beginning of our “little bugs” story. Well, how hard is it to simply add a “!”? But that’s not exactly what we did above, is it? No programmer who appreciates code simplicity and elegance writes
if(!(msg.sender == owner))
Everyone would simplify it to
if(msg.sender != owner)
bringing the negation inside the Boolean expression. And what if the negated expression is more complex? Logic, being the foundation of computer science, provides us with simple rules:
!(A && B) is equivalent to (!A || !B)
!(A || B) is equivalent to (!A && !B)
Just carefully follow the rules inside the complex Boolean expression, and you’ll be fine. Easier said than done; I bet every single programmer with a few years of experience has incorrectly negated a Boolean formula at some point in their career.
So, it shouldn’t be surprising that this exact bug appeared in one of our recent audits. The commit in question aimed at replacing a string error with a custom one and, in doing so, had to negate the modifier’s Boolean condition.
Did you spot the negation error? The expression is of the form A || (B && C), so its negation should become !A && (!B || !C); the && in B && C should change to ||, but in the flawed commit it did not.
These two wrong characters (&& instead of ||) completely change the logic of the modifier; now an unauthorized call with updater != address(0) and _msgSender() != address(this) will not trigger the error as it should, which could easily lead to a total loss of funds for this specific contract.
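The negation rules are easy to check mechanically; a quick exhaustive enumeration (in Python, for brevity) confirms the correct form and shows exactly where the buggy form disagrees:

# Exhaustively check the negation of A || (B && C) over all truth assignments.
from itertools import product

for A, B, C in product([False, True], repeat=3):
    original = A or (B and C)
    correct  = (not A) and ((not B) or (not C))   # proper De Morgan negation
    buggy    = (not A) and ((not B) and (not C))  # the two wrong characters
    assert correct == (not original)
    # The buggy form disagrees whenever A is false and exactly one of B, C holds.
    if buggy != (not original):
        print(A, B, C)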
Of course, the point is not that Smart Contracts are impossible to secure: this bug was caught by the audit (the chances of catching it were very high), and even if it weren’t, we are confident that it would still have been found before releasing the code, either by manual inspection or automated tests.
But its mere existence shows that Smart Contracts, as with all programs, are not immune to little bugs. Even the simplest of changes require caution and should be properly tested and audited, both internally and by external teams, to minimize the chances of a catastrophic little bug as much as possible.
Summary: The root cause of the thirdweb critical vulnerability is that independent libraries implementing ERC2771 & Multicall, such as the OpenZeppelin libraries, interact badly when combined. This allows attackers to spoof the _msgSender(), with all sorts of access control implications, including loss of funds.
The issue is complex, but can be explained using a simple analogy. Imagine a bank that will let one of the bank officials carry out a transaction on your behalf, as long as the instruction is written on a piece of paper with your verified signature. This is a very common scenario, for instance with some preferred bank clients. So, you go to the bank official and hand him a signed piece of paper. Your instructions are “take this sealed box to the cashier, open it, and give him what’s inside”. The bank official happily executes your signed instructions, after checking your id against your signature. The sealed box contains another piece of paper reading …”do a withdrawal on behalf of Elon Musk”, signed with a fake signature. The cashier takes this piece of paper from the bank official, thinking that the signature was checked, when, really, the only signature that was checked was on the instructions to deliver and open the box. That’s it!
Now let’s look into the technical mechanics for how this vulnerability works, and how to protect your project from this issue.
The Critical Thirdweb Vulnerability | Background
First, we need to cover some preliminaries. In particular we need to first understand the implementation of the ERC2771 standard and the OpenZeppelin Multicall library. ERC2771 gives the ability to have a “virtual” msg.sender, i.e., caller of a public function of a smart contract.
ERC2771 defines a contract-level protocol for Recipient contracts to accept meta-transactions through trusted Forwarder contracts. No protocol changes are made. Recipient contracts are sent the effective msg.sender (referred to as _msgSender()) and msg.data (referred to as _msgData()) by appending additional calldata.
ETHEREUM ERC-2771
Therefore, this virtual msg.sender, called _msgSender() is set by a trusted external party, the forwarder. And how does the forwarder tell the contract what is the virtual msg.sender? It appends an extra parameter to all calls. This means that all functions of a contract that supports such virtual msg.senders need to take in an extra parameter which they interpret as the msg.sender. The other side of the vulnerability is Multicall. It is a way to have a single call that becomes many calls (to the same contract) in sequence. How does this happen? By making all the info of the “many calls” be parameters of the “outer” single call.
The Critical Thirdweb Vulnerability | Root cause?
The problem with these two libraries is that the forwarders (in ERC2771) were not designed to work with multicall. They add a single _msgSender() parameter to the outer call of a multicall. But remember: all functions now expect this parameter! Where can they get it from? The parameters of the *outer* multicall.
So, if an attacker uses multicall to call, say, 3 functions in sequence, the attacker can define all the parameters to these function calls, including the _msgSender()! This means that the attacker can make a call appear to be coming from anyone!
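To make the mechanics concrete, here is a small, purely illustrative sketch (not any specific library’s code) of how an ERC-2771 recipient derives _msgSender() from the trailing calldata bytes, and why an attacker-controlled inner multicall payload can smuggle in an arbitrary address:

# Illustrative sketch of the calldata mechanics (not any library's actual code).
def msg_sender(calldata: bytes, evm_caller: str, trusted_forwarder: str) -> str:
    """ERC-2771 recipients treat the last 20 bytes of calldata as the effective
    sender, but only when the direct caller is the trusted forwarder."""
    if evm_caller.lower() == trusted_forwarder.lower():
        return "0x" + calldata[-20:].hex()
    return evm_caller

FORWARDER = "0x" + "aa" * 20
VICTIM = "0x" + "bb" * 20  # a privileged account the attacker wants to impersonate

# The attacker crafts an *inner* multicall payload: a function selector and
# arguments, followed by the victim's address as the trailing 20 bytes.
inner_call = bytes.fromhex("deadbeef") + b"\x00" * 32 + bytes.fromhex(VICTIM[2:])

# Multicall delegatecalls each inner payload, so msg.sender is still the trusted
# forwarder while the "appended sender" bytes now come from the attacker.
print(msg_sender(inner_call, evm_caller=FORWARDER, trusted_forwarder=FORWARDER))
# -> 0xbbbb...bbbb (the spoofed "sender")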
The Critical Thirdweb Vulnerability | Evaluating the impact
We have tried to reach out to most large projects that might have been affected (in collaboration with thirdweb and OpenZeppelin) over the last few days. However, if you are worried about this issue affecting your contract, we have flagged any affected contract on Watchdog and made this information available to the public. The extent to which your contract is affected depends on the actual implementation of the contract.

First, evaluate functions with access to _msgSender() (transitively). Do these functions check access control mechanisms using _msgSender()? For example, can someone withdraw or burn coins for the _msgSender()? In that case, the issue affects your contract critically. In many of these contracts there may be onlyOwner or onlyRole modifiers that make use of _msgSender(). In addition, look for common transfer functions such as safeTransferFrom() or transfer(). The effect is also modulated by the value of the assets held by the contract, or by whether this contract represents an asset. Make sure to find out if your contract is a token in a Uniswap-like liquidity pool. It is possible that all the liquidity in this pool could be stolen due to this issue.
The Critical Thirdweb Vulnerability | Mitigation
The rest of the article outlines mitigation. Thirdweb has developed and deployed a mitigation tool that can possibly assist you. A large number of affected contracts were deployed by their product. However, oftentimes you’d need to take additional actions. Should you require assistance, the team at Dedaub can at least point you to the right information. You may contact us here. In the rest of the article we list some mitigation options we’ve observed over the last few days to be successful. Legal disclaimer: This should not be construed to be professional advice by our team.
PREFERRED MITIGATION: DISABLE TRUSTED FORWARDER
Some ERC2771 library implementations allow resetting the trusted forwarder. Doing so will prevent any gasless transaction from being executed through the forwarder, solving the issue (albeit at the cost of missing functionality); if you can do this, your smart contract is probably safe from this issue. Unfortunately, there are many instances where resetting the trusted forwarder is not possible, so the rest of the mitigation steps apply.
ADVANCED MITIGATION METHODS
These mitigation methods may take time and expertise to successfully execute. If time is critical, you can consider decreasing the blast radius in the next section in case an attacker hacks the contract while you are in the process of planning a mitigation.
If your contract is Upgradeable, prepare an upgrade. Removing multicall (and all functionality that delegatecalls to the same smart contract) prevents the attack. Removing ERC-2771 functionality also prevents the attack. Other ways to prevent this attack involve adding a module that allows doctoring the contract’s storage and removing the trusted forwarders that way; this latter option is difficult to execute correctly.
DECREASING THE BLAST RADIUS
Some steps can be taken, in case you do not manage to mitigate the issue in a timely fashion, to limit the amount of assets that can be stolen from your contract. This can be done in several ways:
Ask your users to remove approvals from your contract. You can additionally check which users have approved your smart contract to transfer funds by checking on app.dedaub.com, navigating to your smart contract, navigating to balances and then allowers. Note that publicly announcing removal of approvals can work both in favor and against you since malicious hackers could be tipped off.
Pausing your contract may stop users from continuing to use it but, depending on the implementation, it might not prevent the attack.
Remove liquidity from Uniswap-like pools in case the token is held by a pool, otherwise the liquidity in this pool may be drained in some cases.
Conclusion
The thirdweb vulnerability is an unfortunate issue that came about due to the composability of libraries in a single smart contract, through inheritance mechanisms. Unfortunately, although libraries are supposed to be abstractions, when it comes to security, abstractions can easily be broken and implementations can affect each other in unforeseen ways. This was the case even though the overwhelming majority of affected libraries were developed by the same organization. In their defence, however, it is very hard to make libraries interoperable, and even harder to make them upgradable. Our audit team at Dedaub regularly finds issues in smart contracts that employ “safe” 3rd party libraries. Our decompiler and contract analysis tools really help in such cases as they work on the actual deployed code of a smart contract. We regularly find issues related to upgradability, but other issues may be lurking.
We would like to commend the work of countless other security engineers who have helped reach out to affected projects!