Nomad Bridge Hack: $190M Lost to a Copy-Paste Replay Attack
Nomad lost $190M because a routine upgrade marked the zero hash as a valid Merkle root. Suddenly every message verified. The exploit needed no skill — users literally swapped the recipient address and hit send.
The Bug in One Sentence
Nomad's Replica contract treated the zero hash as a confirmed Merkle root, so any message — including completely fabricated ones — passed proof verification.
That's it. No reentrancy, no flash loan, no clever math. A single misconfigured storage slot turned a cross-chain bridge into an open ATM.
How Nomad's Message Verification Was Supposed to Work
Nomad is an optimistic bridge. Messages are committed to a Merkle tree on the origin chain, the root is relayed to the destination, and after a fraud window the root is marked `confirmed`. To withdraw on the destination chain, you submit:
1. The message body
2. A Merkle proof
3. The index in the tree
The Replica contract hashes your proof against the message leaf, walks up the tree, and checks whether the resulting root exists in `confirmedRoots`. If yes, the message executes.
Simple. Battle-tested pattern. Used by dozens of bridges.
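The proof check itself is ordinary Merkle branch verification. A minimal Python sketch of the idea (function names and the use of SHA-256 in place of keccak256 are my simplifications, not Nomad's identifiers):

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for keccak256 so the sketch runs on the stdlib alone.
    return hashlib.sha256(data).digest()

def branch_root(leaf: bytes, proof: list[bytes], index: int) -> bytes:
    # Walk up the tree: each proof element is the sibling at that level,
    # and the index's low bit says whether we hash on the right or left.
    node = leaf
    for sibling in proof:
        node = h(sibling + node) if index & 1 else h(node + sibling)
        index >>= 1
    return node

confirmed_roots: set[bytes] = set()

def process(message: bytes, proof: list[bytes], index: int) -> bool:
    # Execute only if the proof lands on a confirmed root.
    return branch_root(h(message), proof, index) in confirmed_roots
```

A valid (message, proof, index) triple hashes up to a confirmed root; anything else lands on a root that was never confirmed and is rejected.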
The Vulnerable Code
Here's the simplified version of what shipped after the April 2022 upgrade:
```solidity
function process(bytes memory _message) public returns (bool) {
    bytes32 _messageHash = keccak256(_message);
    // acceptableRoot returns true if the root has been confirmed
    require(acceptableRoot(messages[_messageHash]), "!proven");
    // ... execute message
}

function acceptableRoot(bytes32 _root) public view returns (bool) {
    uint256 _time = confirmAt[_root];
    if (_time == 0) {
        return false;
    }
    return block.timestamp >= _time;
}
```
Look closely. The check relies on `confirmAt[_root]` being nonzero. During the upgrade, an initialization routine ran:
```solidity
// The initializer's fatal line. During the upgrade,
// _committedRoot was passed as bytes32(0):
confirmAt[_committedRoot] = 1;
```
The team intended to mark a specific committed root as trusted. Instead, the zero hash got blessed.
Now consider what happens when a user submits a message that was never proven. The mapping `messages[_messageHash]` returns its default value, `bytes32(0)`. The check becomes `require(acceptableRoot(bytes32(0)))`, and since `confirmAt[bytes32(0)]` is now 1, `acceptableRoot` returns `true`.
Every unproven message verifies.
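The failure mode is easy to reproduce outside the EVM. The sketch below models the two mappings as Python dictionaries with Solidity's zero default (`dict.get`); the names mirror the simplified contract above but are illustrative, not Nomad's actual code:

```python
import hashlib

ZERO = b"\x00" * 32

# Solidity mappings return zero for any unset key; dict.get(key, default)
# models that behavior.
confirm_at: dict[bytes, int] = {}
messages: dict[bytes, bytes] = {}

def acceptable_root(root: bytes, now: int = 1_659_000_000) -> bool:
    # Mirrors acceptableRoot: reject if never confirmed, else check time.
    t = confirm_at.get(root, 0)
    return t != 0 and now >= t

def process(message: bytes) -> bool:
    msg_hash = hashlib.sha256(message).digest()  # stand-in for keccak256
    # A forged message was never proven, so its stored root is zero.
    return acceptable_root(messages.get(msg_hash, ZERO))

# The fatal initialization step: bless the zero hash.
confirm_at[ZERO] = 1
```

With that single write in place, `process` accepts arbitrary bytes that were never committed to any Merkle tree; remove it and the same call fails.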
The Exploit
The first attacker drained a chunk of WBTC. Their transaction was visible on Etherscan. Other observers noticed something unusual: the calldata referenced no real Merkle proof, and the recipient was the attacker's address.
So people did the obvious thing. They copied the transaction, opened MetaMask, replaced the 20 bytes for the recipient, and clicked Send. It worked.
Within hours, more than 300 distinct addresses had pulled funds. Total drained: ~$190M. Roughly $36M was later returned by whitehats. The rest is gone.
It is arguably the only major bridge hack where exploiting the vector took less skill than minting an NFT.
The Fix
Never treat default mapping values as valid state. Two layers of defense:
```solidity
function process(bytes memory _message) public returns (bool) {
    bytes32 _messageHash = keccak256(_message);
    bytes32 _root = messages[_messageHash];
    // Defense 1: explicitly reject zero
    require(_root != bytes32(0), "!proven");
    // Defense 2: never allow zero in the confirmed set
    require(acceptableRoot(_root), "!confirmed");
    // ... execute message
}

function setConfirmation(bytes32 _root, uint256 _time) internal {
    require(_root != bytes32(0), "zero root");
    require(_time != 0, "zero time");
    confirmAt[_root] = _time;
}
```
And in the initializer:
```solidity
function initialize(bytes32 _committedRoot) public initializer {
    require(_committedRoot != bytes32(0), "empty root");
    confirmAt[_committedRoot] = 1;
}
```
Four require statements. That's the difference between a working bridge and a $190M smoking crater.
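The same two guards can be exercised in a few lines of Python that model the mappings as dictionaries (an illustrative sketch under my own naming, not Nomad's code): even if the write-side guard were bypassed and zero landed in the confirmed set again, the read-side guard still rejects unproven messages.

```python
import hashlib

ZERO = b"\x00" * 32
confirm_at: dict[bytes, int] = {}
messages: dict[bytes, bytes] = {}

def set_confirmation(root: bytes, time: int) -> None:
    # Write-side guard: zero never enters the confirmed set.
    assert root != ZERO, "zero root"
    assert time != 0, "zero time"
    confirm_at[root] = time

def process(message: bytes, now: int = 1_659_000_000) -> bool:
    msg_hash = hashlib.sha256(message).digest()  # stand-in for keccak256
    root = messages.get(msg_hash, ZERO)
    if root == ZERO:  # read-side guard: unproven messages are rejected
        return False
    t = confirm_at.get(root, 0)
    return t != 0 and now >= t
```

The belt-and-suspenders design matters because each guard protects against a different failure: the write guard against a bad upgrade, the read guard against any future path that leaves a message's root unset.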
Why Audits Missed It
Nomad had been audited. The bug wasn't in the original contract logic — it was introduced by an upgrade that changed a single constant. The auditors had reviewed the storage layout. They had reviewed the verification function. They had not reviewed the initialization script paired with the new default-zero behavior of `messages[hash]`.
This is a recurring theme. Upgrade scripts and constructors get less scrutiny than core logic, but they write the storage that the core logic trusts. If you're shipping a proxy upgrade, the initializer deserves the same paranoia as a withdraw function. Run your contracts through our [free AI audit](https://www.cryptohawking.com/audit) before mainnet — it specifically flags zero-value sentinels in mappings and uninitialized roots.
Lessons for Bridge Developers
**1. Zero is not a sentinel.** Any time a mapping returns `bytes32(0)` by default, you must explicitly reject it before using it in a security decision. The EVM hands you zero for free; treat it like radioactive material.
**2. Replay protection must be explicit.** Nomad's Replica also lacked a per-message executed flag. A `processed[messageHash]` mapping would not have fixed the root bug — copycats who changed the recipient produced new message hashes — but it would have limited each distinct forged message to a single execution, and it is table stakes for any message-passing contract.
**3. Upgrade scripts are part of the threat surface.** When you change defaults, regenerate your invariants. Did the new code assume any value in storage that the old code didn't write? Walk every path from initialization through verification.
**4. Mempool exposure is now a permissionless attack vector.** Once the first transaction landed, the bug was self-documenting. Any complex bug that produces a copyable transaction will be copied within minutes. Build under the assumption that your first exploiter is your loudest one.
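The executed flag from lesson 2 is a one-mapping change. A hedged Python sketch of the pattern (names are mine; `verify` stands in for whatever proof check the bridge performs):

```python
import hashlib

processed: dict[bytes, bool] = {}

def process_once(message: bytes, verify) -> bool:
    # Execute each distinct message at most once.
    msg_hash = hashlib.sha256(message).digest()  # stand-in for keccak256
    if processed.get(msg_hash, False):
        return False  # already executed: replay rejected
    if not verify(message):
        return False  # failed proof check
    processed[msg_hash] = True  # set the flag before executing
    # ... execute message ...
    return True
```

Setting the flag before execution (the checks-effects-interactions ordering) also closes the reentrancy variant of the same replay.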
For protocols handling more than a few million in TVL, AI tooling is the floor, not the ceiling. Cross-chain messaging logic in particular benefits from a [manual audit](https://www.cryptohawking.com/audit/manual) where someone walks the full state machine — initialization, upgrade, normal operation, paused operation — and asks what a zero return value means at every step.
The Postmortem That Wasn't
Nomad's official postmortem was thin. The team's recovery plan involved relaunching with a whitehat refund process and partial restitution. The brand never fully recovered. The protocol exists today, but TVL never returned to pre-hack levels.
Meanwhile the lesson generalized poorly. Two months later, other bridges with similar Merkle-verification patterns shipped without adding explicit zero-root checks. The class of bug is still present in production code.
If you're writing a bridge, a light client, or any verification contract that uses Merkle proofs against a stored root, audit your initialization path today. The bug is trivial. The cost is not.
FAQ
Was the Nomad hack a smart contract bug or a key compromise?
Pure smart contract bug. No keys were stolen, no admin functions abused. A routine contract upgrade initialized the `confirmAt` mapping such that the zero hash was treated as a valid Merkle root. Since unproven messages return zero from the `messages` mapping by default, every forged message passed verification. The exploit required no signatures, no privileged access, and no cryptographic sophistication. It is the cleanest example of a state-initialization bug in DeFi history.
Why could hundreds of different wallets drain Nomad simultaneously?
Because the bug was visible on-chain and exploiting it required no skill. After the first attacker's transaction landed on Etherscan, anyone could copy the calldata, swap in their own recipient address, and resubmit. There was no rate limit, no per-message executed flag, and no off-chain monitoring that could pause the contract in time. The result was the first crowd-sourced bridge exploit: permissionless looting that has not been repeated at this scale since.
How do I check my own contract for this class of bug?
Search for every mapping that influences access control or verification. For each one, ask: what happens if the key is unset and the returned value is zero? If a zero value can satisfy your require checks, you have a vulnerability. Add explicit `!= 0` guards on the read side and `!= 0` guards on the write side. Also audit your initializer — make sure it cannot be replayed or front-run, and that it writes every storage slot the verification logic depends on.
Would a unit test have caught this?
Yes, trivially. A single test calling `process()` with a random unproven message would have reverted on a correct implementation and succeeded on the buggy one. The bug survived because integration tests focused on the happy path — submit valid proof, expect success — rather than negative paths like submit-nothing-and-expect-revert. Always test that invalid inputs fail. Fuzz your verification functions with zero-byte and default-value inputs. This is one place where 100% branch coverage genuinely matters.
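As a language-agnostic illustration, the missing negative-path property fits in a few lines of Python. This is a toy model of the Replica check, not a real test suite; in practice you would write the equivalent as a Foundry or Hardhat test against the contract itself:

```python
import os

ZERO = b"\x00" * 32

def make_process(confirm_at: dict):
    # Minimal model of the Replica check: an unproven message's stored
    # root is zero, and the message executes if that root is confirmed.
    messages: dict = {}
    def process(message: bytes) -> bool:
        root = messages.get(message, ZERO)
        return confirm_at.get(root, 0) != 0
    return process

def forged_messages_never_verify(process) -> bool:
    # The negative-path property Nomad's suite lacked:
    # random never-proven messages must never verify.
    return all(not process(os.urandom(64)) for _ in range(100))

buggy = make_process({ZERO: 1})          # post-upgrade state: zero blessed
fixed = make_process({b"\x01" * 32: 1})  # only a real root confirmed
```

Running the property against the buggy configuration fails immediately; against the fixed one it holds.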
Are optimistic bridges inherently more dangerous than light client bridges?
Not inherently, but they have a larger trusted-storage surface. Optimistic bridges store roots that have been challenged-or-not-challenged, which means a single state variable controls whether millions of dollars move. Light client bridges verify consensus signatures per message, which is more expensive but harder to corrupt with an init bug. Either model is safe with rigorous engineering; both are catastrophic without it. The bug class — trusting unverified storage — is universal.