Poly Network Hack — $611M From a Cross-Chain Manager Bug
In August 2021, an attacker drained $611M from Poly Network by calling a privileged function through the cross-chain manager itself. No private key was stolen. The contract trusted its own messages, and that was the entire bug.
The Bug in One Sentence
A cross-chain message handler was allowed to call any function on any contract it owned — including the function that changes who controls the bridge.
That's it. No reentrancy, no integer overflow, no flash loan gymnastics. Just a privileged proxy that did exactly what it was told, by someone who realized nobody was checking what it was being told to do.
How Poly Network Was Architected
Poly Network's Ethereum side had two key contracts:
- **EthCrossChainManager (ECCM)** — verifies cross-chain messages signed by off-chain "keepers" and dispatches them by calling target contracts.
- **EthCrossChainData (ECCD)** — stores bridge state, including the keeper public keys. Owned by ECCM.
When a message arrived from another chain, ECCM verified signatures and then called the target contract using low-level `call` with attacker-controllable calldata. The intended flow was: cross-chain message → ECCM → asset lock proxy → release tokens to user.
The problem: ECCM owned ECCD. And ECCD had a function called `putCurEpochConPubKeyBytes` restricted to its owner. Guess who could trigger any call from ECCM's address?
The Vulnerable Pattern
Here's the simplified shape of the bug:
```solidity
contract EthCrossChainManager {
    function verifyHeaderAndExecuteTx(
        bytes memory proof,
        bytes memory rawHeader,
        bytes memory toMerkleValue
    ) public returns (bool) {
        // 1. verify keeper signatures over rawHeader
        require(verifySig(rawHeader, curEpochPubKeys), "bad sig");

        // 2. decode the destination call
        (address toContract, bytes memory method, bytes memory args)
            = decode(toMerkleValue);

        // 3. execute it. anything. anywhere.
        (bool ok, ) = toContract.call(
            abi.encodePacked(
                bytes4(keccak256(abi.encodePacked(method, "(bytes,bytes,uint64)"))),
                args
            )
        );
        require(ok);
    }
}
```
The attacker noticed that the method name was hashed into a function selector with no allowlist. Since selectors are only 4 bytes, they could brute-force method-name strings until one, after the dispatcher appended `(bytes,bytes,uint64)` and hashed it, collided with a keeper-changing function on ECCD.
They found one. The method name `f1121318093` produced the same 4-byte selector as `putCurEpochConPubKeyBytes(bytes)` under this concat-and-hash scheme. They submitted a "cross-chain message" telling ECCM to call ECCD with that method name and their own public key as the argument.
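How cheap is that brute force? Here is a small Python sketch of the search. The stdlib has no keccak256, so SHA-256 stands in for it, and the selector is truncated to 2 bytes instead of 4 so the demo finishes instantly; the principle is identical, only the work factor changes (a real 4-byte search is about 2^32 hashes).

```python
import hashlib
from itertools import count

def selector(signature: str, nbytes: int = 2) -> bytes:
    """Truncated hash of a function signature string.

    Real Ethereum selectors are the first 4 bytes of keccak256; the
    stdlib lacks keccak, so SHA-256 stands in here, truncated to 2
    bytes so the brute force below terminates in milliseconds.
    """
    return hashlib.sha256(signature.encode()).digest()[:nbytes]

def find_collision(target_sig: str, appended_types: str) -> str:
    """Brute-force a method name that, once the dispatcher appends its
    fixed type string and hashes, collides with target_sig."""
    target = selector(target_sig)
    for i in count():
        name = f"f{i}"
        if selector(name + appended_types) == target:
            return name

# The Poly dispatcher appended "(bytes,bytes,uint64)" to the
# attacker-supplied method name before hashing.
name = find_collision("putCurEpochConPubKeyBytes(bytes)", "(bytes,bytes,uint64)")
print(name, "collides with putCurEpochConPubKeyBytes at 2 bytes")
```

Each extra selector byte multiplies the search by 256, which is why 4 bytes still falls in minutes on commodity hardware.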
ECCM dutifully called ECCD. ECCD checked `msg.sender == owner` — yes, the owner is ECCM. Keeper rotated. Game over.
Then the attacker signed normal-looking unlock messages with their new keeper key and drained $611M across three chains.
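The whole escalation fits in a deliberately tiny Python model. The class and field names below are illustrative, not the real contracts; the point is that the dispatch happens from the manager's own identity, so the data contract's owner check is satisfied by construction.

```python
class ECCD:
    """Data contract: owner-only keeper registry. Its owner is the ECCM."""
    def __init__(self, owner):
        self.owner = owner
        self.keeper_pubkey = "legit-keeper-key"

    def put_cur_epoch_pubkey(self, sender, new_key):
        # The only guard: the caller must be the owner. The owner is ECCM.
        assert sender is self.owner, "only owner"
        self.keeper_pubkey = new_key

class ECCM:
    """Manager: checks keeper signatures, then dispatches the
    destination call with NO allowlist on target or method."""
    def __init__(self):
        self.eccd = ECCD(owner=self)

    def verify_header_and_execute(self, signed_ok, target, method, args):
        assert signed_ok, "bad sig"           # signature check passes
        getattr(target, method)(self, *args)  # dispatched FROM ECCM's identity

eccm = ECCM()
# A "cross-chain message" telling ECCM to call its own data contract:
eccm.verify_header_and_execute(True, eccm.eccd, "put_cur_epoch_pubkey", ["attacker-key"])
print(eccm.eccd.keeper_pubkey)
```

After the call, the keeper registry holds the attacker's key; no check anywhere failed.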
Why Signature Verification Didn't Help
This is the part that catches people. ECCM did verify keeper signatures. So how did the first malicious message get through?
It didn't need to. The keepers sign cross-chain messages that users initiate on the source chain; they attest that a message came from that chain, not that its destination call is safe. The attacker initiated a cross-chain transaction whose payload targeted ECCD, and it was relayed and signed like any other. Once the keeper key was swapped, everything afterward was "validly" signed by the attacker's key.
The deeper failure: ECCM was both the message executor and the privileged owner of state-controlling contracts. One role should never have implied the other.
The Fix
Two defenses; both should be applied:
**1. Allowlist callable targets and selectors.**
```solidity
mapping(address => mapping(bytes4 => bool)) public allowedCall;

function verifyHeaderAndExecuteTx(...) public {
    require(verifySig(rawHeader, curEpochPubKeys), "bad sig");
    (address toContract, bytes memory method, bytes memory args) = decode(toMerkleValue);
    bytes4 selector = bytes4(keccak256(abi.encodePacked(method, "(bytes,bytes,uint64)")));
    require(allowedCall[toContract][selector], "not allowed");
    (bool ok, ) = toContract.call(abi.encodePacked(selector, args));
    require(ok);
}
```
**2. Separate the executor from the admin.** ECCM should not own ECCD. Owner-only functions on the data contract should be guarded by a multisig or governance timelock, not by whoever happens to relay messages.
```solidity
contract EthCrossChainData {
    address public admin;    // multisig behind a timelock
    address public executor; // ECCM; may call message-handling functions only
    bytes public curEpochPubKeys;

    function putCurEpochConPubKeyBytes(bytes calldata pk) external {
        require(msg.sender == admin, "only admin");
        curEpochPubKeys = pk;
    }
}
```
The principle is boring and unforgiving: **the entity that processes untrusted input must not be the entity that holds privileged rights over your state.**
What Made This $611M Instead of $61
Three multipliers turned a logic bug into the largest DeFi hack of 2021:
1. **Unified architecture across chains.** Same buggy ECCM deployed on Ethereum, BSC, and Polygon. One bug, three drains.
2. **Concentration of liquidity.** Poly Network's lock proxies held the full TVL of bridged assets — no per-message rate limit, no daily withdrawal cap.
3. **Single-keeper-set design.** Rotating one variable rotated authority over everything.
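Multiplier 2 is the cheapest to fix. A hypothetical per-asset rolling withdrawal cap, sketched in Python (the window and cap values are illustrative; a real bridge would keep this state on-chain):

```python
import time
from collections import deque

class WithdrawalLimiter:
    """Rolling-window withdrawal cap for one asset. Withdrawals that
    would push the window total over the cap are refused, so a logic
    bug bleeds slowly instead of draining the full TVL at once."""
    def __init__(self, daily_cap: int, window: float = 86_400.0):
        self.daily_cap = daily_cap
        self.window = window
        self.events = deque()   # (timestamp, amount)
        self.in_window = 0

    def allow(self, amount: int, now=None) -> bool:
        now = time.time() if now is None else now
        # Expire withdrawals that fell out of the window.
        while self.events and self.events[0][0] <= now - self.window:
            _, old = self.events.popleft()
            self.in_window -= old
        if self.in_window + amount > self.daily_cap:
            return False        # over cap: queue for manual review instead
        self.events.append((now, amount))
        self.in_window += amount
        return True

limiter = WithdrawalLimiter(daily_cap=1_000_000)
assert limiter.allow(600_000, now=0.0)
assert not limiter.allow(500_000, now=1.0)    # would exceed the cap
assert limiter.allow(500_000, now=86_401.0)   # old withdrawal expired
```

With a cap like this in place, the same keeper-swap bug yields one day's cap, not the entire bridge.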
The attacker returned almost all funds within two weeks, claimed they did it "for fun," and was offered a $500,000 bounty plus the role of chief security advisor. Crypto is weird. The next attacker will not be a performance artist.
Practical Checklist for Cross-Chain Code
- Never let a relayer-controlled call reach an admin function. Allowlist target + selector.
- Separate `executor` and `admin` roles on every state-holding contract.
- Add per-asset withdrawal rate limits — the [Nomad bridge hack](https://www.cryptohawking.com/blog) showed what happens without them.
- Treat the message decoder as adversarial input. Fuzz it.
- Audit the **graph of ownership**, not just individual contracts. Ask: "if X is compromised, what can it do to Y?"
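The last checklist item can be mechanized. A sketch of an ownership-graph audit in Python: model "X holds privileged rights over Y" as directed edges and compute what an attacker reaches from any contract that consumes untrusted input (the contract names and edges are illustrative, modeled on the pre-fix Poly layout):

```python
from collections import defaultdict, deque

def reachable_privileges(edges, start):
    """BFS over 'A holds privileged rights on B' edges: everything
    returned is controllable by whoever compromises `start`."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    seen, frontier = set(), deque([start])
    while frontier:
        node = frontier.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Illustrative pre-fix Poly-style graph: the message executor owns
# the keeper registry, and the keeper set gates every lock proxy.
edges = [
    ("ECCM", "ECCD"),       # ECCM is ECCD's owner
    ("ECCD", "LockProxy"),  # keeper set controls asset releases
]
print(reachable_privileges(edges, "ECCM"))  # the set {'ECCD', 'LockProxy'}
```

If the set reachable from your message executor contains anything that can move funds or rotate keys, that is the bug, before any line of Solidity is read.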
If you're shipping a bridge or any contract that proxies external calls, run it through a [free AI audit](https://www.cryptohawking.com/audit) first to catch the obvious privilege-escalation paths. For production bridge code where eight figures are at stake, a [manual audit](https://www.cryptohawking.com/audit/manual) is not optional — it's cheaper than the alternative by four orders of magnitude.
The Lesson
Poly Network wasn't hacked because cryptography failed. It was hacked because a contract was trusted to call itself into doing things its designers never enumerated. Privilege should be enumerated, not implied. Every cross-chain call that lands on your contracts should be answering the question "is this exact action allowed?" — not "is this caller allowed to do anything?"
Write the allowlist. Split the roles. Cap the rate. Then sleep.
FAQ
Was the Poly Network attacker actually able to forge signatures?
No, and that's the elegant part. The attacker never broke any cryptography. They used the cross-chain manager's own call dispatch to invoke putCurEpochConPubKeyBytes on the data contract, swapping the legitimate keeper public keys with their own. After that, every subsequent withdrawal was 'validly' signed — by their key. The cryptography worked exactly as designed. The bug was in the authorization model, where the message-execution role implicitly held admin rights over the keeper registry.
Why did the function selector collision work?
EthCrossChainManager computed the target function selector by concatenating a method-name string from the cross-chain message with a fixed type string, then hashing the result. Because Solidity selectors are only 4 bytes, brute-forcing a string that hashes to the same selector as putCurEpochConPubKeyBytes is computationally cheap, on the order of 2^32 hashes. This is a recurring lesson: never let user input determine a function selector without an allowlist. 4-byte selectors are not a security boundary; they're a routing convenience.
How would a separation of roles have prevented this?
If EthCrossChainData required an independent admin multisig to call putCurEpochConPubKeyBytes — instead of trusting whoever owned it — EthCrossChainManager's call would have reverted with 'only admin'. The attacker could still spam malicious cross-chain messages, but none of them could escalate privilege. Splitting executor and admin into different addresses, with the admin behind a timelock, contains the blast radius when the executor processes adversarial input. It's the same principle as separating root from your web server's runtime user.
Did Poly Network change anything after the hack?
They patched the specific bug, added an allowlist on dispatched calls, and rotated keepers. But Poly was hacked again in July 2023 for around $10M when keeper private keys were compromised — a different bug, same fragile single-keeper-set design. The architectural lesson — concentration of authority in one rotatable variable — was not fully addressed. Robust bridges today use threshold signatures, multi-prover designs, or optimistic verification with fraud windows, not a single mutable keeper set behind a privileged proxy.
What should bridge developers check in their own code today?
Three things. First: every external call your relayer or message handler makes must hit an allowlist of (target, selector) pairs — no exceptions. Second: every state-changing admin function on contracts you own must be guarded by an address that is not the message executor, ideally a timelocked multisig. Third: add daily and per-tx withdrawal caps on lock proxies so a logic bug bleeds slowly instead of instantly. If any of these three are missing, you have a Poly Network waiting to happen.