What the KelpDAO Incident Reveals About RPC Trust in Cross-Chain Systems

The KelpDAO exploit of 18 April 2026 has already been widely discussed, and it has often been described as a bridge incident. That framing misses what actually went wrong. Dig into the details and it becomes clear very quickly that there was no failure in contract logic, no broken signature scheme, and no evidence of compromised keys. Instead, the system behaved exactly as designed, but it did so using data that did not reflect the source chain's actual state.
This distinction matters because it changes how the incident should be understood. The failure did not originate in execution but in how the system determined what had happened on-chain. Once that assumption was wrong, everything that followed was a consequence rather than a cause.
In short, the KelpDAO hack was ultimately a problem of blockchain data verification, not contract correctness. With that in mind, let's dive into what went wrong, why monitoring tools did not flag it, how far the impact spread, and what lessons we should all take from it.
The Incident Was Not a Smart Contract Failure

Most initial interpretations got this wrong. The contracts involved executed correctly, the message format remained valid, and the verifier produced an attestation based on the data it received. From a purely internal perspective, the system behaved as expected.
Nothing inside the system was broken. So what happened?
The issue is that valid logic was applied to invalid inputs, which is a very different category of failure. The failure did not come from how the system executed, but from what it assumed to be true before execution began.
What Happened in the KelpDAO Incident
The sequence of events is not particularly complex once the facts are laid out.
At 17:35 UTC on 18 April 2026, in Ethereum block 24,908,285, the system accepted a cross-chain message that should not have existed. There was no corresponding burn event on the source chain, Unichain, yet the message was accepted as valid, triggering the release of approximately 116,500 rsETH. In simpler terms, tokens were created on Ethereum without a matching burn or lock event on the source chain.
This alone is enough to indicate a failure, but the inconsistency becomes clearer when you examine the transaction sequence. On the source chain, the message nonce remained at 307, yet on Ethereum the system accepted nonce 308, a message the source chain had never emitted.
This gap should not be possible.
In a correctly functioning system, nonces progress in order, with each step reflecting a real event on the source chain. Here, the system effectively skipped a step and accepted a message that had no underlying transaction. The point is not just that the message was invalid; it is that the system had no way to recognise it as invalid, because it was operating on data that did not reflect the actual state of the chain.
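To make this concrete, here is a minimal sketch of the check a destination-side system would need before acting on such a message: confirm that a matching burn event actually exists in a finalised block on the source chain. The endpoint, bridge address, and event layout below are hypothetical placeholders, not KelpDAO's or LayerZero's real configuration.

```typescript
// Hypothetical sketch: the endpoint, bridge address, and event layout are
// placeholders, not KelpDAO's or LayerZero's real configuration.
const SOURCE_RPC = "https://unichain-rpc.example.com";
const BRIDGE = "0x0000000000000000000000000000000000000001";
// Placeholder for the keccak256 topic of a Burn(uint64 indexed nonce, ...) event.
const BURN_TOPIC = "0x" + "00".repeat(32);

async function rpc(method: string, params: unknown[]): Promise<any> {
  const res = await fetch(SOURCE_RPC, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const body = await res.json();
  if (body.error) throw new Error(body.error.message);
  return body.result;
}

// Before accepting message nonce N on the destination chain, confirm that the
// source chain emitted a burn log for that nonce in a finalised block.
async function burnExistsForNonce(nonce: bigint): Promise<boolean> {
  const logs: unknown[] = await rpc("eth_getLogs", [{
    address: BRIDGE,
    topics: [BURN_TOPIC, "0x" + nonce.toString(16).padStart(64, "0")],
    fromBlock: "0x0",
    toBlock: "finalized",
  }]);
  return logs.length > 0;
}

// In this incident, the equivalent check for nonce 308 would have found no log.
```

Of course, a check like this is only as trustworthy as the RPC data behind it, which is precisely the layer the attacker targeted; hardening that layer is covered further down.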
How the Attack Worked at the Infrastructure Layer
To understand this as a cross-chain exploit, the focus has to shift away from contracts and into the infrastructure that feeds them. The verifier, in this case a LayerZero DVN (Decentralised Verifier Network), relied on a mix of internally operated and external RPC providers to read source-chain state, and this is where the attacker concentrated their efforts.
In fact, the attack did not begin with contract interaction, but with mapping the RPC endpoints used by the verifier. Once those endpoints were identified, a subset of nodes was compromised, allowing the attacker to control what those nodes reported about the chain. At the same time, the remaining nodes were rendered unavailable through targeted disruption, reducing the system’s available data sources to only those under the attacker's control.
What really matters at that stage is how the system responded under those conditions. It did not pause or attempt to validate what it was seeing, but instead, it continued operating with whichever infrastructure remained reachable. In practice, that meant relying on nodes that were already under the attacker’s control.
The attacker did not need to break the protocol or bypass cryptography. They only needed to influence what the system believed had happened on-chain. As put by our CTO Isaac Zarb:
“The attacker didn’t break smart contracts or steal keys. They poisoned reality.”
Where the System Broke
Single Verifier Trust Model
The affected pathway relied on a 1-of-1 DVN configuration, which, in simple terms, means that a single verifier was sufficient to approve a message and trigger execution. This was problematic: there was no requirement for independent confirmation and no mechanism to challenge the attestation once it had been produced.
LayerZero’s own integration guidance makes this risk clear: its integration checklist advises developers not to rely on single-verifier setups in production environments.

Basically, a single verifier does not create a failure on its own, but it removes the ability to detect one when it occurs.
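The difference between the two models is easy to illustrate. Below is a generic attestation-quorum check, a sketch rather than LayerZero's actual API: with a threshold of one and a single verifier, one forged attestation approves a message, whereas a 2-of-3 threshold means a single compromised verifier cannot approve anything on its own.

```typescript
// Generic illustration of an M-of-N attestation check (not LayerZero's API).
interface Attestation {
  verifier: string;     // identifier of the independent verifier
  payloadHash: string;  // hash of the message the verifier claims to have seen
}

function isApproved(
  attestations: Attestation[],
  expectedHash: string,
  threshold: number,
): boolean {
  // Count distinct verifiers attesting to the same payload hash.
  const agreeing = new Set(
    attestations
      .filter((a) => a.payloadHash === expectedHash)
      .map((a) => a.verifier),
  );
  return agreeing.size >= threshold;
}

// threshold = 1 (the incident's configuration): one forged attestation is enough.
// threshold = 2 of 3 verifiers: a single compromised verifier cannot approve alone.
```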
RPC Treated as a Source of Truth
The most fundamental issue sits at the data layer. The verifier treated RPC responses as authoritative without validating them against alternative sources. There was no comparison between providers, no attempt to detect inconsistencies, and therefore no rejection of conflicting data.
In effect, the system accepted whichever response was available and internally consistent. That assumption holds in normal operating conditions, but it becomes fragile when the infrastructure itself is part of the attack. Once RPC responses were influenced, the system lost the ability to distinguish real state from fabricated state.
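In code, that fragile pattern looks roughly like plain failover: try each endpoint in turn and treat the first answer that comes back as the truth. The URLs below are placeholders; the point is that nothing cross-checks the response.

```typescript
// A sketch of failover without verification: the first reachable endpoint wins.
// Endpoint URLs are placeholders.
const ENDPOINTS = [
  "https://rpc-a.example.com",
  "https://rpc-b.example.com",
  "https://rpc-c.example.com",
];

async function call(url: string, method: string, params: unknown[]) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const body = await res.json();
  if (body.error) throw new Error(body.error.message);
  return body.result;
}

async function failoverCall(method: string, params: unknown[]) {
  for (const url of ENDPOINTS) {
    try {
      // Whatever this endpoint reports is accepted without comparison.
      return await call(url, method, params);
    } catch {
      // Endpoint unreachable: silently move on to the next one.
    }
  }
  throw new Error("all endpoints unreachable");
}
```

If the reachable endpoints happen to be the compromised ones, this loop delivers the attacker's view of the chain, and nothing downstream will notice.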
Why Redundancy Did Not Prevent the Incident
One of the more prevalent misconceptions around the KelpDAO exploit is that it could have been avoided by simply adding more RPC providers. While that might sound reasonable at first, it does not address what actually failed. In fact, multiple endpoints were already in place.
The issue was not the number of providers, but the absence of any mechanism to validate their responses against one another. Some nodes were compromised, while others were made unavailable, and the system continued operating with the remaining sources without questioning their reliability.
As best outlined by our CTO on LinkedIn:
“We run nodes out of multiple locations with multiple redundancies: That solves availability, but the KelpDAO incident makes it clear that redundancy must be coupled with in‑code verification.”
In simpler terms, redundancy ensures that a system keeps running, but it does not ensure that it is running on correct data.
The KelpDAO exploit is a clear example of an RPC failure in crypto, where availability masks deeper issues at the data layer. If you want to explore how these failures affect systems in practice, the breakdown of what happens when an RPC goes down provides useful context.
Why Monitoring Did Not Catch It
With the attack model in view, the lack of early detection is easier to understand. The manipulation was selective: compromised nodes returned manipulated data to the verifier while continuing to provide normal responses to other observers.
Therefore, dashboards appeared consistent, monitoring systems did not trigger alerts, and external observers saw no obvious issues.
From the outside, the system looked healthy. Monitoring can only report what it sees, and when what it sees is already compromised, it cannot provide meaningful protection.
How the Impact Propagated
Once the incorrect state was accepted, the rest of the system behaved exactly as expected. A portion of the released rsETH was deposited into lending markets, where it was used as collateral to borrow ETH-based assets. Of the 116,500 rsETH the attacker took control of, 89,567 rsETH were deposited on Aave.

From that point onward, the asset appeared valid across the ecosystem, which meant that pricing mechanisms treated it normally, and downstream protocols accepted it according to their own rules. The impact spread not because of additional failures, but because the same trust assumptions were applied across multiple systems.
What This Means for System Design
The lesson here extends beyond this specific incident. Any system that depends on external data needs to consider how that data is verified before it is used. This applies to cross-chain messaging, oracle systems, and any workflow that relies on off-chain reads.
At this point, it helps to compare how systems treat oracles and RPC nodes.
Oracles are usually treated as untrusted inputs. Systems expect them to be wrong, so their data is checked, compared, and validated before use. RPC nodes, on the other hand, are often treated as simple infrastructure, and their responses are typically accepted without question.
In reality, that assumption does not hold. As this incident showed, RPC nodes can be influenced or manipulated in the same way as any external data source. They should therefore be treated as part of the system’s trust model, not just as a way to access the chain.
What Secure RPC Architecture Requires
Avoiding this class of RPC failure requires a shift in approach: it is not enough to add more endpoints; systems need to change how they use them.
Independent providers still matter, but only if they are genuinely independent. Response comparison is equally essential, particularly for high-value actions, as it allows systems to detect inconsistencies before acting.
This is where quorum-based verification becomes important. Instead of relying on a single RPC response, systems can query multiple providers and only proceed if their answers match.
Infrastructure providers like Spectrum Nodes make this possible by providing access to independent nodes and regions, but the validation itself must be implemented in the application. Without that layer, multiple endpoints still behave like a single source of truth.
Perhaps most importantly, systems need to be able to refuse to act, especially when only one source is available. Under such conditions, continuing execution is often the wrong decision. This shift from fail-open to fail-safe is a key part of resilient design.
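As a rough illustration of these principles, the sketch below queries several providers in parallel, proceeds only when a quorum of them return the same answer, and otherwise refuses to act. The endpoints and the 2-of-3 threshold are illustrative assumptions, not Spectrum Nodes' or KelpDAO's actual configuration.

```typescript
// Minimal sketch of quorum-based RPC verification with fail-safe behaviour.
// Endpoint URLs and the threshold are illustrative assumptions.
const PROVIDERS = [
  "https://rpc-a.example.com",
  "https://rpc-b.example.com",
  "https://rpc-c.example.com",
];
const QUORUM = 2; // minimum number of identical answers required to proceed

async function call(url: string, method: string, params: unknown[]) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const body = await res.json();
  if (body.error) throw new Error(body.error.message);
  return body.result;
}

// Query every provider, group identical answers, and only act on a result
// that at least QUORUM independent providers agree on.
async function quorumCall(method: string, params: unknown[]) {
  const settled = await Promise.allSettled(
    PROVIDERS.map((url) => call(url, method, params)),
  );
  const counts = new Map<string, number>();
  for (const r of settled) {
    if (r.status !== "fulfilled") continue; // unreachable providers do not vote
    const key = JSON.stringify(r.value);
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  for (const [key, count] of counts) {
    if (count >= QUORUM) return JSON.parse(key);
  }
  // Fail-safe: no quorum (including the single-source case) means no action.
  throw new Error("no quorum across RPC providers; refusing to act");
}

// Example: only accept a finalised block that at least two providers report identically.
// const block = await quorumCall("eth_getBlockByNumber", ["finalized", false]);
```

The deliberate design choice is the final branch: when agreement cannot be established, the system stops rather than continuing on whichever data happens to be available.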
These principles are closely aligned with broader infrastructure practices. For a deeper perspective, the discussion on decentralised blockchain infrastructure is worth reviewing, along with the role performance plays in reliability, as outlined in Why High Performance RPC Matters.
Practical Questions for Teams
For teams building cross-chain or data-dependent systems, the implications are quite direct. It is worth asking whether RPC responses are verified before execution, whether a single endpoint can influence system behaviour, and whether inconsistencies across providers can be detected.
Furthermore, it is equally important to consider whether the system prioritises availability over correctness and whether it can identify conflicting views of the same block if they occur.
These questions are not theoretical; they map directly to the conditions that enabled the KelpDAO exploit.
Where Spectrum Nodes Fit
The takeaway from this incident is not that systems need more endpoints, but that they need to use them differently. Reliable RPC infrastructure is part of the solution, but the real value lies in how systems source, compare, and validate data before acting on it.
This is the layer Spectrum Nodes focuses on. It is not just about providing access, but about supporting how that access is used within a system’s architecture.
If you want to see how this works in practice, the breakdown of How Spectrum Handles Requests at Scale offers a more concrete view. And if you are looking to apply these principles to your own setup, you can contact our team to explore it further.
Conclusion
This incident did not break cryptography, but it exposed the assumption that the data used to verify the blockchain state is inherently trustworthy. This assumption holds under normal conditions, but it becomes fragile once infrastructure itself becomes part of the attack surface.
For systems that move value across chains, contract-level correctness is only part of the picture. The data those contracts rely on needs to be verified just as carefully. Without that, even a well-designed system can produce the wrong result.
Frequently Asked Questions
What is an RPC node in blockchain?
An RPC node is an interface that allows systems to read blockchain data and submit transactions. It provides access to chain state, such as balances, events, and blocks, through remote procedure calls.
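As a minimal illustration, this is what such a call looks like over plain HTTP; the endpoint URL is a placeholder for any Ethereum JSON-RPC provider.

```typescript
// Ask a node for the latest block number via raw JSON-RPC (Node 18+, ES module).
const res = await fetch("https://eth-mainnet.example-rpc.com", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "eth_blockNumber",
    params: [],
  }),
});
const { result } = await res.json();
console.log(parseInt(result, 16)); // block height as a decimal number
```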
How did the KelpDAO exploit happen?
A forged cross-chain message was accepted because the verifier relied on manipulated RPC responses. The attacker influenced which data sources were available, leading the system to accept an incorrect view of the chain state.
Why are multiple RPC providers not enough?
Multiple providers improve availability, but they do not guarantee correctness. Without response comparison or validation, a system can still accept incorrect data if it uses compromised sources.
What is RPC poisoning in crypto?
RPC poisoning is a scenario in which a node returns incorrect or manipulated blockchain data to a specific consumer while appearing normal to others. This can lead systems to act on false information.
How can blockchain systems verify data correctly?
Systems can verify data by comparing responses across independent providers, requiring quorum agreement, and performing consistency checks before execution. This approach is often referred to as multi-RPC validation or quorum-based verification.