DORA in Practice: Lessons from the Trenches of Implementation

Practical experience implementing the EU Digital Operational Resilience Act (DORA)


Idaye Braimah

4/6/2026 · 8 min read


When I first sat down with a major financial services organization to map out their Digital Operational Resilience Act (DORA) compliance strategy, I expected the usual regulatory implementation exercise — dense requirements, tight timelines, and a fair amount of organizational friction. What I did not fully anticipate was how much of our time and energy would be consumed not by the Act's headline obligations, but by something far more foundational: figuring out what the definitions actually mean in practice and how far they reach.

DORA, which became applicable across the European Union on January 17, 2025, is an ambitious piece of regulation. It aims to harmonize digital operational resilience requirements for financial entities, covering everything from ICT risk management and incident reporting to third-party risk oversight and resilience testing. On paper, the framework is logical and well-structured. In practice, however, its implementation reveals a series of interpretive challenges that demand careful judgment from legal and compliance teams. This post shares some of the key lessons I have drawn from that experience, focusing on four themes that I believe every practitioner navigating DORA should have on their radar.

The Broad Universe of "ICT": A Definition That Defines Everything

The first challenge we encountered — and one that continued to ripple through every subsequent workstream — was the sheer breadth of DORA's definition of "ICT" and, by extension, "ICT services." Under Article 3 of the Regulation, ICT services are defined expansively as digital and data services provided through ICT systems to one or more internal or external users on an ongoing basis, a formulation that reaches hardware, software, and network infrastructure alike. This is not limited to what most organizations would instinctively think of as their "technology stack." It potentially captures a wide range of services and arrangements that, in ordinary business parlance, might not be described as technology services at all.

Consider, for example, a subscription to a market data terminal, a cloud-hosted HR platform, or even an outsourced document management service. Under a plain reading of DORA's definitions, each of these could qualify as an ICT service provided by a third party. The practical consequence is significant: the definition effectively sets the perimeter for almost every major obligation under the Regulation. What counts as "ICT" determines which third-party relationships must be governed, which incidents must be reported, and which systems fall within the scope of resilience testing.

For legal and compliance teams, this means that the definitional analysis is not a preliminary step to be completed and set aside. It is a living, ongoing exercise that must be revisited as the organization's technology landscape evolves and as regulatory guidance matures. I would encourage practitioners to resist the temptation to adopt an overly narrow reading of "ICT" in the hope of limiting their compliance burden. The safer — and, I would argue, the more defensible — approach is to begin with a broad interpretation and then apply a risk-based lens to prioritize efforts. Regulators are far more likely to scrutinize an entity that failed to capture a material ICT relationship than one that initially cast a wide net and refined its approach over time.

The Register of Information: Where Definitions Meet Documentation

Nowhere does the breadth of DORA's ICT definition create more immediate, tangible work than in the obligation to establish and maintain a register of information under Article 28(3). The register is intended to serve as a comprehensive inventory of all contractual arrangements with ICT third-party service providers. It is both a compliance artifact and a supervisory tool — regulators can request it at any time, and financial entities are expected to keep it current and accurate.

In theory, this is a straightforward data collection exercise. In practice, it is anything but. Because the definition of "ICT services" is so broad, the universe of contractual arrangements that must be captured in the register is correspondingly large. During our initial scoping exercise, we found that the number of potentially in-scope arrangements far exceeded what the client's procurement and vendor management teams had anticipated. The register did not just need to include the obvious cloud service providers and core banking system vendors. It also needed to account for dozens of smaller, sometimes informal, technology-adjacent relationships — from SaaS tools adopted by individual business units to data analytics providers engaged under legacy contracts with ambiguous service descriptions.

The practical difficulties here are threefold. First, there is a data gathering challenge. Many organizations simply do not have a centralized, up-to-date inventory of all their third-party ICT arrangements. Contracts may be held across multiple departments, in different formats, and with varying levels of detail. Second, there is a classification challenge. For each arrangement, the entity must determine whether the provider qualifies as an "ICT third-party service provider" under DORA and whether the services rendered fall within the Regulation's scope. This requires a careful, contract-by-contract analysis that cannot be fully automated. Third, there is a maintenance challenge. The register is not a one-time deliverable. It must be kept current as contracts are entered into, amended, or terminated, which demands robust internal processes and clear lines of ownership.

My advice to practitioners is to start early and be pragmatic. Begin with a reasonable, risk-prioritized approach to populating the register, focusing first on the arrangements that are most clearly in scope and most critical to the entity's operations. Document your methodology and your reasoning for inclusion or exclusion decisions, so that you can demonstrate a thoughtful, good-faith approach if questioned by a supervisor. And invest in the governance infrastructure — templates, workflows, ownership protocols — that will make ongoing maintenance sustainable. The register of information is not a box-ticking exercise. It is a window into the organization's ICT dependency landscape, and regulators will treat it as such.
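To make that methodology concrete, here is a minimal sketch of how an internal register entry might be modeled, assuming a simple Python data model. The field names, classification values, and review interval are illustrative choices of mine, not the reporting templates issued under the applicable technical standards, which a production register would need to follow.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class ScopeDecision(Enum):
    """Outcome of the contract-by-contract DORA scoping analysis."""
    IN_SCOPE = "in scope"          # qualifies as an ICT service under the Regulation
    OUT_OF_SCOPE = "out of scope"  # does not qualify; rationale still recorded
    UNDER_REVIEW = "under review"  # awaiting legal/compliance assessment


@dataclass
class RegisterEntry:
    """One contractual arrangement tracked in the internal register.

    Illustrative fields only: a production register must follow the
    reporting templates specified in the applicable technical standards.
    """
    provider_name: str
    service_description: str
    business_owner: str                # who is accountable for keeping this entry current
    scope_decision: ScopeDecision
    scope_rationale: str               # documented reasoning for inclusion or exclusion
    supports_critical_function: bool   # feeds the risk-based prioritization
    contract_start: date
    contract_end: Optional[date] = None
    last_reviewed: Optional[date] = None


def needs_review(entry: RegisterEntry, today: date, max_age_days: int = 365) -> bool:
    """Flag entries whose scoping analysis is missing, pending, or stale."""
    if entry.scope_decision is ScopeDecision.UNDER_REVIEW:
        return True
    if entry.last_reviewed is None:
        return True
    return (today - entry.last_reviewed).days > max_age_days
```

Recording the rationale next to the decision is what turns the register from a spreadsheet into evidence of the good-faith methodology described above.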

Incident Reporting: Navigating the Gray Areas

If the register of information is where DORA's definitional breadth creates documentation challenges, incident reporting is where it creates judgment challenges. Under Articles 17 through 23, financial entities are required to classify and report ICT-related incidents, with "major incidents" triggering mandatory notification to competent authorities within prescribed timeframes.

The difficulty lies in determining what qualifies as a "major" incident. DORA sets out a series of classification criteria, including the number of clients affected, the duration and geographical spread of the incident, the data losses involved, the criticality of the services impacted, and the economic impact. Delegated regulations further specify quantitative and qualitative thresholds. In principle, these criteria provide a structured framework for assessment. In practice, they leave substantial room for interpretation, particularly in the early stages of an incident when information is incomplete and evolving.

Consider a scenario that is not at all hypothetical: a financial entity experiences a disruption to an internal system that processes client transaction data. The disruption lasts three hours. During that window, a subset of transactions is delayed but not lost, and no client data is compromised. The system is restored without material financial loss. Is this a "major" incident? The answer is not immediately obvious. It depends on how you count affected clients (direct users of the system, or all clients whose transactions were in the pipeline?), how you assess "criticality" of the impacted service, and whether the three-hour duration crosses the applicable threshold when weighed against the other criteria.

These are not abstract questions. They are the kinds of real-time judgments that incident response teams and compliance officers must make under pressure, often with incomplete information and tight reporting deadlines. The initial notification for a major incident must be submitted within four hours of classifying the incident as major, and in any event no later than twenty-four hours after the entity becomes aware of it, which leaves very little time for deliberation.
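To illustrate how tight those windows are, the following sketch computes the binding deadline for the initial notification from the detection and classification timestamps. The four-hour and twenty-four-hour figures reflect the timeline just described; any organization should confirm the exact values against the technical standards that apply to it.

```python
from datetime import datetime, timedelta


def initial_notification_deadline(detected_at: datetime, classified_at: datetime) -> datetime:
    """Return the latest permissible time for the initial major-incident notification.

    Assumes the timeline described above: four hours from classification as
    'major', and in any event no later than twenty-four hours from detection.
    Confirm the exact figures against the applicable technical standards.
    """
    from_classification = classified_at + timedelta(hours=4)
    from_detection = detected_at + timedelta(hours=24)
    return min(from_classification, from_detection)


# Example: an incident detected at 09:00 and classified as major at 20:00
# must be notified by midnight; the four-hour clock from classification
# binds before the twenty-four-hour clock from detection does.
detected = datetime(2025, 3, 3, 9, 0)
classified = datetime(2025, 3, 3, 20, 0)
print(initial_notification_deadline(detected, classified))  # 2025-03-04 00:00:00
```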

What I have found most helpful in practice is to invest heavily in preparedness. Develop detailed internal classification playbooks that translate DORA's criteria into decision trees tailored to your organization's specific systems and services. Run tabletop exercises that force your teams to apply those playbooks to realistic scenarios. And establish clear escalation protocols so that classification decisions are made — and documented — at the appropriate level of seniority. The goal is not to eliminate gray areas, because that is not possible, but to ensure that your organization has a defensible, well-reasoned process for navigating them.
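A playbook of that kind can be as simple as an explicitly coded decision rule. The sketch below shows one possible shape for it; the thresholds, severity weights, and escalation labels are placeholders I have invented for illustration, not the quantitative thresholds set out in the delegated regulations.

```python
from dataclasses import dataclass


@dataclass
class IncidentFacts:
    """Facts gathered during initial triage of an ICT-related incident."""
    clients_affected: int
    duration_hours: float
    data_lost: bool
    data_confidentiality_breached: bool
    critical_service_impacted: bool
    estimated_loss_eur: float


# Placeholder thresholds for illustration only: each organization must derive
# its own from the classification criteria and the delegated regulations.
CLIENT_THRESHOLD = 1_000
DURATION_THRESHOLD_HOURS = 2.0
LOSS_THRESHOLD_EUR = 100_000.0


def classify(facts: IncidentFacts) -> tuple[str, list[str]]:
    """Return a provisional classification and the reasons that drove it.

    Any data loss or confidentiality breach on a critical service escalates
    immediately; otherwise the incident is weighed against multiple criteria.
    """
    reasons: list[str] = []

    if facts.critical_service_impacted and (facts.data_lost or facts.data_confidentiality_breached):
        reasons.append("data loss or confidentiality breach on a critical service")
        return "major - escalate for reporting", reasons

    criteria_met = 0
    if facts.clients_affected >= CLIENT_THRESHOLD:
        criteria_met += 1
        reasons.append(f"{facts.clients_affected} clients affected")
    if facts.duration_hours >= DURATION_THRESHOLD_HOURS and facts.critical_service_impacted:
        criteria_met += 1
        reasons.append(f"{facts.duration_hours}h disruption to a critical service")
    if facts.estimated_loss_eur >= LOSS_THRESHOLD_EUR:
        criteria_met += 1
        reasons.append(f"estimated loss of EUR {facts.estimated_loss_eur:,.0f}")

    if criteria_met >= 2:
        return "major - escalate for reporting", reasons
    if criteria_met == 1:
        return "borderline - escalate to senior review", reasons
    return "not major - document and monitor", reasons
```

The value of writing the rule down this way is less the automation than the audit trail: the returned reasons become the documented basis for the classification decision made under pressure.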

The "When in Doubt, Report" Dilemma

This brings me to what I have come to think of as the central tension in DORA's incident reporting regime: the question of whether to report when you are genuinely unsure whether a threshold has been met.

There is a natural — and, I believe, understandable — instinct among compliance professionals to err on the side of over-reporting. The logic is straightforward: the reputational and regulatory consequences of failing to report a major incident are severe, while the consequences of submitting a report that turns out, in hindsight, to have been unnecessary are comparatively mild. Regulators, for their part, have generally signaled that they would rather receive a notification that proves to be a false alarm than discover after the fact that a reportable incident was not escalated. This creates an implicit "when in doubt, report" norm that many practitioners are internalizing.

But this norm is not without its own risks. Over-reporting can flood supervisory authorities with notifications, diluting the signal-to-noise ratio and potentially undermining the efficiency of the reporting framework. It can also create internal fatigue within the organization, as teams expend time and resources preparing reports for incidents that do not ultimately warrant them. And there is a subtler concern as well: if an entity routinely submits initial notifications that are later downgraded or withdrawn, it may raise questions about the rigor of its classification process. An over-reporting pattern can, paradoxically, signal the same underlying deficiency as under-reporting — namely, that the organization lacks a robust methodology for assessing incident severity.

The pragmatic path, in my experience, lies in building a classification framework that is transparent, consistently applied, and well-documented. When you do decide to report out of an abundance of caution, say so explicitly in the notification. Frame it as a precautionary submission pending further analysis, and commit to providing a prompt update once the assessment is complete. This approach demonstrates both diligence and maturity. It tells the regulator that you take the obligation seriously, that you have a process, and that you are exercising judgment in good faith.

Equally important is the internal feedback loop. After each incident — whether reported or not — conduct a post-incident review that evaluates the classification decision in light of the full facts. Use these reviews to refine your playbooks, calibrate your thresholds, and build institutional knowledge. Over time, this iterative process will reduce the frequency of genuinely ambiguous cases and give your teams greater confidence in their real-time judgments.

Key Takeaways for Practitioners

Reflecting on the implementation journey so far, a few overarching lessons stand out.

First, take the definitions seriously. DORA's definitions are not boilerplate. They are load-bearing provisions that determine the scope and intensity of your compliance obligations. Invest the time to analyze them carefully, document your interpretive positions, and revisit them as regulatory guidance evolves.

Second, treat the register of information as a strategic asset, not just a compliance obligation. A well-maintained register gives you visibility into your ICT dependency landscape, supports your third-party risk management program, and positions you well for supervisory engagement. A poorly maintained register, by contrast, is a liability.

Third, prepare for incident classification before incidents happen. The time to debate whether a particular scenario constitutes a "major" incident is not during the incident itself. Build your playbooks, run your exercises, and establish your escalation protocols now.

Fourth, develop a principled approach to the reporting threshold. Neither reflexive over-reporting nor aggressive under-reporting serves your interests. Aim for a transparent, well-documented classification process that allows you to exercise good-faith judgment and demonstrate that judgment to your regulator.

Finally, remember that DORA implementation is not a destination — it is an ongoing process. The regulatory framework is still maturing, supervisory expectations are still forming, and the operational environment is constantly changing. The organizations that will navigate this landscape most effectively are those that build adaptive, learning-oriented compliance programs rather than rigid, point-in-time solutions.

DORA is, at its core, about resilience. The same principle should apply to the compliance programs we build to meet it.