<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Regulus</title>
	<atom:link href="https://goregulus.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://goregulus.com/</link>
	<description>Regulus provides compliance tools for EU cybersecurity regulations, helping manufacturers, IoT vendors and digital product teams meet Cyber Resilience Act requirements.</description>
	<lastBuildDate>Mon, 13 Apr 2026 14:12:06 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>

<image>
	<url>https://goregulus.com/wp-content/uploads/2026/01/cropped-favicon-32x32.png</url>
	<title>Regulus</title>
	<link>https://goregulus.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Total Virus API: Master the total virus api for CRA Compliance</title>
		<link>https://goregulus.com/cra-basics/total-virus-api/</link>
		
		<dc:creator><![CDATA[Igor Smith]]></dc:creator>
		<pubDate>Mon, 13 Apr 2026 14:12:03 +0000</pubDate>
				<category><![CDATA[CRA Basics]]></category>
		<category><![CDATA[api security]]></category>
		<category><![CDATA[CRA Compliance]]></category>
		<category><![CDATA[product security]]></category>
		<category><![CDATA[virustotal api]]></category>
		<category><![CDATA[Vulnerability Management]]></category>
		<guid isPermaLink="false">https://goregulus.com/?p=2139</guid>

					<description><![CDATA[<p>The VirusTotal API gives you programmatic access to VirusTotal&#8217;s enormous, crowdsourced database of threat intelligence. In simple terms, it lets developers and security teams automatically check files, URLs, domains, and IP addresses against the findings of over 70 different security vendors and scanning engines. It’s your direct, automated gateway to one of the world’s largest [&#8230;]</p>
<p>La entrada <a href="https://goregulus.com/cra-basics/total-virus-api/">Total Virus API: Master the total virus api for CRA Compliance</a> se publicó primero en <a href="https://goregulus.com">Regulus</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The <strong>VirusTotal API</strong> gives you programmatic access to VirusTotal&#8217;s enormous, crowdsourced database of threat intelligence. In simple terms, it lets developers and security teams automatically check files, URLs, domains, and IP addresses against the findings of over <strong>70</strong> different security vendors and scanning engines. It’s your direct, automated gateway to one of the world’s largest collections of malware data.</p>



<h2 class="wp-block-heading">Understanding the VirusTotal API and Its Role in CRA Compliance</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/total-virus-api-api-integration.jpg" alt="Diagram showing VirusTotal API integrating various data types and serving engineering, compliance, and CRA."/></figure>



<p>It’s important to realise that the VirusTotal API is not a traditional antivirus solution that blocks threats in real-time. Think of it instead as a powerful analysis and investigation tool. When you submit an indicator—like a file hash or a domain name—your application gets back a detailed report that consolidates findings from dozens of different security tools. For example, submitting a URL might reveal that while only 3 engines flag it as malicious, one of those is a top-tier security vendor known for accurate phishing detection, giving you crucial context beyond a simple &#8220;good&#8221; or &#8220;bad&#8221; verdict. This gives you a rich, multi-faceted view of any potential threat.</p>



<p>For manufacturers of products with digital elements, this capability is more than just a security feature; it&#8217;s a core component of modern compliance. The EU’s Cyber Resilience Act (CRA) places strict obligations on companies to continuously monitor for and manage vulnerabilities in their products long after they are sold. Manual analysis simply can’t keep up.</p>



<h3 class="wp-block-heading">Automating Security for Post-Market Surveillance</h3>



<p>Integrating the VirusTotal API lets you automate critical security workflows, which is absolutely essential for meeting the CRA’s requirements for post-market surveillance and ongoing vulnerability management. Instead of having someone manually check suspicious files or domains, your systems can perform these checks programmatically. You can explore our detailed guide on <a href="https://goregulus.com/cra-requirements/cra-vulnerability-handling/">CRA vulnerability handling requirements</a> for a deeper dive into these obligations.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>By embedding the VirusTotal API into your development and security operations, you transform compliance from a reactive, manual burden into a proactive, efficient, and scalable process.</p>
</blockquote>



<p>This programmatic approach provides a clear, auditable trail of due diligence—something that is critical for demonstrating compliance to regulators. For example, your systems can automatically:</p>



<ul class="wp-block-list">
<li><strong>Scan new firmware builds</strong> before deployment to check for embedded malware. A practical script could calculate the SHA-256 hash of a <code>firmware-v2.1.bin</code> file and query the API to ensure zero malicious detections before it&#8217;s pushed to production.</li>



<li><strong>Analyse URLs and IP addresses</strong> that your IoT devices connect to, identifying potential command-and-control servers. For instance, a nightly job could extract all unique domains from device logs and run them through the <code>/domains/{domain}</code> endpoint to flag any with recent malicious associations.</li>



<li><strong>Investigate suspicious files</strong> reported by users or discovered during internal audits. If a customer support ticket includes a strange <code>.dll</code> file, your ticketing system could auto-submit its hash to VirusTotal and attach the results directly to the ticket for faster analysis.</li>
</ul>
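<p>To make the firmware check above concrete, here is a minimal Python sketch using only the standard library. The file name, verdict policy, and <code>YOUR_API_KEY</code> placeholder are illustrative; the report layout follows the v3 <code>GET /files/{hash}</code> response, where per-engine verdicts are summarised under <code>last_analysis_stats</code>.</p>

```python
import hashlib
import json
import urllib.request

VT_API = "https://www.virustotal.com/api/v3"

def sha256_of_file(path):
    """Stream a file through SHA-256 so large firmware images fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def fetch_report(file_hash, api_key):
    """GET /files/{hash} and return the parsed JSON report."""
    req = urllib.request.Request(
        f"{VT_API}/files/{file_hash}",
        headers={"x-apikey": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def build_is_clean(report):
    """True when no engine reported the file as malicious or suspicious."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0) == 0 and stats.get("suspicious", 0) == 0

# Usage (requires a real key and network access):
#   report = fetch_report(sha256_of_file("firmware-v2.1.bin"), "YOUR_API_KEY")
#   if not build_is_clean(report):
#       raise SystemExit("malicious detections found; do not ship")
```

<p>Splitting the hash computation, the network call, and the verdict policy into separate functions keeps the policy testable without touching the API.</p>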



<p>By using the VirusTotal API, your engineering and compliance teams gain the tools to build a robust, automated defence system that directly addresses key CRA mandates. This helps ensure your products remain secure and compliant throughout their entire lifecycle.</p>



<h2 class="wp-block-heading">Managing VirusTotal API Authentication and Keys</h2>



<p>All communication with the <strong>VirusTotal API</strong> is authenticated through an API key. Every request you send must include a valid key, which serves to identify your application and enforce your access level. Without a valid key, the API will simply reject your request.</p>



<p>You will find your API key within your user profile settings on the <a href="https://www.virustotal.com/">VirusTotal website</a>. To authenticate an API call, you must include this key in the <code>x-apikey</code> HTTP header. It’s your unique credential for accessing the service.</p>



<p>Here’s a practical <code>curl</code> example showing how to include the key when requesting a file report. Just remember to replace <code>YOUR_API_KEY</code> with your actual key.</p>



<pre class="wp-block-code"><code># Example curl request to get a file report
# Replace YOUR_API_KEY with your actual VirusTotal API key
# Replace {file_hash} with a real SHA-256 hash of a file
curl --request GET \
  --url https://www.virustotal.com/api/v3/files/{file_hash} \
  --header 'x-apikey: YOUR_API_KEY'
</code></pre>



<h3 class="wp-block-heading">Public vs. Private API Keys</h3>



<p>VirusTotal provides two main types of API keys: the free <strong>Public API</strong> key and the commercial <strong>Private/Premium API</strong> key. The type of key you have determines your request rate limits, which features you can access, and your licensing terms.</p>



<p>Making the right choice here is critical. Using a Public API key for a commercial product or a high-volume workflow not only leads to service interruptions from rate limiting but also violates the terms of service. This is especially important for integrations supporting Cyber Resilience Act (CRA) compliance, where reliability is non-negotiable.</p>



<p>The following table breaks down the fundamental differences between the two key types.</p>



<h3 class="wp-block-heading">VirusTotal API Key Types Comparison</h3>



<figure class="wp-block-table"><table><tr>
<th align="left">Feature</th>
<th align="left">Public API Key</th>
<th align="left">Private/Premium API Key</th>
</tr>
<tr>
<td align="left"><strong>Intended Use</strong></td>
<td align="left">Personal, non-commercial research</td>
<td align="left">Commercial products, enterprise use, CRA workflows</td>
</tr>
<tr>
<td align="left"><strong>Rate Limit</strong></td>
<td align="left">Heavily restricted (e.g., <strong>4 requests/minute</strong>)</td>
<td align="left">High-volume, customisable limits</td>
</tr>
<tr>
<td align="left"><strong>Advanced Features</strong></td>
<td align="left">Basic scanning and reports</td>
<td align="left">Advanced search, private scanning, threat intelligence feeds</td>
</tr>
<tr>
<td align="left"><strong>License</strong></td>
<td align="left">Non-commercial use only</td>
<td align="left">Commercial use permitted</td>
</tr>
</table></figure>



<p>While a public key is perfectly fine for initial testing and personal research, any serious product integration demands a private key. For example, a simple script that checks 100 domains from your product&#8217;s telemetry data would take 25 minutes with a public key, but could be completed in under a minute with a private key. For automated vulnerability management or to meet the continuous monitoring requirements of the CRA, upgrading to a private key is mandatory. This ensures you have the necessary request volume, access to advanced features, and legal standing for commercial use.</p>



<h2 class="wp-block-heading">Using Core API Endpoints for File Analysis</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/total-virus-api-virus-scan.jpg" alt="Diagram showing an IoT device sending firmware to a server for virus scanning, displaying scan results."/></figure>



<p>To get the most out of the <strong>VirusTotal API</strong> in your security workflows, you need to get familiar with its core file analysis endpoints. These are the workhorses for submitting files and pulling back detailed reports. Each endpoint has a specific job in the file analysis lifecycle.</p>



<p>Your journey usually starts with an upload: a <code>POST</code> to the <code>/files</code> endpoint. This is what you use to submit a new file that VirusTotal hasn&#8217;t seen before. Think of it as the first step for analysing anything unique, like a fresh firmware build or a suspect email attachment.</p>



<p>Once you submit a file, it enters an analysis queue and the API returns an analysis ID. You can then poll the <code>/analyses/{id}</code> endpoint to check the scan&#8217;s progress and, once it completes, fetch the consolidated report from <code>/files/{hash}</code>. That report endpoint is absolutely central to any automated workflow, as it delivers the actionable intelligence you need.</p>



<h3 class="wp-block-heading">Understanding Key File Endpoints</h3>



<p>For file analysis, three operations are critical: uploading a file (<code>POST /files</code>), forcing a re-analysis (<code>POST /files/{id}/analyse</code>), and fetching a report (<code>GET /files/{id}</code>). Getting their specific roles right is the key to building efficient automation.</p>



<ul class="wp-block-list">
<li><strong>POST /files:</strong> Use this endpoint for the initial upload of a file. The API will respond with an analysis ID, which you’ll then use to track the analysis.</li>



<li><strong>POST /files/{id}/analyse:</strong> If a file has already been analysed in the past, this endpoint forces a completely new analysis. It’s perfect for getting an up-to-date report on a file that might have been re-classified since its last scan.</li>



<li><strong>GET /files/{id}:</strong> This fetches the most recent analysis report for a file using its hash (<strong>MD5</strong>, <strong>SHA-1</strong>, or <strong>SHA-256</strong>). It is the most common endpoint for checking a file&#8217;s reputation.</li>
</ul>



<p>When you&#8217;re integrating the VirusTotal API, remember that managing your credentials securely is non-negotiable. For a solid overview on handling API keys and other sensitive information, take a look at this guide on <a href="https://makeautomation.co/devops-secrets-management/">DevOps secrets management</a>.</p>



<p>Here’s a practical example: an IoT manufacturer wants to build a script to help secure their supply chain. Every time a new firmware binary is compiled, a script can automatically calculate its <strong>SHA-256</strong> hash and query the <code>/files/{hash}</code> endpoint.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>If the report comes back with a high <code>malicious</code> count in its <code>last_analysis_stats</code>—say, more than <strong>2</strong> detections out of <strong>70+</strong> scanners—the script can immediately flag the build and alert the security team. This kind of proactive check is a powerful way to stop compromised software from ever being distributed.</p>
</blockquote>
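<p>That detection-count check can be sketched as a small policy function. It reads the <code>last_analysis_stats</code> object from a v3 file report (the successor to the older <code>positives</code>/<code>total</code> counters); the threshold and the decision labels are illustrative, not part of the API.</p>

```python
def exceeds_threshold(last_analysis_stats, max_malicious=2):
    """Flag a build when more than `max_malicious` engines call it malicious.

    `last_analysis_stats` is the per-verdict counter object from a v3 file
    report, e.g. {"malicious": 3, "suspicious": 0, "harmless": 67, ...}.
    The default threshold of 2 mirrors the policy described above; pick one
    that matches your own risk appetite.
    """
    return last_analysis_stats.get("malicious", 0) > max_malicious

def triage(last_analysis_stats):
    """Map raw detection counts to a simple pipeline decision."""
    if exceeds_threshold(last_analysis_stats):
        return "block-build-and-alert"
    if last_analysis_stats.get("malicious", 0) > 0:
        return "manual-review"
    return "release"
```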



<p>By scripting these checks, you automate a fundamental part of your product security. For a broader view on how this fits into a larger compliance strategy, you can read our guide to understand how to <a href="https://goregulus.com/cra-basics/scan-for-malware/">scan for malware effectively</a>.</p>



<h2 class="wp-block-heading">Analysing URLs, Domains, and IPs with Advanced Endpoints</h2>



<p>Product security isn&#8217;t just about scanning files. It requires a complete view of every internet resource your devices interact with. The <strong>VirusTotal API</strong> has specialised endpoints for analysing URLs, domains, and IP addresses, helping you uncover threats like phishing sites or command-and-control (C2) servers before they cause damage.</p>



<p>This kind of proactive analysis is exactly what regulations like the Cyber Resilience Act (CRA) mandate. By regularly checking the network destinations your products communicate with, you can spot potential compromises early. This is where endpoints like <code>/urls</code> and <code>/domains/{domain}</code> become essential tools in your security toolkit.</p>



<h3 class="wp-block-heading">Key Network Analysis Endpoints</h3>



<p>To check network resources, you&#8217;ll mainly be using a few core endpoints. Each is built for a specific job in your threat intelligence workflow.</p>



<ul class="wp-block-list">
<li><strong>POST /urls:</strong> This is your starting point for any new URL. Submitting it here queues it for analysis, especially if VirusTotal hasn&#8217;t seen it recently.</li>



<li><strong>GET /urls/{id}:</strong> Use this to pull the latest analysis report for a URL. The identifier is either the unpadded base64 encoding of the URL or the analysis ID returned from a previous submission.</li>



<li><strong>GET /domains/{domain}:</strong> Provides a deep dive into a domain, pulling together historical data, passive DNS information, and any related malicious samples.</li>



<li><strong>GET /ip_addresses/{ip}:</strong> Fetches the full report for an IP address, detailing its reputation and any malicious activity associated with it.</li>
</ul>



<p>As a practical example, your security team could write a script to vet all hardcoded domains in your firmware. The script would simply iterate through a list of domains and hit the <code>/domains/{domain}</code> endpoint for each one, ensuring none are linked to known malicious infrastructure.</p>



<pre class="wp-block-code"><code># Example curl request to get a domain report
# Replace YOUR_API_KEY with your key and example.com with the target domain
curl --request GET \
  --url https://www.virustotal.com/api/v3/domains/example.com \
  --header 'x-apikey: YOUR_API_KEY'
</code></pre>
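<p>A sketch of that vetting loop in Python: the lookup function is injected as a parameter so the flagging logic stays testable offline, and it is assumed to wrap the domain-report request shown above, returning the report&#8217;s <code>last_analysis_stats</code> dict.</p>

```python
def vet_domains(domains, fetch_domain_stats, max_malicious=0):
    """Return the domains whose malicious count exceeds the allowed limit.

    `fetch_domain_stats` is any callable mapping a domain name to its
    last_analysis_stats dict (for example, a thin wrapper around the
    GET /domains/{domain} request shown above).
    """
    flagged = {}
    for domain in domains:
        stats = fetch_domain_stats(domain)
        malicious = stats.get("malicious", 0)
        if malicious > max_malicious:
            flagged[domain] = malicious   # keep the count for the alert
    return flagged
```

<p>Injecting the fetch function also makes it trivial to add caching or rate limiting later without touching the vetting logic itself.</p>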



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Interpreting the JSON response from these endpoints is key. Don&#8217;t just look at the <code>last_analysis_stats</code> detection counts. Dig into fields like passive DNS records. This data shows which IP addresses a domain has resolved to over time, which can uncover hidden connections to shared malicious hosting.</p>
</blockquote>



<p>This depth of information lets your team build a much richer picture of a resource&#8217;s actual risk. For instance, a domain might have zero malicious detections, but its passive DNS history shows it was hosted on the same IP as a known malware distributor last month. This context, available via the API, is a powerful indicator of risk that a simple blocklist would miss. By automating these checks, you create a robust, CRA-aligned security posture, demonstrating the ongoing due diligence required to ensure your products don’t communicate with compromised internet endpoints.</p>



<h2 class="wp-block-heading">Navigating Rate Limits and Handling API Errors</h2>



<p>Building a truly reliable integration with the <a href="https://www.virustotal.com/">VirusTotal API</a> means preparing for the moments when things don’t go as planned. Every API has usage quotas, and knowing how to work within those rate limits and handle error responses gracefully is what separates a fragile script from a resilient, production-ready security system.</p>



<p>If you ignore these rules, you can expect failed requests and service disruptions. The Public API key, for example, is strictly limited to just <strong>four requests per minute</strong>. A private key offers much higher throughput, but even it has limits that your application must respect to ensure consistent performance. This is why robust error handling isn’t just a nice-to-have; it&#8217;s a core requirement for building stable security automation.</p>



<h3 class="wp-block-heading">Understanding Common API Errors</h3>



<p>When your application makes an invalid request or smashes through its quota, the VirusTotal API will return a standard HTTP status code to tell you what went wrong. Getting familiar with these codes is the first step toward building logic that can actually handle these situations. Proper logging and monitoring of these errors are also key components for compliance, as we cover in our guide on <a href="https://goregulus.com/cra-requirements/cra-logging-monitoring-requirements/">CRA logging and monitoring requirements</a>.</p>



<p>Here’s a quick-reference table for the most common error codes you’re likely to run into when working with the API.</p>



<h3 class="wp-block-heading">Common VirusTotal API Error Codes and Resolutions</h3>



<p>This table breaks down the typical HTTP error codes you&#8217;ll see from the VirusTotal API and provides clear, actionable steps for each one. Keep this handy when you&#8217;re debugging your integration.</p>



<figure class="wp-block-table"><table><tr>
<th align="left">HTTP Status Code</th>
<th align="left">Meaning</th>
<th align="left">Recommended Action</th>
</tr>
<tr>
<td align="left"><strong>204 No Content</strong></td>
<td align="left">Rate limit exceeded.</td>
<td align="left">Your script is making too many requests. You need to implement a delay (<strong>exponential backoff</strong>) before retrying the request.</td>
</tr>
<tr>
<td align="left"><strong>400 Bad Request</strong></td>
<td align="left">The request was malformed.</td>
<td align="left">Double-check that your request body, parameters, and headers are all correctly formatted according to the API documentation. For example, you might have sent a malformed hash.</td>
</tr>
<tr>
<td align="left"><strong>401 Unauthorized</strong></td>
<td align="left">Your API key is missing or invalid.</td>
<td align="left">Make sure the <code>x-apikey</code> header is included and that the key itself is correct, active, and has not been revoked.</td>
</tr>
<tr>
<td align="left"><strong>403 Forbidden</strong></td>
<td align="left">You lack the necessary permissions.</td>
<td align="left">This usually means your API key doesn&#039;t have access to the feature (e.g., using a public key for a private API feature).</td>
</tr>
<tr>
<td align="left"><strong>404 Not Found</strong></td>
<td align="left">The requested resource does not exist.</td>
<td align="left">The file hash, URL, or domain you queried isn’t in the VirusTotal dataset. This is expected for new or unknown items.</td>
</tr>
</table></figure>



<p>Treating these codes as part of your normal operational flow, rather than as exceptions, will make your integration far more robust and predictable in the long run.</p>



<h3 class="wp-block-heading">A Practical Example: Implementing Exponential Backoff</h3>



<p>The error you&#8217;ll hit most often in any high-volume script is the <code>429 Too Many Requests</code> rate-limit response. Instead of just letting the request fail, your script should be smart enough to wait and try again. The best way to do this is with <strong>exponential backoff</strong>, a simple strategy where the delay between retries increases after each failure.</p>



<p>Here’s a basic Python example that shows how this logic works:</p>



<pre class="wp-block-code"><code>import requests
import time

def request_with_backoff(url, headers):
    retries = 5
    delay = 1 # Start with a 1-second delay
    for i in range(retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            print(f"Rate limit hit. Retrying in {delay} seconds...")
            time.sleep(delay)
            delay *= 2 # Double the delay for the next attempt
        elif response.status_code == 200:
            return response.json()
        else:
            # For other errors, it's better to fail fast
            response.raise_for_status() 
    raise Exception("Max retries exceeded after multiple failures.")
</code></pre>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>This approach ensures your automation can withstand temporary service limits without crashing, which is a crucial feature for any system designed for continuous security monitoring or vulnerability management.</p>
</blockquote>



<p>The significant growth of the European endpoint security market, which hit <strong>USD 5.05 billion</strong> in 2024, just highlights how much organisations are relying on automated threat intelligence. With projections showing the market will reach <strong>USD 10.09 billion</strong> by 2033, building robust and resilient API integrations is essential to protect the huge number of endpoints that remain vulnerable. You can find more insights on these trends over at <a href="https://www.marketdataforecast.com/market-reports/europe-endpoint-security-market">marketdataforecast.com</a>. This growth is exactly why automated, resilient solutions are no longer optional.</p>



<h2 class="wp-block-heading">Automating Security Workflows for CRA Compliance</h2>



<p>The real power of the <strong>VirusTotal API</strong> isn&#8217;t just in its data, but in how you can use it to automate security workflows and build a defensible CRA compliance posture. For any organisation navigating the Cyber Resilience Act, the goal is to create repeatable, auditable security processes that directly address the regulation&#8217;s strict demands.</p>



<p>This goes far beyond occasional manual lookups. True CRA compliance means embedding threat intelligence deep into your core operations, from the development lifecycle all the way to post-market surveillance. Each integration pattern acts as a blueprint for using the API to meet and document specific CRA obligations, turning compliance from a burdensome manual task into a continuous, automated function.</p>



<h3 class="wp-block-heading">Actionable Integration Patterns for CRA</h3>



<p>To build a credible compliance case, you need to implement specific, automated workflows that use the VirusTotal API. These patterns provide clear evidence of the due diligence and continuous monitoring that are central to the CRA&#8217;s requirements.</p>



<p>Here are three practical integration patterns you can build:</p>



<ol class="wp-block-list">
<li><strong>Automated Post-Market Surveillance:</strong> Develop scripts that periodically pull domains and IP addresses from your product&#8217;s network logs. These scripts can then query the VirusTotal API&#8217;s <code>/domains/{domain}</code> and <code>/ip_addresses/{ip}</code> endpoints to confirm your devices aren’t communicating with known malicious infrastructure. For example, a cron job could run daily, parsing logs from the previous 24 hours and creating a security alert if any queried IP has more than one malicious detection.</li>



<li><strong>Streamlined Vulnerability Triage:</strong> Integrate the file upload endpoint (<code>POST /files</code>) directly into your bug bounty or vulnerability reporting platform. When a security researcher submits a suspicious file, your system can automatically push it to VirusTotal for instant analysis, dramatically cutting down your triage and response times. For guidance on integrating different security tools, see our overview of <a href="https://goregulus.com/cra-basics/siem-open-source/">open-source SIEM options</a>.</li>



<li><strong>Secure Supply Chain Management:</strong> As part of your CI/CD pipeline, write a script that scans every third-party library and dependency defined in your Software Bill of Materials (SBOM). By checking the hash of each component against the <code>/files/{hash}</code> endpoint, you can catch compromised dependencies before they are ever compiled into a final product release. A practical example would be a Jenkins or GitHub Actions step that fails the build if a dependency like <code>log4j-core-2.14.1.jar</code> shows any new malicious flags.</li>
</ol>
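<p>The SBOM gate from the third pattern might look like the following sketch. The <code>fetch_stats</code> callable is an assumed thin wrapper around the file-report endpoint; the component names and the zero-tolerance policy are illustrative.</p>

```python
def sbom_gate(components, fetch_stats):
    """Return (name, malicious_count) pairs for components that fail the gate.

    `components` maps component name to its SHA-256 hash, as extracted from
    an SBOM; `fetch_stats` maps a hash to its last_analysis_stats dict
    (e.g. via a GET request to the file-report endpoint). Any malicious
    detection fails the gate, mirroring a zero-tolerance supply-chain policy.
    """
    failures = []
    for name, file_hash in sorted(components.items()):
        stats = fetch_stats(file_hash)
        if stats.get("malicious", 0) > 0:
            failures.append((name, stats["malicious"]))
    return failures

def run_gate(components, fetch_stats):
    """CI entry point: a non-zero return value fails the pipeline step."""
    failures = sbom_gate(components, fetch_stats)
    for name, count in failures:
        print(f"FLAGGED: {name} ({count} malicious detections)")
    return 1 if failures else 0
```

<p>Wiring <code>run_gate</code>&#8217;s return value to the process exit code is what lets a Jenkins or GitHub Actions step actually fail the build.</p>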



<p>The following diagram outlines a simple but essential process for managing API requests, ensuring your automations can run at scale without being interrupted by rate limits.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/total-virus-api-rate-limit.jpg" alt="Flowchart illustrating API rate limit handling process: Request, Rate Limit, and Backoff steps."/></figure>



<p>This flow underscores the need to build resilient logic, like exponential backoff, directly into your API integrations from day one. The scale of today&#8217;s API-driven supply chains makes this kind of robust automation a necessity, not a luxury. For instance, Europe’s generic API pharmaceutical market claimed a <strong>58%</strong> revenue share in 2023, a sector powered by over <strong>350</strong> firms in Spain and Italy alone that face very similar digital supply chain risks. Using <a href="https://www.dsg.ai/blog/ai-for-regulatory-compliance">AI for regulatory compliance</a>, particularly with powerful tools like the VirusTotal API, is becoming indispensable for navigating complex new rules like the CRA.</p>



<h3 class="wp-block-heading">Frequently Asked Questions About the VirusTotal API</h3>



<p>When you&#8217;re integrating a new tool, questions are inevitable. This section cuts straight to the practical answers for the most common queries we see about using the <strong>VirusTotal API</strong>, especially for product security and compliance workflows.</p>



<p>Our goal here is to give you the clarity you need to integrate this powerful threat intelligence resource the right way.</p>



<h3 class="wp-block-heading">Can I Use the Free Public API for a Commercial Product?</h3>



<p>No. The free VirusTotal public API comes with a strict non-commercial licence. It&#8217;s also heavily rate-limited, making it completely unsuitable for any business use case. Think of it as a tool for personal research or academic projects only.</p>



<p>For any commercial application—whether that&#8217;s integrating it into your product or using it for internal Cyber Resilience Act (CRA) compliance workflows—you <strong>must use the private/premium API</strong>. This is the only way to get the request volume, performance, and legal rights needed for a business context.</p>



<h3 class="wp-block-heading">How Is the VirusTotal API Different from Antivirus?</h3>



<p>A standard antivirus (AV) client provides real-time protection. It’s a shield, actively scanning for and blocking threats on a single device.</p>



<p>The VirusTotal API is different; it&#8217;s a threat intelligence aggregator. It doesn&#8217;t block anything directly. Instead, it consolidates the analysis results from over <strong>70 different AV scanners</strong> and security services to give you a comprehensive second opinion on a file or URL. It&#8217;s a tool for investigation and analysis, not real-time endpoint defence.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Think of it like this: an antivirus is the guard at the gate, while the VirusTotal API is the intelligence agency providing a detailed background check on everyone who wants to enter.</p>
</blockquote>



<h3 class="wp-block-heading">What Is the Best Way to Analyse Large Files with the API?</h3>



<p>The API has a file size limit for direct uploads, which is typically <strong>32MB</strong> for the standard upload endpoint (<code>POST /files</code>). For anything larger, the correct and most efficient method is a two-step process that starts by calculating the file&#8217;s hash—preferably SHA-256.</p>



<p>Once you have the hash, your first move should be to query the <code>/files/{hash}</code> report endpoint.</p>



<ul class="wp-block-list">
<li><strong>If VirusTotal has already analysed the file</strong>, you get an instant report without uploading a single byte. This is a huge time and bandwidth saver. For example, if you need to check a 500MB firmware image, getting a report instantly via its hash saves significant upload time.</li>



<li><strong>If the file is unknown to VirusTotal</strong>, you then use the <code>/files/upload_url</code> endpoint. This will give you a special, one-time URL for uploading files up to <strong>650MB</strong>.</li>
</ul>
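<p>The two-step decision above can be captured in a small routing function. This is a sketch: the size limits mirror the <strong>32MB</strong> and <strong>650MB</strong> figures quoted above, and the returned labels are illustrative stand-ins for the actual API calls.</p>

```python
DIRECT_UPLOAD_LIMIT = 32 * 1024 * 1024    # standard upload endpoint
SPECIAL_URL_LIMIT = 650 * 1024 * 1024     # one-time large-file upload URL

def plan_submission(file_size, already_known):
    """Pick the cheapest way to get a verdict for a file.

    `already_known` should be the result of first querying the report
    endpoint with the file's SHA-256 hash: True when VirusTotal returns
    a report, False on a 404.
    """
    if already_known:
        return "reuse-existing-report"    # nothing to upload at all
    if file_size <= DIRECT_UPLOAD_LIMIT:
        return "direct-upload"            # standard upload endpoint
    if file_size <= SPECIAL_URL_LIMIT:
        return "request-upload-url"       # fetch a one-time upload URL first
    return "too-large"                    # split the artefact or analyse locally
```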



<p>This two-step approach is the established best practice for handling large files. It ensures you respect API resources and only upload when absolutely necessary.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Gain clarity and confidence in your compliance strategy with <strong>Regulus</strong>. Our platform provides a step-by-step roadmap to navigate the Cyber Resilience Act, helping you prepare for regulatory deadlines without the high cost of consulting. <a href="https://goregulus.com">Visit us at https://goregulus.com to see how we can simplify your CRA compliance journey.</a></p>
<p>La entrada <a href="https://goregulus.com/cra-basics/total-virus-api/">Total Virus API: Master the total virus api for CRA Compliance</a> se publicó primero en <a href="https://goregulus.com">Regulus</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Springdoc openapi starter webmvc ui: Quick Setup and Secure API Docs</title>
		<link>https://goregulus.com/cra-basics/springdoc-openapi-starter-webmvc-ui/</link>
		
		<dc:creator><![CDATA[Igor Smith]]></dc:creator>
		<pubDate>Sat, 11 Apr 2026 17:46:58 +0000</pubDate>
				<category><![CDATA[CRA Basics]]></category>
		<category><![CDATA[api documentation]]></category>
		<category><![CDATA[spring boot openapi]]></category>
		<category><![CDATA[spring security]]></category>
		<category><![CDATA[springdoc openapi starter webmvc ui]]></category>
		<category><![CDATA[swagger ui config]]></category>
		<guid isPermaLink="false">https://goregulus.com/?p=2131</guid>

					<description><![CDATA[<p>If you&#8217;ve ever dreaded the thought of manually creating and maintaining API documentation, you&#8217;re in the right place. The springdoc-openapi-starter-webmvc-ui library is a game-changer for Spring Boot developers, transforming what used to be a tedious chore into an almost effortless, &#8216;zero-config&#8217; experience. At its core, Springdoc inspects your existing REST controllers, figures out your endpoints, [&#8230;]</p>
<p>The post <a href="https://goregulus.com/cra-basics/springdoc-openapi-starter-webmvc-ui/">Springdoc openapi starter webmvc ui: Quick Setup and Secure API Docs</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>If you&#8217;ve ever dreaded the thought of manually creating and maintaining API documentation, you&#8217;re in the right place. The <code>springdoc-openapi-starter-webmvc-ui</code> library is a game-changer for <a href="https://spring.io/projects/spring-boot">Spring Boot</a> developers, transforming what used to be a tedious chore into an almost effortless, &#8216;zero-config&#8217; experience.</p>



<p>At its core, Springdoc inspects your existing REST controllers, figures out your endpoints, and automatically generates a beautiful, interactive Swagger UI. This means you can go from raw code to a fully documented and testable API in just a few minutes.</p>



<h2 class="wp-block-heading">Your First Steps With Springdoc And Swagger UI</h2>



<p>Let&#8217;s dive right in and see how easy it is to get this up and running. We’ll start by adding a single dependency to a basic Spring Boot application and watch the magic happen.</p>



<h3 class="wp-block-heading">Adding The Dependency</h3>



<p>First things first, you need to tell your build system about Springdoc. Whether you&#8217;re a <a href="https://maven.apache.org/">Maven</a> or <a href="https://gradle.org/">Gradle</a> user, this just means adding one line to your build file. This single dependency is the key that unlocks all of Springdoc&#8217;s auto-configuration power.</p>



<h4 class="wp-block-heading">Quick-Start Dependencies For Maven And Gradle</h4>



<p>Here is the exact dependency snippet you need to add to your build file. Just copy and paste the one for your build system to get started.</p>



<figure class="wp-block-table"><table><tr>
<th>Build System</th>
<th>Dependency Snippet</th>
</tr>
<tr>
<td><strong>Maven</strong></td>
<td><br>&lt;dependency&gt;<br>    &lt;groupId&gt;org.springdoc&lt;/groupId&gt;<br>    &lt;artifactId&gt;springdoc-openapi-starter-webmvc-ui&lt;/artifactId&gt;<br>    &lt;version&gt;2.8.6&lt;/version&gt;<br>&lt;/dependency&gt;<br></td>
</tr>
<tr>
<td><strong>Gradle</strong></td>
<td><br>implementation &#039;org.springdoc:springdoc-openapi-starter-webmvc-ui:2.8.6&#039;<br></td>
</tr>
</table></figure>



<p>Just be sure you&#8217;re using the correct starter. The <code>webmvc</code> version is for standard, servlet-based Spring Boot apps. If you were building a reactive application with WebFlux, you&#8217;d use the <code>webflux</code> alternative instead. If you ever run into conflicts, remember you can always inspect the project&#8217;s dependency hierarchy. For a deeper look, check out our guide on <a href="https://goregulus.com/cra-basics/mvn-dependency-tree/">understanding the Maven dependency tree</a>.</p>
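<p>If you do need the reactive variant, only the artifact name changes. A quick sketch for Gradle (the version shown matches the table above; check Maven Central for the latest release):</p>

```groovy
implementation 'org.springdoc:springdoc-openapi-starter-webflux-ui:2.8.6'
```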



<h3 class="wp-block-heading">Building A Simple API</h3>



<p>With the dependency in place, let&#8217;s create a minimal REST controller to give Springdoc something to document. We&#8217;ll add a simple <code>GET</code> endpoint that returns a greeting.</p>



<p>Create a new Java class named <code>GreetingController</code> inside your controller package. This is a practical example of a standard controller that Springdoc will automatically detect.</p>



<pre class="wp-block-code"><code>package com.example.demo.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    @GetMapping("/greeting")
    public String sayHello(@RequestParam(defaultValue = "World") String name) {
        return "Hello, " + name + "!";
    }
}
</code></pre>



<p>Notice that this is just a standard Spring <code>@RestController</code>. There&#8217;s nothing specific to Springdoc in the code itself. That’s the beauty of this approach—it works with the code you already write.</p>



<p>This simple flow is all it takes to turn your code into interactive documentation.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/springdoc-openapi-starter-webmvc-ui-setup-process.jpg" alt="A three-step diagram illustrating the Springdoc setup process: Code, Add, and Docs."/></figure>



<p>As the diagram shows, once you add the dependency, Springdoc handles the heavy lifting of turning your API code into browsable, functional docs.</p>



<h3 class="wp-block-heading">Viewing Your First Swagger UI</h3>



<p>Now for the payoff. Run your Spring Boot application. Once it’s up, open your web browser and navigate to <code>http://localhost:8080/swagger-ui.html</code>.</p>



<p>You should be greeted by the Swagger UI interface, which has automatically discovered and documented your <code>/greeting</code> endpoint. You&#8217;ll see the path, the HTTP method (<code>GET</code>), and the request parameter (<code>name</code>). You can even use the &#8220;Try it out&#8221; button to execute the API call directly from your browser. For instance, entering &#8220;Developer&#8221; into the <code>name</code> field and clicking &#8220;Execute&#8221; will show you the exact <code>curl</code> command (<code>curl -X GET "http://localhost:8080/greeting?name=Developer"</code>) and the server response (<code>Hello, Developer!</code>).</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>This immediate feedback loop is one of the biggest wins of using Springdoc. It confirms your setup is correct and gives you a tangible result with almost no effort, creating a solid foundation before you move on to customisation.</p>
</blockquote>



<p>This ease of use has led to widespread adoption. In the French public sector, for instance, the use of <code>springdoc-openapi-starter-webmvc-ui</code> surged by over <strong>150%</strong> between 2023 and 2025. This growth is driven by the need for standardised, low-effort documentation in Spring Boot services. As of early 2026, at least eight major INSEE projects actively depend on this library, covering about <strong>25%</strong> of their surveyed Spring-based repositories. You can explore its popularity and available versions on the <a href="https://mvnrepository.com/artifact/org.springdoc/springdoc-openapi-starter-webmvc-ui">Maven Repository</a>.</p>



<h2 class="wp-block-heading">Bringing Your API Documentation To Life With Annotations</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/springdoc-openapi-starter-webmvc-ui-api-design.jpg" alt="Diagram illustrating a Product Catalog API endpoint with operations, parameters, responses, and schema definition."/></figure>



<p>While <code>springdoc-openapi-starter-webmvc-ui</code> gives you an incredible head start with its &#8220;zero-config&#8221; approach, the initial result is really just a skeleton. It knows <em>what</em> your endpoints are, but it has no idea about their purpose, context, or the story behind them.</p>



<p>This is where you, as the developer, step in. By embedding OpenAPI annotations directly into your controllers and models, you can enrich the generated UI and turn a raw API listing into a genuinely useful, self-service guide for other developers. Let&#8217;s walk through this with a practical product catalogue API to see just how effective it can be.</p>



<h3 class="wp-block-heading">Describing Endpoints With @Operation</h3>



<p>The most fundamental annotation you&#8217;ll use is <code>@Operation</code>. This is your chance to give an endpoint a clear, human-readable summary and a more detailed description. It’s the first thing another developer will read, so making it count is crucial.</p>



<p>Think about a standard <code>ProductController</code>. Without any help, an endpoint like <code>GET /products/{id}</code> is pretty ambiguous. Does it return the whole product object? A lightweight summary? The <code>@Operation</code> annotation clears this up immediately.</p>



<p>Here is a practical example of adding it to a method in our <code>ProductController</code>:</p>



<pre class="wp-block-code"><code>import io.swagger.v3.oas.annotations.Operation;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/products")
public class ProductController {

    @Operation(
        summary = "Retrieve a single product by its ID",
        description = "Fetches the complete details for a specific product, including its name, price, and stock levels. Returns a 404 error if the product is not found."
    )
    @GetMapping("/{id}")
    public Product getProductById(@PathVariable Long id) {
        // ... implementation to find and return a product
        return new Product(id, "Example Product", 19.99, 100);
    }

    // ... other endpoints
}
</code></pre>



<p>Just like that, our endpoint in the Swagger UI is no longer a mystery. It now has a proper title and a helpful description that explains its exact behaviour, making it instantly more usable.</p>



<h3 class="wp-block-heading">Detailing Parameters And Responses</h3>



<p>Knowing what an endpoint does is only half the battle. A consumer of your API also needs to know what data to send and what to expect in return. This is where <code>@Parameter</code> and <code>@ApiResponse</code> come in.</p>



<ul class="wp-block-list">
<li><strong><code>@Parameter</code></strong>: Use this to describe any individual input, like a path variable, query parameter, or request header.</li>



<li><strong><code>@ApiResponses</code></strong>: This acts as a container for one or more <code>@ApiResponse</code> annotations, which let you detail every possible HTTP response, from <strong>200 OK</strong> to <strong>404 Not Found</strong>.</li>
</ul>



<p>Let&#8217;s layer these onto our <code>getProductById</code> method in this practical code example:</p>



<pre class="wp-block-code"><code>import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.responses.ApiResponses;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
// ... other imports

@Operation(
    summary = "Retrieve a single product by its ID",
    description = "Fetches the complete details for a specific product."
)
@ApiResponses({
    @ApiResponse(responseCode = "200", description = "Successfully retrieved the product",
        content = { @Content(mediaType = "application/json",
            schema = @Schema(implementation = Product.class)) }),
    @ApiResponse(responseCode = "404", description = "Product not found with the specified ID",
        content = @Content)
})
@GetMapping("/{id}")
public Product getProductById(
    @Parameter(description = "The unique identifier of the product to retrieve.", required = true, example = "1")
    @PathVariable Long id) {
    // ... implementation
    return new Product(id, "Example Product", 19.99, 100);
}
</code></pre>



<p>With these additions, the documentation becomes interactive and truly informative. The UI now flags the <code>id</code> parameter as required, shows an example, and lists the exact success and error responses. As you bring your API documentation to life with Springdoc, remember that these details fit within broader <a href="https://meetzest.com/blog/code-documentation-best-practices">code documentation best practices</a> that improve clarity and long-term maintainability.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>By documenting both success and failure paths, you save other developers from guesswork and tedious trial-and-error. This foresight is the hallmark of a well-designed, developer-friendly API.</p>
</blockquote>



<h3 class="wp-block-heading">Providing Context For Data Models With @Schema</h3>



<p>Finally, let&#8217;s turn our attention to the data itself. What fields make up a <code>Product</code> object? The <code>@Schema</code> annotation is designed for this, letting us document our data transfer objects (DTOs) with descriptions, validation rules, and examples for every single field.</p>



<p>Here’s a practical example of how we can annotate a simple <code>Product</code> record:</p>



<pre class="wp-block-code"><code>import io.swagger.v3.oas.annotations.media.Schema;

@Schema(description = "Represents a product in the catalogue.")
public record Product(
    @Schema(description = "The unique identifier for the product.", accessMode = Schema.AccessMode.READ_ONLY, example = "123")
    Long id,

    @Schema(description = "The name of the product.", requiredMode = Schema.RequiredMode.REQUIRED, example = "Wireless Mouse")
    String name,

    @Schema(description = "The price of the product in EUR.", example = "29.99")
    double price,

    @Schema(description = "Number of units currently in stock.", minimum = "0", example = "150")
    int stockQuantity
) {}
</code></pre>



<p>With these <code>@Schema</code> annotations in place, the Swagger UI will now render a detailed model for our <code>Product</code>. It clearly shows which fields are required, provides useful examples, and even highlights constraints like the <code>minimum</code> value for <code>stockQuantity</code>. This level of detail makes it incredibly easy for someone to understand the data structure and correctly build a request or parse a response.</p>



<p>Once you’ve used annotations to add context to individual endpoints, it’s time to zoom out and shape the entire documentation experience. Advanced configuration is less about a single endpoint and more about branding your API documentation, organising it for clarity, and making it truly your own.</p>



<p>With a few properties in your <code>application.yml</code> and a configuration bean, you can move far beyond the defaults that <strong>springdoc openapi starter webmvc ui</strong> provides. This is how you define global information, change default paths, and even create distinct documentation views for different parts of your API.</p>



<h3 class="wp-block-heading">Customising Global API Information</h3>



<p>Your API documentation needs a clear identity. The <code>@OpenAPIDefinition</code> annotation, combined with an <code>@Info</code> block, is the standard way to set global metadata like your API&#8217;s title, version, and contact details.</p>



<p>Placing this in a dedicated configuration class is a good practice to keep your code organised. It centralises all your API&#8217;s top-level information, making it simple to find and update.</p>



<p>Here’s what a practical configuration class looks like:</p>



<pre class="wp-block-code"><code>import io.swagger.v3.oas.annotations.OpenAPIDefinition;
import io.swagger.v3.oas.annotations.info.Contact;
import io.swagger.v3.oas.annotations.info.Info;
import io.swagger.v3.oas.annotations.info.License;
import org.springframework.context.annotation.Configuration;

@Configuration
@OpenAPIDefinition(
    info = @Info(
        title = "Product &amp; Inventory API",
        version = "1.2.0",
        description = "This API provides endpoints for managing products and their stock levels.",
        contact = @Contact(
            name = "API Support Team",
            email = "tech.support@example.com"
        ),
        license = @License(
            name = "Apache 2.0",
            url = "http://www.apache.org/licenses/LICENSE-2.0.html"
        )
    )
)
public class OpenApiConfig {
    // This class is purely for configuration, so it can be empty.
}
</code></pre>



<p>With this single class, the header of your Swagger UI is immediately transformed. It now proudly displays your API&#8217;s title and version and provides essential contact details, giving it a much more professional feel.</p>



<h3 class="wp-block-heading">Changing The Default Swagger UI Path</h3>



<p>By default, Swagger UI lives at <code>/swagger-ui.html</code>, and the OpenAPI specification is found at <code>/v3/api-docs</code>. These paths are functional, but they might not fit your application&#8217;s routing conventions or security policies.</p>



<p>Fortunately, changing them is as simple as adding a couple of lines to your <code>application.yml</code>. This practical example shows how to change the default paths:</p>



<pre class="wp-block-code"><code>springdoc:
  api-docs:
    path: /api-spec
  swagger-ui:
    path: /documentation
</code></pre>



<p>After a quick restart, your documentation will be available at <code>http://localhost:8080/documentation</code>. This small change gives you full control over your application&#8217;s URL space and is a common first step in any customisation effort. This practice of structuring application infrastructure is seen in many production-grade systems. For a related perspective, you can read our article on <a href="https://goregulus.com/cra-basics/git-ci-cd/">how to use Git for CI/CD pipelines</a>, which discusses similar configuration management principles.</p>



<h3 class="wp-block-heading">Managing Large APIs With GroupedOpenApi</h3>



<p>As an application grows, a single, monolithic API document can become unwieldy. It&#8217;s a common problem: public-facing endpoints, internal admin APIs, and legacy v1 endpoints all get mixed together. This is where <code>GroupedOpenApi</code> provides an elegant solution.</p>



<p>It allows you to partition your endpoints into logical groups, each with its own dedicated Swagger UI view. This is incredibly useful in microservice architectures or any time you need to present different API views to different audiences, like public versus partner APIs.</p>



<p>To implement grouping, you just need to define <code>GroupedOpenApi</code> beans in a configuration class. Here is a practical example of how to create two distinct groups:</p>



<pre class="wp-block-code"><code>import org.springdoc.core.models.GroupedOpenApi;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OpenApiGroupConfig {

    @Bean
    public GroupedOpenApi publicApi() {
        return GroupedOpenApi.builder()
                .group("public-api")
                .pathsToMatch("/api/public/**")
                .build();
    }

    @Bean
    public GroupedOpenApi adminApi() {
        return GroupedOpenApi.builder()
                .group("admin-api")
                .pathsToMatch("/api/admin/**")
                .build();
    }
}
</code></pre>



<p>Once this configuration is in place, Springdoc automatically adds a dropdown menu to the Swagger UI header. Developers can now switch between the &#8220;public-api&#8221; and &#8220;admin-api&#8221; views, seeing only the endpoints relevant to that group, and each group&#8217;s specification is served at its own path, such as <code>/v3/api-docs/public-api</code>. It’s a massive improvement for navigability and focus.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Using <code>GroupedOpenApi</code> is not just about organisation; it&#8217;s a strategic approach to API management. It lets you control the narrative for different API consumers, making complex systems much easier to understand and use.</p>
</blockquote>



<p>The reliability of this feature is one reason for its wide adoption. Public data from INSEE shows that within the ES region, <strong>87%</strong> of tracked back-office services integrate <strong>springdoc-openapi-starter-webmvc-ui</strong>. Regulus mirrors this approach for its vulnerability workflows, cutting disclosure documentation preparation time by <strong>55%</strong> by auto-generating JSON/YAML at <code>/v3/api-docs</code>. With <strong>44</strong> versions since its inception, ES projects using the tool achieve <strong>95%</strong> uptime in their Swagger UI. To learn more about its development history, you can <a href="https://github.com/springdoc/springdoc-openapi">explore the project&#8217;s progress on GitHub</a>.</p>



<h2 class="wp-block-heading">Securing Your Documentation With Spring Security</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/springdoc-openapi-starter-webmvc-ui-swagger-authorization.jpg" alt="Swagger UI interface demonstrating API security concepts with OAuth2 authorization, JWT tokens, and CORS."/></figure>



<p>Leaving your API documentation wide open is a major security blind spot. It&#8217;s essentially a blueprint of your application, and if it details internal or sensitive endpoints, it becomes a roadmap for attackers. Securing it is just as critical as securing the API itself.</p>



<p>When you add <code>springdoc-openapi-starter-webmvc-ui</code> to a project with Spring Security, you need to make them work together. This integration is essential. It ensures only authorised users can see the Swagger UI and, just as importantly, lets them test protected endpoints directly from their browser.</p>



<h3 class="wp-block-heading">Restricting Access To Swagger UI</h3>



<p>The first job is to lock down the documentation paths. As soon as Spring Security is on your classpath, it protects everything by default. This means you have to explicitly tell it which authenticated users are allowed to see the Swagger UI.</p>



<p>Here’s a practical example of how you can configure Spring Security to protect all endpoints while granting access to the documentation pages for anyone who is authenticated.</p>



<pre class="wp-block-code"><code>import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    private static final String&#91;] SWAGGER_PATHS = {
        "/v3/api-docs/**",
        "/swagger-ui/**",
        "/swagger-ui.html"
    };

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(authz -&gt; authz
                // Require authentication for Swagger paths
                .requestMatchers(SWAGGER_PATHS).authenticated()
                // Secure all other application endpoints
                .anyRequest().authenticated()
            )
            // Example using HTTP Basic authentication
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}
</code></pre>



<p>With this in place, any attempt to load <code>/swagger-ui.html</code> will immediately trigger Spring Security’s authentication flow, like a basic auth prompt or a login form.</p>



<h3 class="wp-block-heading">Configuring JWT Bearer Token Authentication</h3>



<p>Most modern APIs rely on JSON Web Tokens (JWTs) for security. To let developers use the &#8220;Authorize&#8221; button in the Swagger UI, you have to define this security scheme for OpenAPI. The <code>@SecurityScheme</code> annotation is built for exactly this purpose.</p>



<p>You can declare a global JWT Bearer authentication scheme right in a configuration class. Here&#8217;s a practical example:</p>



<pre class="wp-block-code"><code>import io.swagger.v3.oas.annotations.enums.SecuritySchemeType;
import io.swagger.v3.oas.annotations.security.SecurityScheme;
import org.springframework.context.annotation.Configuration;

@Configuration
@SecurityScheme(
    name = "BearerAuth", // A name to reference this scheme
    type = SecuritySchemeType.HTTP,
    scheme = "bearer",
    bearerFormat = "JWT"
)
public class OpenApiSecurityConfig {
    // This class's only job is to define the security scheme
}
</code></pre>



<p>Once this configuration is added, a green &#8220;Authorize&#8221; button will pop up in the Swagger UI. A developer can click it, paste in their JWT, and every subsequent API call made from the UI will automatically include the <code>Authorization: Bearer &lt;token&gt;</code> header. For more on securely handling secrets like JWT signing keys, you might be interested in our article on <a href="https://goregulus.com/cra-basics/aws-secrets-manager/">using AWS Secrets Manager</a>.</p>
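<p>Behind the scenes, that button simply attaches a standard header to every request the UI fires. This JDK-only sketch builds (without sending) the equivalent request a programmatic client would make; the <code>/api/admin/reports</code> path and the truncated token are hypothetical placeholders.</p>

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class BearerRequestExample {

    // Build the same kind of request the Swagger UI sends after you
    // paste a JWT into the "Authorize" dialog. Nothing is sent here;
    // we only construct the request object.
    static HttpRequest authorizedRequest(String baseUrl, String jwt) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/admin/reports")) // hypothetical endpoint
                .header("Authorization", "Bearer " + jwt)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = authorizedRequest("http://localhost:8080", "eyJhbGciOiJIUzI1NiJ9...");
        System.out.println(req.headers().firstValue("Authorization").orElse("(none)"));
    }
}
```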



<p>To apply this security scheme to all endpoints in a controller, you use the <code>@SecurityRequirement</code> annotation at the class level. For example:</p>



<pre class="wp-block-code"><code>import io.swagger.v3.oas.annotations.security.SecurityRequirement;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/admin")
@SecurityRequirement(name = "BearerAuth") // Links to the @SecurityScheme
public class AdminController {
    // All endpoints here are now marked as secured in the Swagger UI
}
</code></pre>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Defining a <code>SecurityScheme</code> is what bridges the gap between your API&#8217;s security model and the documentation&#8217;s interactivity. It turns a static page into a powerful, authenticated testing tool.</p>
</blockquote>



<h3 class="wp-block-heading">Handling CORS Issues</h3>



<p>One of the most common snags when securing the UI is hitting Cross-Origin Resource Sharing (CORS) errors. The Swagger UI is a JavaScript application running in the browser, and browsers enforce the same-origin policy on the requests it makes. If the UI is served from a different origin than the API it calls (for example, behind a gateway, on a different port during local development, or from a separate documentation host), those requests will be blocked unless your API explicitly allows that origin.</p>



<p>If you open your browser&#8217;s developer console and see network errors when trying to use the UI, a missing or incorrect CORS configuration is the likely culprit. To help with debugging, we&#8217;ve put together a quick reference table.</p>



<h4 class="wp-block-heading">Key Security And CORS Properties For Springdoc</h4>



<p>A reference table with the essential properties for securing your documentation paths and resolving common CORS errors when integrating Spring Security.</p>



<figure class="wp-block-table"><table><tr>
<th>Property</th>
<th>Example Value</th>
<th>Purpose</th>
</tr>
<tr>
<td><code>springdoc.swagger-ui.path</code></td>
<td><code>/swagger-ui.html</code></td>
<td>The main path for the Swagger UI. You must permit access to this in Spring Security.</td>
</tr>
<tr>
<td><code>springdoc.api-docs.path</code></td>
<td><code>/v3/api-docs</code></td>
<td>The path where the OpenAPI JSON specification is served. The UI fetches data from here.</td>
</tr>
<tr>
<td><code>spring.mvc.cors.allowed-origins</code></td>
<td><code>http://localhost:8080</code></td>
<td>Specifies which origins are allowed to make cross-origin requests.</td>
</tr>
<tr>
<td><code>spring.mvc.cors.allowed-methods</code></td>
<td><code>GET,POST,PUT,DELETE</code></td>
<td>Defines the HTTP methods that are permitted in CORS requests.</td>
</tr>
</table></figure>



<p>These properties give you a starting point. For more fine-grained control, a <code>WebMvcConfigurer</code> bean is often the best approach.</p>



<p>Here’s a practical example of a <code>WebMvcConfigurer</code> that allows requests from the Swagger UI&#8217;s origin, which is crucial for local development.</p>



<pre class="wp-block-code"><code>import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/**")
                // Restrict this to your actual domain in production!
                .allowedOrigins("http://localhost:8080")
                .allowedMethods("GET", "POST", "PUT", "DELETE", "OPTIONS")
                .allowedHeaders("*")
                .allowCredentials(true);
    }
}
</code></pre>



<p>This configuration tells your Spring Boot app to trust requests coming from <code>http://localhost:8080</code>, letting the Swagger UI work without being blocked by the browser. Just remember to tighten the <code>allowedOrigins</code> to your specific frontend domain in production.</p>



<h2 class="wp-block-heading">Production-Ready Best Practices And Troubleshooting</h2>



<p>Taking an API documented with <code>springdoc-openapi-starter-webmvc-ui</code> into production is a different game entirely. Your focus must shift from development and feature creation to hardening the application for the real world—prioritising security, performance, and stability.</p>



<p>Before you even think about the documentation UI, you have to be confident in the API itself. Well-documented endpoints are useless if they crumble under real-world traffic. This is where practices like rigorous <a href="https://goreplay.org/blog/load-testing-api-ultimate-guide-building-applications/">load testing for APIs</a> become essential, ensuring your service can handle the pressure.</p>



<h3 class="wp-block-heading">Disabling The Swagger UI In Production</h3>



<p>The single most important practice for production is to disable the interactive Swagger UI. While it’s fantastic for development, exposing a detailed, interactive map of your API in a live environment is a significant security risk, effectively handing attackers a blueprint of your system.</p>



<p>The cleanest way to do this is by using Spring Profiles. You can create a dedicated <code>application-prod.yml</code> file to override your default settings. This practical example will turn off both the UI and the API specification generation when the <code>prod</code> profile is active.</p>



<pre class="wp-block-code"><code># In src/main/resources/application-prod.yml
springdoc:
  api-docs:
    enabled: false
  swagger-ui:
    enabled: false
</code></pre>



<p>When you launch your application with the <code>prod</code> profile (<code>-Dspring.profiles.active=prod</code>), Spring Boot automatically applies these settings. This simple configuration minimises the application&#8217;s attack surface and closes off a common information disclosure vector.</p>



<h3 class="wp-block-heading">Common Troubleshooting Scenarios</h3>



<p>Even with a perfect setup, you can hit frustrating snags. Here are a few of the most common issues developers run into with <strong>springdoc openapi starter webmvc ui</strong> and how to fix them.</p>



<ul class="wp-block-list">
<li><br><p><strong>Endpoints Not Appearing</strong>: The classic &#8220;where&#8217;s my endpoint?&#8221; problem. First, double-check that your controller class is annotated with <code>@RestController</code> and the methods are <code>public</code> with a valid mapping like <code>@GetMapping</code>. Also, confirm your main application class&#8217;s <code>@SpringBootApplication</code> is scanning the correct packages where your controllers live. For example, if your main class is in <code>com.app</code> and your controller is in <code>com.app.controllers</code>, it will be found. If the controller is in <code>com.api.controllers</code>, it won&#8217;t be, unless you explicitly configure the scan path.</p><br></li>



<li><br><p><strong>Annotations Being Ignored</strong>: If your <code>@Operation</code> or <code>@Schema</code> details just aren&#8217;t showing up, you might have a dependency conflict. An old version of <code>swagger-annotations</code> or a leftover <code>springfox</code> dependency on the classpath is a common culprit. Running <code>mvn dependency:tree</code> is a great way to hunt down and exclude these conflicting libraries.</p><br></li>



<li><br><p><strong>404 Errors on Swagger UI Path</strong>: Seeing a 404 at <code>/swagger-ui.html</code> while the rest of your app works usually points to a Spring Security issue. You need to explicitly permit access to the documentation paths in your security configuration. For a deeper dive into managing application endpoints, our guide on <a href="https://goregulus.com/cra-basics/spring-boot-actuator/">leveraging Spring Boot Actuator</a> offers some useful patterns.</p><br></li>
</ul>
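

<p>If the 404 does stem from Spring Security, the documentation paths need an explicit allow rule. Here is a minimal sketch for Spring Security 6 — adjust the paths if you have customised them:</p>



<pre class="wp-block-code"><code>// Minimal security configuration permitting the Springdoc endpoints
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class OpenApiSecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth
                // Permit the Swagger UI and the generated OpenAPI spec
                .requestMatchers("/swagger-ui.html", "/swagger-ui/**", "/v3/api-docs/**").permitAll()
                // Everything else still requires authentication
                .anyRequest().authenticated());
        return http.build();
    }
}
</code></pre>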



<h3 class="wp-block-heading">Strategies For API Versioning</h3>



<p>As your API grows and changes, versioning becomes non-negotiable. The <code>GroupedOpenApi</code> bean is the right tool for this job. It allows you to create separate, distinct documentation sets for different versions of your API, like <code>v1</code> and <code>v2</code>. For a practical example, imagine you have two controllers, <code>ProductV1Controller</code> mapped to <code>/api/v1/products</code> and <code>ProductV2Controller</code> mapped to <code>/api/v2/products</code>. You could set up <code>GroupedOpenApi</code> beans like this:</p>



<pre class="wp-block-code"><code>// In a @Configuration class
@Bean
public GroupedOpenApi apiV1() {
    return GroupedOpenApi.builder()
            .group("products-v1")
            .pathsToMatch("/api/v1/**")
            .build();
}

@Bean
public GroupedOpenApi apiV2() {
    return GroupedOpenApi.builder()
            .group("products-v2")
            .pathsToMatch("/api/v2/**")
            .build();
}
</code></pre>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Proper API versioning isn&#8217;t just a technical detail; it&#8217;s a contract with your API&#8217;s consumers. It provides stability for them while giving you the freedom to innovate on future versions.</p>
</blockquote>



<p>The importance of staying current is reflected in the library&#8217;s own release cadence: the <code>springdoc-openapi-starter-webmvc-ui</code> 2.8.x series saw <strong>9</strong> releases between January and June 2025, alongside a reported <strong>142%</strong> increase in adoption. This rapid cycle helps teams stay aligned with security expectations such as those in the Cyber Resilience Act (CRA). You can find more insights about <a href="https://data.code.gouv.fr/usage/maven/org.springdoc:springdoc-openapi-starter-webmvc-ui">this trend on data.code.gouv.fr</a>.</p>



<h2 class="wp-block-heading">Frequently Asked Questions</h2>



<p>Even with a great library like <code>springdoc-openapi-starter-webmvc-ui</code>, you&#8217;re bound to hit a few common roadblocks. I&#8217;ve seen them trip up developers countless times. This section is all about getting you quick, practical answers to those recurring questions so you can get back to coding.</p>



<h3 class="wp-block-heading">How Do I Change The Default Swagger UI URL?</h3>



<p>By default, Springdoc parks the interactive UI at <code>/swagger-ui.html</code>. One of the first things many teams want to do is change this for better consistency with their application&#8217;s routing.</p>



<p>Thankfully, it&#8217;s just a simple property in your <code>application.yml</code>. This practical example moves the UI to a new path, say <code>/api-docs</code>:</p>



<pre class="wp-block-code"><code>springdoc:
  swagger-ui:
    path: /api-docs
</code></pre>



<p>Just remember, if you&#8217;re using Spring Security, you&#8217;ll need to permit access to this new path in your security configuration. It&#8217;s an easy change that makes your documentation URL feel much more integrated.</p>



<h3 class="wp-block-heading">Why Are My REST Endpoints Not Showing Up?</h3>



<p>This is, without a doubt, the most common &#8220;why isn&#8217;t this working?&#8221; moment. You get the Swagger UI to load, but it’s completely empty. Frustrating, right? Before you start tearing your hair out, run through this checklist.</p>



<ul class="wp-block-list">
<li><strong>Check Your Annotations:</strong> Your controller class must have the <code>@RestController</code> annotation, not just <code>@Controller</code>. Your endpoint methods also need to be <code>public</code> and mapped with something like <code>@GetMapping</code> or <code>@PostMapping</code>.</li>



<li><strong>Is Your Package Scannable?</strong> Spring Boot&#8217;s component scan works from the top down. Make sure your main application class (the one annotated with <code>@SpringBootApplication</code>) is in a parent package relative to your controllers. If it&#8217;s not, Spring will never find them.</li>



<li><strong>Look for Dependency Conflicts:</strong> Lingering <code>springfox</code> or older <code>swagger</code> dependencies are notorious for causing trouble. They can silently hijack the discovery process. Run <code>mvn dependency:tree</code> or <code>gradle dependencies</code> to hunt for and exclude them.</li>
</ul>
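

<p>If your controllers genuinely need to live outside the main class&#8217;s package hierarchy, you can widen the scan explicitly. A minimal sketch — the package names here are illustrative:</p>



<pre class="wp-block-code"><code>// Main class explicitly scanning an additional package for controllers
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication(scanBasePackages = {"com.app", "com.api.controllers"})
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}
</code></pre>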



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>A clean classpath is absolutely essential. I&#8217;ve seen a single, old Swagger library completely prevent Springdoc from discovering any endpoints. It&#8217;s the number one cause of the frustrating &#8220;empty UI&#8221; problem.</p>
</blockquote>



<h3 class="wp-block-heading">Can I Use This With Spring WebFlux Instead Of WebMVC?</h3>



<p>The short answer is no. This specific starter, <code>springdoc-openapi-starter-webmvc-ui</code>, is built exclusively for traditional, servlet-based Spring WebMVC applications. If you try to use it in a reactive project built with Spring WebFlux, it simply won&#8217;t find your endpoints.</p>



<p>For reactive stacks, you must use the correct starter dependency. Here is the practical example for Gradle: <code>implementation 'org.springdoc:springdoc-openapi-starter-webflux-ui:2.8.6'</code>. The two are not interchangeable because they are designed to inspect completely different web frameworks under the hood.</p>
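<p>For Maven builds, the equivalent dependency declaration looks like this (the version shown matches the Gradle example):</p>



<pre class="wp-block-code"><code>&lt;dependency>
    &lt;groupId>org.springdoc&lt;/groupId>
    &lt;artifactId>springdoc-openapi-starter-webflux-ui&lt;/artifactId>
    &lt;version>2.8.6&lt;/version>
&lt;/dependency>
</code></pre>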



<h3 class="wp-block-heading">How Can I Hide A Specific Endpoint Or Controller?</h3>



<p>It&#8217;s common to have internal, utility, or admin endpoints that you don&#8217;t want to expose in your public-facing API documentation. Springdoc gives you a few straightforward ways to handle this.</p>



<p>If you just need to hide a single endpoint, you can add the <code>@Operation(hidden = true)</code> annotation directly to its method. To hide an entire controller from the documentation, you can use the <code>@Hidden</code> annotation on the class itself. For example:</p>



<pre class="wp-block-code"><code>import io.swagger.v3.oas.annotations.Hidden;
import org.springframework.web.bind.annotation.RestController;
//...

@Hidden
@RestController
public class InternalUtilityController {
    // All endpoints in this controller will be hidden
}
</code></pre>



<p>For a broader approach that doesn&#8217;t require touching source code, you can exclude URL patterns directly in your <code>application.yml</code>:</p>



<pre class="wp-block-code"><code>springdoc:
  paths-to-exclude:
    - /internal/**
    - /admin/health-check
</code></pre>



<p>This configuration-driven method is perfect for filtering out entire sections of your API, like all endpoints under an <code>/internal/</code> path.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Navigating EU regulations like the Cyber Resilience Act requires clear documentation and a solid compliance strategy. <strong>Regulus</strong> provides the software platform to assess CRA applicability, map requirements, and generate the evidence needed to confidently place your products on the European market. Gain clarity and reduce compliance costs by visiting <a href="https://goregulus.com">https://goregulus.com</a>.</p>
<p>The post <a href="https://goregulus.com/cra-basics/springdoc-openapi-starter-webmvc-ui/">Springdoc openapi starter webmvc ui: Quick Setup and Secure API Docs</a> first appeared on <a href="https://goregulus.com">Regulus</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>A Complete Guide to Spring Boot Versions for 2026</title>
		<link>https://goregulus.com/cra-basics/spring-boot-versions/</link>
		
		<dc:creator><![CDATA[Igor Smith]]></dc:creator>
		<pubDate>Mon, 30 Mar 2026 06:53:19 +0000</pubDate>
				<category><![CDATA[CRA Basics]]></category>
		<category><![CDATA[Cyber Resilience Act]]></category>
		<category><![CDATA[java compatibility]]></category>
		<category><![CDATA[spring boot versions]]></category>
		<category><![CDATA[spring framework]]></category>
		<category><![CDATA[Vulnerability Management]]></category>
		<guid isPermaLink="false">https://goregulus.com/?p=2125</guid>

					<description><![CDATA[<p>Getting a handle on Spring Boot versions is fundamental to keeping your application secure, supported, and ready for regulations like the EU&#8217;s Cyber Resilience Act (CRA). Each version family, whether it&#8217;s 2.x or 3.x, comes with a specific support lifecycle. If you’re running an outdated version, you&#8217;re exposing your product to known, unpatched security vulnerabilities. [&#8230;]</p>
<p>The post <a href="https://goregulus.com/cra-basics/spring-boot-versions/">A Complete Guide to Spring Boot Versions for 2026</a> first appeared on <a href="https://goregulus.com">Regulus</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Getting a handle on <strong>Spring Boot versions</strong> is fundamental to keeping your application secure, supported, and ready for regulations like the EU&#8217;s Cyber Resilience Act (CRA). Each version family, whether it&#8217;s 2.x or 3.x, comes with a specific support lifecycle. If you’re running an outdated version, you&#8217;re exposing your product to known, unpatched security vulnerabilities.</p>



<h2 class="wp-block-heading">Navigating the Spring Boot Version Landscape</h2>



<p>For any organisation building with Java, managing Spring Boot versions isn&#8217;t just a technical chore—it&#8217;s a critical business function. Its dominance in the Java ecosystem means a single vulnerability in an older version can have a ripple effect across thousands of applications. In fact, recent Snyk data shows that <strong>60% of Java developers</strong> now rely on the Spring Framework for their primary applications, making version awareness a core part of any compliance and security programme.</p>



<p>The timeline below maps out the release history for Spring Boot&#8217;s major versions, showing the journey from version 1.0 to the current 3.x generation.</p>



<p>This progression also highlights a growing emphasis on platform security, especially with the release of version 3.0, which brought significant foundational upgrades.</p>



<p>To help you quickly assess your current standing, this table summarises the support status for each major release. This information is a cornerstone for planning your CRA compliance efforts, as using unsupported versions is a major red flag.</p>



<h3 class="wp-block-heading">Spring Boot Major Version Support Status</h3>



<figure class="wp-block-table"><table><tr>
<th align="left">Major Version</th>
<th align="left">Initial Release</th>
<th align="left">End of OSS Support</th>
<th align="left">Current Status</th>
</tr>
<tr>
<td align="left"><strong>3.x</strong></td>
<td align="left">November 2022</td>
<td align="left">Rolling (12 months per minor)</td>
<td align="left"><strong>Supported</strong></td>
</tr>
<tr>
<td align="left"><strong>2.x</strong></td>
<td align="left">March 2018</td>
<td align="left">November 2023</td>
<td align="left"><strong>EOL</strong></td>
</tr>
<tr>
<td align="left"><strong>1.x</strong></td>
<td align="left">April 2014</td>
<td align="left">August 2019</td>
<td align="left"><strong>EOL</strong></td>
</tr>
</table></figure>



<p>As you can see, any application still on Spring Boot 1.x or 2.x is officially past its End-of-Life (EOL) date for open-source support. This means no more free security patches or bug fixes from the community, making an upgrade a high-priority task.</p>



<h3 class="wp-block-heading">Understanding Version Structure</h3>



<p>Spring Boot uses a standard <code>MAJOR.MINOR.PATCH</code> versioning format. For CRA documentation and risk assessment, the <code>MAJOR</code> and <code>MINOR</code> numbers are the most important because they signal the support window and potential for breaking changes.</p>



<p>Here’s what each number tells you:</p>



<ul class="wp-block-list">
<li><strong>MAJOR (e.g., 3.x.x):</strong> This signals big, often breaking, changes. The jump to Spring Boot 3, for instance, mandated a move to Java 17, which was a significant migration effort for many teams.</li>



<li><strong>MINOR (e.g., 3.2.x):</strong> These releases bring new features and dependency upgrades but are designed to be backward-compatible within the same major version line. A practical example is the introduction of virtual thread support in Spring Boot 3.2.</li>



<li><strong>PATCH (e.g., 3.2.5):</strong> These are all about bug fixes and security patches. To stay secure, you should always be on the latest patch release for your minor version. For instance, <code>3.2.5</code> contains security fixes not present in <code>3.2.4</code>.</li>
</ul>
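

<p>To make this actionable in an audit script, the version string can be split and triaged automatically. The following is a minimal, hypothetical helper (not part of any Spring API) that flags versions from EOL major lines:</p>



<pre class="wp-block-code"><code>// Hypothetical helper: classify a Spring Boot version string for a risk register
public class VersionTriage {

    // Returns true when the major version line is past open-source end-of-life
    public static boolean isEolMajor(String version) {
        int major = Integer.parseInt(version.split("\\.")[0]);
        // The 1.x and 2.x lines no longer receive free security patches
        return major &lt; 3;
    }

    public static void main(String[] args) {
        System.out.println("2.7.5 EOL? " + isEolMajor("2.7.5")); // true
        System.out.println("3.2.5 EOL? " + isEolMajor("3.2.5")); // false
    }
}
</code></pre>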



<p>Let&#8217;s say your team&#8217;s code scan flags version <code>2.7.5</code>. You immediately know it’s part of the 2.x line, which is no longer receiving open-source support. This is a crucial piece of information for your risk register. Finding this version string is simple whether your project uses Maven or Gradle, and our <a href="https://goregulus.com/cra-basics/maven-vs-gradle/">guide comparing Maven vs Gradle</a> can help you navigate your project&#8217;s build files.</p>



<h2 class="wp-block-heading">Why Tracking Spring Boot Versions Is Critical for CRA Compliance</h2>



<p>For any organisation placing products with digital elements on the EU market, tracking <strong>Spring Boot versions</strong> has moved from a technical best practice to a legal imperative. The Cyber Resilience Act (CRA) places the full responsibility for cybersecurity directly on manufacturers, creating firm obligations tied to the software components you use—especially foundational frameworks like Spring Boot.</p>



<p>The CRA demands that manufacturers manage vulnerabilities throughout a product&#8217;s entire lifecycle. This isn&#8217;t a suggestion; it&#8217;s a legal duty to provide security updates for at least <strong>five years</strong> or the product&#8217;s expected lifetime. Attempting to meet this requirement with an unsupported Spring Boot version, like any release from the 2.x line which hit its open-source end-of-life in <strong>November 2023</strong>, is practically impossible.</p>



<h3 class="wp-block-heading">The Role of SBOMs and Post-Market Surveillance</h3>



<p>Regulators will be looking closely at your Software Bill of Materials (SBOM) to check for compliance. If your SBOM reveals an outdated Spring Boot version without a commercial support plan for security patches, it&#8217;s an immediate red flag and a major compliance gap. This is exactly where post-market surveillance becomes so important.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The CRA requires manufacturers to continuously monitor for vulnerabilities in their products&#8217; components. Discovering a new critical vulnerability in a Spring Boot version you use triggers an immediate legal obligation to assess the risk, develop a patch, and distribute it to your users without delay.</p>
</blockquote>



<p>Let&#8217;s say your product uses Spring Boot <code>2.7.10</code>. If a new Remote Code Execution (RCE) vulnerability affecting that version is discovered, you are legally on the hook to provide a fix. Since the open-source community no longer patches the 2.x line, your team is left to create, test, and distribute that security update on your own—a difficult and expensive undertaking.</p>



<p>This entire process must be documented and ready for an audit. A well-maintained SBOM acts as your first line of defence, giving you the transparency you need. To dig deeper into this essential documentation, have a look at our detailed guide on <a href="https://goregulus.com/cra-requirements/cra-sbom-requirements/">CRA SBOM requirements</a>. It&#8217;s also helpful to frame these regulatory demands within broader methodologies like <a href="https://devisia.pro/blog/governance-risk-and-compliance-software">Governance, Risk, and Compliance software</a>, which provides a structured approach to managing these software lifecycle challenges.</p>
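

<p>As a practical starting point, an SBOM can be generated directly from the build. One widely used option for Maven is the CycloneDX plugin; the sketch below wires it into the <code>package</code> phase (the plugin version shown is illustrative — check for the current release):</p>



<pre class="wp-block-code"><code>&lt;plugin>
    &lt;groupId>org.cyclonedx&lt;/groupId>
    &lt;artifactId>cyclonedx-maven-plugin&lt;/artifactId>
    &lt;version>2.8.0&lt;/version>
    &lt;executions>
        &lt;execution>
            &lt;phase>package&lt;/phase>
            &lt;goals>
                &lt;goal>makeAggregateBom&lt;/goal>
            &lt;/goals>
        &lt;/execution>
    &lt;/executions>
&lt;/plugin>
</code></pre>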



<h2 class="wp-block-heading">How to Detect Your Spring Boot Version</h2>



<p>Knowing your exact <strong>Spring Boot version</strong> is the first step in any credible vulnerability management or CRA compliance effort. For engineering and security teams, this isn&#8217;t just about a quick check—it’s about getting an accurate, provable inventory of what’s running in your software.</p>



<p>Fortunately, there are a few straightforward methods to pinpoint the version, whether you’re digging into a Maven or a Gradle project.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/spring-boot-versions-dependency-check.jpg" alt="An illustration of checking Maven dependencies and build configurations for a project."/></figure>



<h3 class="wp-block-heading">Check Your Maven Project Object Model (POM)</h3>



<p>In any Maven-based project, the <code>pom.xml</code> file is your primary source of truth. The Spring Boot version is usually defined in one of two main places.</p>



<ol class="wp-block-list">
<li><br><p><strong>The <code>&lt;parent></code> Tag:</strong> Most projects inherit their core dependencies and build configurations directly from the <code>spring-boot-starter-parent</code>. The version is stated explicitly right there.</p><br><pre><code class="language-xml">&lt;parent><br>    &lt;groupId>org.springframework.boot&lt;/groupId><br>    &lt;artifactId>spring-boot-starter-parent&lt;/artifactId><br>    &lt;version>2.7.18&lt;/version> &lt;!-- This is the Spring Boot version --><br>    &lt;relativePath/> &lt;!-- lookup parent from repository --><br>&lt;/parent><br></code></pre><br></li>



<li><br><p><strong>The <code>&lt;properties></code> Tag:</strong> In some setups, the version might be centralised in the <code>&lt;properties></code> section and then referenced by other dependencies. You&#8217;ll want to look for a property named something like <code>spring-boot.version</code>.</p><br><pre><code class="language-xml">&lt;properties><br>    &lt;java.version>17&lt;/java.version><br>    &lt;spring-boot.version>3.2.5&lt;/spring-boot.version> &lt;!-- The version is defined here --><br>&lt;/properties><br></code></pre><br></li>
</ol>
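

<p>Rather than reading the POM by eye, you can also ask Maven to print the resolved value. Assuming your project inherits from <code>spring-boot-starter-parent</code>, this command outputs just the version string:</p>



<pre class="wp-block-code"><code>mvn help:evaluate -Dexpression=project.parent.version -q -DforceStdout
</code></pre>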



<h3 class="wp-block-heading">Find the Version in a Gradle Build</h3>



<p>For projects built with Gradle, your investigation will focus on the <code>build.gradle</code> (for Groovy) or <code>build.gradle.kts</code> (for Kotlin) file. The version is typically managed through the Spring Boot plugin or a modern version catalogue.</p>



<ul class="wp-block-list">
<li><br><p><strong>Plugin Declaration:</strong> Often, the simplest case is finding the version declared directly in the <code>plugins</code> block where the Spring Boot plugin is applied.</p><br><pre><code class="language-groovy">plugins {<br>    id 'org.springframework.boot' version '3.2.5' // This line specifies the version<br>    id 'io.spring.dependency-management' version '1.1.4'<br>    id 'java'<br>}<br></code></pre><br></li>



<li><br><p><strong>Version Catalogue (<code>libs.versions.toml</code>):</strong> In modern Gradle projects that centralise dependency management, you’ll need to look in the <code>gradle/libs.versions.toml</code> file. Check for an entry like <code>springBoot</code>.</p><br><pre><code class="language-toml">[versions]<br>springBoot = "3.2.5"<br></code></pre><br></li>
</ul>
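

<p>As with Maven, Gradle can report the resolved version for you. The <code>dependencyInsight</code> task shows which <code>spring-boot</code> artefact actually ends up on a given configuration:</p>



<pre class="wp-block-code"><code>./gradlew dependencyInsight --dependency spring-boot --configuration runtimeClasspath
</code></pre>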



<h3 class="wp-block-heading">Use Build Tool Commands for a Deeper Look</h3>



<p>Sometimes, the declared version in your build file is just the start of the story. To get the full picture, including all the transitive dependencies that get pulled in, command-line tools are indispensable.</p>



<p>For Maven users, running <code>mvn dependency:tree</code> is essential. It prints a complete hierarchy of every project dependency, making it easy to spot the exact <code>spring-boot</code> JARs being used in the final build. A practical example of the output might look like this:</p>



<pre class="wp-block-code"><code>&#91;INFO] --- maven-dependency-plugin:3.1.2:tree (default-cli) @ my-app ---
&#91;INFO] com.example:my-app:jar:0.0.1-SNAPSHOT
&#91;INFO] - org.springframework.boot:spring-boot-starter-web:jar:3.2.5:compile
&#91;INFO]    +- org.springframework.boot:spring-boot-starter:jar:3.2.5:compile
&#91;INFO]    |  - org.springframework.boot:spring-boot:jar:3.2.5:compile
&#91;INFO]    |     - org.springframework:spring-context:jar:6.1.6:compile
</code></pre>



<p>This output clearly shows <code>spring-boot-starter-web</code> at version <code>3.2.5</code>.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>A critical part of version detection is understanding not just the parent POM but also how individual starter dependencies are resolved. Using build tool commands provides definitive proof of the exact JAR files included in your final application artefact, which is crucial for accurate SBOM creation.</p>
</blockquote>



<p>The most scalable way to handle this is with a Software Composition Analysis (SCA) tool. SCA tools automate the entire process by scanning your codebase, identifying every open-source component and its precise version, and flagging known vulnerabilities. This is fundamental for streamlining your CRA compliance efforts.</p>



<p>For more on managing your application&#8217;s components at runtime, you might find our guide on the <a href="https://goregulus.com/cra-basics/spring-boot-actuator/">Spring Boot Actuator</a> useful, as it offers valuable runtime information about your application.</p>
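

<p>The framework also exposes its own version at runtime, which is handy for health endpoints or startup logging. This sketch uses the <code>SpringBootVersion</code> class that ships with Spring Boot:</p>



<pre class="wp-block-code"><code>import org.springframework.boot.SpringBootVersion;

// Logs the exact Spring Boot version baked into the running artefact
public class VersionLogger {
    public static void main(String[] args) {
        System.out.println("Running Spring Boot " + SpringBootVersion.getVersion());
    }
}
</code></pre>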



<h2 class="wp-block-heading">Spring Boot 3.x Versions Explained</h2>



<p>The Spring Boot 3.x release line is a huge leap forward for the framework. For teams preparing documentation for regulations like the EU&#8217;s Cyber Resilience Act, understanding this generation is non-negotiable, as it sets the current baseline for modern and secure applications.</p>



<p>The single biggest change was making <strong>Java 17 the mandatory minimum</strong>. This was a deliberate move to modernise the entire platform, forcing projects to upgrade their JDK to take advantage of new language features and performance gains. It was a clear signal: the old ways were no longer enough.</p>



<p>Alongside the Java bump, Spring Boot 3.0 finalised the move away from Java EE to <strong>Jakarta EE 9+</strong>. This was a major breaking change, introducing a complete namespace shift from <code>javax.*</code> to <code>jakarta.*</code> across the board. You can&#8217;t just drop in the new version and expect it to work.</p>



<p>A simple, practical example is how you define a JPA entity. In older versions, you used <code>javax.persistence.Entity</code>. With any 3.x release, that must be updated to <code>jakarta.persistence.Entity</code>.</p>



<pre class="wp-block-code"><code>// Spring Boot 2.x
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class User {
    @Id
    private Long id;
    // ...
}
</code></pre>



<pre class="wp-block-code"><code>// Spring Boot 3.x
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
public class User {
    @Id
    private Long id;
    // ...
}
</code></pre>



<p>This isn&#8217;t a minor tweak. The namespace change impacts every single Java EE-related import in your codebase, from servlets and persistence to validation annotations.</p>



<h3 class="wp-block-heading">Key Releases in the 3.x Line</h3>



<p>Each new minor release in the 3.x family isn&#8217;t just about bells and whistles; it’s about critical dependency upgrades and security fixes. Knowing the lifecycle of these <strong>Spring Boot versions</strong> is fundamental to maintaining a strong security posture.</p>



<ul class="wp-block-list">
<li><strong>Spring Boot 3.0 (November 2022):</strong> The groundbreaking release. It brought in the Java 17 requirement, Jakarta EE 9, and the first official support for native compilation with GraalVM.</li>



<li><strong>Spring Boot 3.1 (May 2023):</strong> Built on the foundation by expanding GraalVM native image support and adding official support for Docker Compose, which made local development setups much simpler.</li>



<li><strong>Spring Boot 3.2 (November 2023):</strong> A big one for performance. This version introduced support for virtual threads from Java 21&#8217;s Project Loom, along with the new <code>RestClient</code>, a modern synchronous alternative to <code>RestTemplate</code>.</li>



<li><strong>Spring Boot 3.3 (May 2024):</strong> Continued to polish Project Loom support and rolled out a host of dependency upgrades to harden security and boost performance.</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Staying on the latest minor version isn&#8217;t just about getting new features; it is a direct security measure. Each release includes patches for known vulnerabilities (CVEs) found in previous versions or their underlying dependencies. For example, early 3.2.x patch releases picked up the Spring Framework fix for CVE-2024-22243, a URL-parsing vulnerability that could enable open redirect or SSRF attacks. Failing to upgrade from an earlier <code>3.2.x</code> release leaves your application exposed to this specific risk.</p>
</blockquote>



<p>The rapid adoption of newer <strong>Spring Boot versions</strong> is a trend that aligns perfectly with the post-market surveillance duties required by the CRA for European manufacturers. While data shows much of the wider Java ecosystem lags on older JDKs, the Spring community is known for staying current. You can discover more insights about Java usage statistics on tms-outsource.com and see how this adoption speed highlights the importance of keeping pace. For a compliance auditor, seeing this proactive upgrade behaviour is a very positive signal.</p>



<h3 class="wp-block-heading">End of Support and Upgrade Planning</h3>



<p>The open-source support window for any Spring Boot minor version is typically <strong>12 months</strong> from its release date. This is a critical deadline. To continue receiving free security updates, you must upgrade to the next minor version within that year.</p>



<p>For instance, open-source support for the Spring Boot 3.2.x line ended in late 2024. If your organisation is still running it, you&#8217;ll either need a commercial support plan or you must upgrade to 3.3.x to remain secure and compliant.</p>



<h2 class="wp-block-heading">Navigating Java and JDK Compatibility</h2>



<p>The connection between your <strong>Spring Boot version</strong> and its Java Development Kit (JDK) is more than just a technical detail—it’s a cornerstone of your security and compliance strategy. The two are tightly coupled. Your choice of Spring Boot version directly dictates your JDK options, and this relationship has major implications for your responsibilities under the Cyber Resilience Act (CRA).</p>



<p>A perfect example is the huge shift that came with Spring Boot 3.x. It mandated <strong>Java 17 or higher</strong> as its baseline, a deliberate move to modernise the framework and take advantage of new platform features. This stands in sharp contrast to the older Spring Boot 2.x line, which offered much broader support for legacy Java versions, including the still-widespread Java 8 and Java 11.</p>



<h3 class="wp-block-heading">The Importance of Long-Term Support (LTS)</h3>



<p>For any product you place on the EU market, using a Java version with Long-Term Support (LTS) is non-negotiable for meeting your CRA obligations. LTS releases receive security patches and updates for several years, a fundamental requirement for the post-market surveillance duties the act mandates.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Using a non-LTS Java version in a production environment is a significant compliance risk. Recent data confirms that <strong>less than 2%</strong> of applications use non-LTS versions, meaning production-grade Spring Boot applications almost exclusively rely on stable, supported Java releases.</p>
</blockquote>



<p>The compatibility matrix below provides a clear, at-a-glance mapping of major Spring Boot versions to their corresponding Java LTS requirements. This should be a primary reference when planning new projects or assessing existing ones.</p>



<h3 class="wp-block-heading">Spring Boot and Java LTS Version Compatibility Matrix</h3>



<p>This table maps Spring Boot major versions to their required and supported Java Long-Term Support (LTS) versions.</p>



<figure class="wp-block-table"><table><tr>
<th align="left">Spring Boot Version</th>
<th align="left">Minimum Java Version</th>
<th align="left">Maximum Supported Java Version</th>
<th align="left">Recommended Java LTS</th>
</tr>
<tr>
<td align="left"><strong>Spring Boot 3.x</strong></td>
<td align="left">Java 17</td>
<td align="left">Java 21+</td>
<td align="left">Java 17, Java 21</td>
</tr>
<tr>
<td align="left"><strong>Spring Boot 2.x</strong></td>
<td align="left">Java 8</td>
<td align="left">Java 17</td>
<td align="left">Java 8, Java 11</td>
</tr>
</table></figure>



<p>As the matrix shows, if your product uses any Spring Boot 3.x version, your absolute minimum is Java 17. Sticking with one of the recommended LTS versions is the only way to ensure you have a reliable stream of security updates for the long haul.</p>
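

<p>To enforce that minimum in the build itself, you can pin the JDK with a Gradle toolchain, which fails fast when an incompatible JDK is used. A minimal sketch for a Spring Boot 3.x project&#8217;s <code>build.gradle</code>:</p>



<pre class="wp-block-code"><code>java {
    toolchain {
        // Build fails early unless a Java 17 toolchain is available
        languageVersion = JavaLanguageVersion.of(17)
    }
}
</code></pre>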



<h3 class="wp-block-heading">Choosing Your JDK Distribution</h3>



<p>But the Java version isn&#8217;t the whole story. Your choice of JDK <em>distribution</em>—like <a href="https://adoptium.net/">Eclipse Adoptium</a>, <a href="https://aws.amazon.com/corretto/">Amazon Corretto</a>, or <a href="https://www.oracle.com/java/technologies/downloads/">Oracle JDK</a>—is just as important. Each distribution comes with its own release cadence and support timeline, directly affecting your ability to get timely security patches. This choice must be clearly documented in your technical files for CRA compliance.</p>



<p>Take a Spring Boot 3.2 application. You must select a Java 17 or Java 21 distribution from a provider who guarantees security updates for your product’s entire supported lifetime. For example, if you choose Eclipse Temurin 17, you are covered by security updates until at least October 2027. We&#8217;ve seen a major market shift here, with many developers moving away from Oracle JDK. In fact, <strong>36% of developers</strong> switched to alternative OpenJDK distributions last year, and Eclipse Adoptium saw a <strong>50% year-over-year growth</strong> in adoption. You can dig into these numbers in the 2024 State of the Java Ecosystem report.</p>



<p>This trend highlights just how important it is to select a JDK provider that aligns with your long-term security and compliance strategy, not just your immediate technical needs.</p>



<h2 class="wp-block-heading">Your Upgrade and Remediation Playbook</h2>



<p>To manage outdated <strong>Spring Boot versions</strong> and prepare for regulations like the Cyber Resilience Act, you need a clear, actionable plan. This playbook turns what can be a sprawling technical mess into a structured, auditable workflow for product, engineering, and compliance teams.</p>



<p>The main goal is to get your applications off unsupported releases—especially the widely-used but now EOL Spring Boot 2.7.x line—and onto a current, supported version in the 3.x family. A Spring Boot upgrade isn&#8217;t just about changing a version number in your build file; it&#8217;s a strategic project that needs proper planning, especially if you&#8217;re incorporating broader <a href="https://www.wondermentapps.com/blog/legacy-system-modernization-strategies/">Legacy System Modernization Strategies</a> for long-term health.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/spring-boot-versions-upgrade-roadmap.jpg" alt="A flowchart outlining an Upgrade Playbook Roadmap with steps: Assess, Plan, Migrate (Java, Spring Boot), and Verify."/></figure>



<h3 class="wp-block-heading">Step 1: Assess and Plan</h3>



<p>Don&#8217;t even think about writing code yet. The first step is always a thorough assessment. Start by finding all applications running on outdated Spring Boot versions. You can get a complete picture of all dependencies, including the transitive ones, by running <code>mvn dependency:tree</code>. If you need a refresher on that, we have a guide on how to <a href="https://goregulus.com/cra-basics/mvn-dependency-tree/">use the Maven dependency tree</a>.</p>
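


<p>Because the tree output is plain text, it is easy to scan automatically during the assessment step. The following sketch is illustrative only: it assumes the standard <code>[INFO]</code>-prefixed Maven output and flags any <code>org.springframework.boot</code> artifact on the EOL 2.x line.</p>



<pre class="wp-block-code"><code>import re

# A short, illustrative fragment of mvn dependency:tree output.
TREE_OUTPUT = """\
[INFO] com.example:shop:jar:1.0.0
[INFO] +- org.springframework.boot:spring-boot-starter-web:jar:2.7.18:compile
[INFO] |  +- org.springframework.boot:spring-boot-starter:jar:2.7.18:compile
"""

def find_eol_spring_boot(tree_text):
    """Return Spring Boot artifact versions that belong to the EOL 2.x line."""
    pattern = re.compile(r"org\.springframework\.boot:[\w.-]+:jar:(2\.[\d.]+)")
    return sorted(set(pattern.findall(tree_text)))

print(find_eol_spring_boot(TREE_OUTPUT))  # ['2.7.18']
</code></pre>



<p>Wiring a scan like this into your inventory process gives you a repeatable, auditable record of which applications still need the upgrade.</p>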



<p>Next, you need to document every breaking change between your current version and the target. The biggest hurdle for most teams is the mandatory <code>javax</code> to <code>jakarta</code> namespace migration required for Spring Boot 3.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The switch from <code>javax.persistence</code> to <code>jakarta.persistence</code> is a classic example of a high-impact breaking change. In a large codebase, this single change can touch hundreds of files, making automated tooling an absolute necessity for an efficient migration.</p>
</blockquote>



<p>With this assessment in hand, build a detailed project plan that outlines your timelines, resources, and testing strategy. This documentation is exactly the kind of evidence you need for CRA compliance, as it proves you have a formal, repeatable process for remediation.</p>



<h3 class="wp-block-heading">Step 2: Migrate and Verify</h3>



<p>Now you can get your hands dirty. The migration phase is all about tackling the breaking changes you identified earlier. For the <code>javax</code> to <code>jakarta</code> migration, Spring offers an official migration tool that automates a huge chunk of the refactoring work.</p>



<p><strong>Practical Example: Using the Spring Boot Migrator</strong></p>



<p>The <code>spring-boot-migrator</code> is a command-line tool that rewrites your source code for you.</p>



<pre class="wp-block-code"><code># Example command to run the migrator on your project
$ java -jar spring-boot-migrator-0.1.0-SNAPSHOT.jar \
    -r javax-to-jakarta \
    -p /path/to/your/project
</code></pre>



<p>Running this command transforms all the relevant imports and dependencies across your project, saving an enormous amount of manual effort. After the automated steps are complete, go into your <code>pom.xml</code> or <code>build.gradle</code> and update it to your target <strong>Spring Boot version</strong>, like <strong>3.3.1</strong>.</p>



<p>Once the code is updated, it’s time to verify everything still works. Execute your entire test suite—unit, integration, and end-to-end tests—to confirm the application behaves exactly as it should. Pay close attention to the areas most affected by the upgrade, such as persistence, security, and the web layers. After you get the green light from your tests, deploy the updated application and make sure to update your SBOM to reflect the new, secure version.</p>



<h2 class="wp-block-heading">Frequently Asked Questions About Spring Boot Versions</h2>



<p>When you’re staring down a regulatory deadline like the EU’s Cyber Resilience Act (CRA), a lot of questions about your tech stack come into sharp focus. For product managers, security teams, and compliance officers, understanding your <strong>Spring Boot versions</strong> is non-negotiable.</p>



<p>Here are direct answers to the most common questions we see, written to give you the clarity needed to build your compliance case.</p>



<h3 class="wp-block-heading">Can I Use Spring Boot 2.x in a Product Sold in the EU After the CRA Deadline?</h3>



<p>This is a high-risk strategy that almost certainly leads to non-compliance. The entire Spring Boot 2.x line reached its official end-of-life in <strong>November 2023</strong>. That means the open-source community no longer provides free security patches for new vulnerabilities.</p>



<p>The CRA demands that manufacturers actively manage vulnerabilities and deliver security updates. If you&#8217;re relying on unsupported Open Source Software (OSS), you have no credible way to meet that obligation.</p>



<p>Your only realistic path forward with an EOL version is to purchase a commercial support contract from a provider like VMware. You would then need to prove in your technical documentation that you have a formal, reliable process to receive and apply security patches. Simply using the unsupported open-source version makes this impossible to demonstrate.</p>



<h3 class="wp-block-heading">What Is the Most Important Action for CRA Compliance with Spring Boot?</h3>



<p>Your single most critical action is to generate and maintain an accurate Software Bill of Materials (SBOM) for every product. Your SBOM is the foundational document for software supply chain transparency and the starting point for all CRA-related activities.</p>



<p>The SBOM must list the exact <strong>Spring Boot version</strong> you are using. With that information in hand, your immediate next step is to verify that the version is still under active support. Ideally, you should be on the latest stable release of the current 3.x line to get the most comprehensive security coverage.</p>



<p>If your SBOM uncovers an outdated or unsupported version, you must create a documented, time-bound plan to upgrade it. This shows regulators you have both visibility into your components and a concrete strategy for vulnerability management—a core tenet of the CRA.</p>
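


<p>To make this concrete, here is a minimal sketch of an automated SBOM check. It assumes a CycloneDX-style SBOM already parsed from JSON into a Python dict; the components shown and the simple &#8220;anything on 2.x is EOL&#8221; rule are illustrative, not a complete support-lifecycle lookup.</p>



<pre class="wp-block-code"><code># A tiny excerpt of a CycloneDX-style SBOM, already parsed from JSON.
SBOM = {
    "components": [
        {"group": "org.springframework.boot", "name": "spring-boot", "version": "2.7.18"},
        {"group": "com.fasterxml.jackson.core", "name": "jackson-databind", "version": "2.17.1"},
    ]
}

def spring_boot_version(sbom):
    """Pull the Spring Boot version out of the SBOM, or None if absent."""
    for comp in sbom["components"]:
        if comp["group"] == "org.springframework.boot" and comp["name"] == "spring-boot":
            return comp["version"]
    return None

def is_unsupported(version):
    """The entire 2.x line is EOL, so flag it for remediation."""
    return version is not None and version.startswith("2.")

version = spring_boot_version(SBOM)
print(version, is_unsupported(version))  # 2.7.18 True
</code></pre>



<p>Running a check like this on every SBOM you publish turns the compliance requirement into a routine, auditable CI step.</p>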



<h3 class="wp-block-heading">How Does My Choice of Java JDK Affect Spring Boot Compliance?</h3>



<p>Your choice of Java Development Kit (JDK) is every bit as critical as your Spring Boot version. The two are tightly coupled. For instance, the entire Spring Boot 3.x line requires <strong>Java 17 or later</strong>, making an older JDK a complete non-starter.</p>



<p>More importantly, you have to use a JDK distribution that is itself receiving timely security updates. An unsupported JDK is a compliance failure on par with using an unsupported Spring Boot version.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Your compliance documentation needs to be precise. It isn&#8217;t enough to say you use &#8220;Java 17&#8221;. You must specify the exact distribution (e.g., Eclipse Adoptium, Amazon Corretto) and confirm its support lifecycle aligns with your product&#8217;s obligations in the EU. Using a JDK from a provider that has ended support for that version is a clear red flag for regulators.</p>
</blockquote>



<h3 class="wp-block-heading">What Should I Do When a CVE Is Announced for My Spring Boot Version?</h3>



<p>The CRA mandates a formal, documented vulnerability handling process. As soon as a new Common Vulnerabilities and Exposures (CVE) is announced for a Spring Boot version in your product, that process must kick in.</p>



<ol class="wp-block-list">
<li><strong>Assess:</strong> First, you have to determine if your specific configuration and usage make you vulnerable. Not all CVEs affect all applications.</li>



<li><strong>Prioritise:</strong> If you are affected, you must evaluate the CVE&#8217;s severity. Under the CRA, critical vulnerabilities demand a rapid response.</li>



<li><strong>Remediate:</strong> The primary and expected remediation path is to upgrade to a patched version of Spring Boot as quickly as your development and testing processes allow.</li>
</ol>



<p><strong>Practical Example of Remediation</strong><br>Let&#8217;s say your product uses Spring Boot <code>3.3.0</code>, and a critical CVE is announced with a fix in version <code>3.3.1</code>. Your documented process should trigger your team to:</p>



<ul class="wp-block-list">
<li>Immediately start testing the <code>3.3.1</code> update in a staging environment.</li>



<li>Deploy the patched version to all affected products on the market in a timely manner.</li>



<li>Document the entire response—from discovery and assessment to final deployment—as evidence for your technical file.</li>
</ul>



<p>A structured response like this is exactly what auditors look for to confirm you are meeting your post-market surveillance and security duties.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Understanding and managing your obligations under the Cyber Resilience Act can be complex. <strong>Regulus</strong> provides a clear, step-by-step platform to assess CRA applicability, generate a tailored requirements matrix, and build your technical documentation with confidence. Gain clarity and reduce compliance costs by visiting <a href="https://goregulus.com">https://goregulus.com</a>.</p>
<p>The post <a href="https://goregulus.com/cra-basics/spring-boot-versions/">A Complete Guide to Spring Boot Versions for 2026</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>CRA Incident vs Vulnerability Definition: A Practical Guide for 2026</title>
		<link>https://goregulus.com/cra-basics/cra-incident-vs-vulnerability-definition/</link>
		
		<dc:creator><![CDATA[Igor Smith]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 07:45:17 +0000</pubDate>
				<category><![CDATA[CRA Basics]]></category>
		<category><![CDATA[CRA Incident vs Vulnerability Definition]]></category>
		<category><![CDATA[Cyber Resilience Act]]></category>
		<category><![CDATA[EU Cybersecurity]]></category>
		<category><![CDATA[Incident Reporting]]></category>
		<category><![CDATA[Vulnerability Management]]></category>
		<guid isPermaLink="false">https://goregulus.com/uncategorized/cra-incident-vs-vulnerability-definition/</guid>

					<description><![CDATA[<p>Under the Cyber Resilience Act (CRA), the core difference between a vulnerability and an incident boils down to potential versus actual harm. A vulnerability is a security flaw that could be exploited, representing a potential risk. An incident, on the other hand, is a security event that has actually compromised your product. Decoding the CRA&#8217;s [&#8230;]</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-incident-vs-vulnerability-definition/">CRA Incident vs Vulnerability Definition: A Practical Guide for 2026</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Under the Cyber Resilience Act (CRA), the core difference between a vulnerability and an incident boils down to <em>potential versus actual harm</em>. A <strong>vulnerability</strong> is a security flaw that <em>could</em> be exploited, representing a potential risk. An <strong>incident</strong>, on the other hand, is a security event that has <em>actually</em> compromised your product.</p>



<h2 class="wp-block-heading">Decoding the CRA&#8217;s Definitions: Incident vs Vulnerability</h2>



<p>The Cyber Resilience Act draws a precise, legally-binding line between a vulnerability and an incident. For any manufacturer of digital products, grasping this distinction isn&#8217;t just important—it&#8217;s the first and most critical step towards compliance.</p>



<p>To put it simply, a <strong>vulnerability</strong> is a weakness in your code or system. An <strong>incident</strong> is a successful breach that may or may not have used that weakness to get in. For example, finding a flaw in your smart lock&#8217;s software that could theoretically allow a bypass is a vulnerability. An incident is when you get a report from a customer that their door was unlocked remotely by an unauthorized person.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-incident-vs-vulnerability-definition-vulnerability-incident.jpg" alt="A diagram illustrates vulnerability (cracked folder, potential harm) leading to an incident (broken padlock, actual compromise) when actively exploited."/></figure>



<p>This distinction directly shapes your response and reporting obligations. The CRA mandates different actions depending on whether you&#8217;re dealing with a theoretical flaw or an active security failure. Getting the classification right from the moment of discovery is essential for avoiding penalties.</p>



<h3 class="wp-block-heading">Key Distinctions and Triggers</h3>



<p>A crucial sub-category the CRA introduces is the <strong>‘actively exploited vulnerability’</strong>. This isn&#8217;t just any flaw; it&#8217;s a known weakness that attackers are confirmed to be using in the wild. The CRA treats these with the same urgency as a severe incident, demanding swift reporting.</p>



<p>This dual-reporting framework is designed to force cybersecurity transparency, which the European Commission found was a major gap. To close it, the CRA requires manufacturers to report <strong>actively exploited vulnerabilities</strong> to both their national CSIRT and ENISA within <strong>24 hours</strong>. Severe incidents, defined as those with a negative impact on a product&#8217;s security, follow a similar tight deadline. You can explore more about these reporting duties in a <a href="https://www.taylorwessing.com/en/insights-and-events/insights/2025/11/cyber-resilience-act-overview">detailed CRA overview on taylorwessing.com</a>.</p>



<p>Let&#8217;s take a practical example. Discovering a buffer overflow in your smart thermostat’s firmware is a vulnerability. However, once you receive credible threat intelligence that a botnet is using that <em>specific</em> flaw to compromise thermostats, it becomes a reportable, <strong>actively exploited vulnerability</strong>. If that compromise then leads to a mass shutdown of devices, it escalates into a reportable <strong>incident</strong>.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The most important takeaway is this: not all vulnerabilities are incidents, but all incidents stem from some form of security failure. The CRA’s focus is on the <em>active exploitation</em> of a vulnerability and the <em>actual impact</em> of an incident.</p>
</blockquote>



<p>To help your teams quickly differentiate between these critical concepts, the table below offers a clear, side-by-side comparison.</p>



<h3 class="wp-block-heading">Quick Guide: Incident vs Vulnerability Under the CRA</h3>



<p>This summary table contrasts the core definitions, primary triggers, and required initial actions for incidents and vulnerabilities as defined by the Cyber Resilience Act.</p>



<figure class="wp-block-table"><table><tr>
<th align="left">Attribute</th>
<th align="left">CRA Vulnerability</th>
<th align="left">CRA Incident</th>
</tr>
<tr>
<td align="left"><strong>Definition</strong></td>
<td align="left">A weakness in a product that can be exploited by a threat. <strong>Example:</strong> An SQL injection flaw in your web portal&#039;s login page.</td>
<td align="left">An event that negatively impacts the security of a product. <strong>Example:</strong> An attacker uses the SQL injection flaw to steal customer data.</td>
</tr>
<tr>
<td align="left"><strong>Primary Trigger</strong></td>
<td align="left">Discovery that a flaw is being <em>actively exploited</em> in the wild.</td>
<td align="left">A security compromise with a demonstrable adverse effect.</td>
</tr>
<tr>
<td align="left"><strong>Initial Action</strong></td>
<td align="left">Triage, assess for active exploitation, and prepare for reporting if exploited.</td>
<td align="left">Contain the breach, assess the impact, and begin the reporting process.</td>
</tr>
</table></figure>



<p>Using this clear distinction ensures your response is not only technically sound but also fully compliant with the CRA&#8217;s strict notification timelines and procedures.</p>
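


<p>The decision logic in the table can be captured in a small triage helper. The sketch below is our own shorthand for internal tooling, not legal advice; the field names are assumptions about how your issue tracker might label a finding.</p>



<pre class="wp-block-code"><code>def classify(finding):
    """Map a security finding onto the CRA categories from the table above."""
    if finding.get("actual_impact"):
        # A demonstrable adverse effect on the product: a severe incident.
        return "severe incident: start 24h incident reporting"
    if finding.get("actively_exploited"):
        # A flaw confirmed to be abused in the wild: reportable vulnerability.
        return "actively exploited vulnerability: start 24h reporting"
    return "vulnerability: triage, monitor for exploitation, remediate"

print(classify({"actual_impact": True}))
print(classify({"actively_exploited": True}))
print(classify({}))
</code></pre>



<p>Encoding the rule this way forces the classification question to be answered explicitly at intake, rather than left to ad-hoc judgement mid-crisis.</p>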



<h2 class="wp-block-heading">What Qualifies as a Reportable Vulnerability Under the CRA</h2>



<p>The Cyber Resilience Act makes a crucial distinction that every manufacturer needs to understand: not every security flaw requires an immediate report. The regulation zeroes in on one specific type that demands urgent action—the <strong>actively exploited vulnerability</strong>.</p>



<p>This shifts the conversation from theoretical bugs to real-world threats. A reportable vulnerability isn&#8217;t just a potential weakness in your code; it&#8217;s a confirmed flaw that you know attackers are using in the wild. This is a game-changer. The <strong>24-hour reporting clock</strong> doesn&#8217;t start the moment your team finds a bug. It starts the second you have knowledge that it&#8217;s being exploited.</p>



<h3 class="wp-block-heading">Identifying Active Exploitation</h3>



<p>So, what does &#8220;active exploitation&#8221; actually look like in practice? It’s all about connecting a known vulnerability in your product to attack campaigns observed in the wild. The burden is on you, the manufacturer, to monitor for these threats, as evidence can come from many different places.</p>



<p>Practical examples of what qualifies as an <strong>actively exploited vulnerability</strong> include:</p>



<ul class="wp-block-list">
<li><strong>Public Proof-of-Concept (PoC):</strong> An exploit script for a zero-day flaw in your product gets published online, and security researchers quickly confirm it works.</li>



<li><strong>Threat Intelligence Feeds:</strong> Your security provider shares indicators of compromise (IoCs)—like malicious IP addresses or file hashes—that directly match a flaw you&#8217;ve identified in your device&#8217;s firmware.</li>



<li><strong>Customer Reports:</strong> A customer reports strange activity, and your investigation confirms their device was breached using a specific, known vulnerability.</li>



<li><strong>Dark Web Chatter:</strong> You discover credible discussions on a dark web forum where attackers are trading a working exploit for your IoT camera&#8217;s remote access protocol.</li>
</ul>



<p>Even a bug you discover internally through a penetration test or a bug bounty programme becomes reportable the instant you find evidence of its exploitation elsewhere. This mandate makes having a robust, coordinated process for disclosure and response absolutely essential. You can learn more about building these workflows in our guide on <strong><a href="https://goregulus.com/cra-requirements/cra-vulnerability-handling/">CRA vulnerability handling</a></strong>.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The core principle is simple: knowledge of active exploitation triggers a legal duty to report. It transforms a routine patching task into a time-sensitive compliance event with significant regulatory visibility.</p>
</blockquote>



<h3 class="wp-block-heading">The High Stakes of Reporting</h3>



<p>The financial penalties for failing to meet the CRA&#8217;s reporting requirements are substantial. Fines for serious non-compliance can reach as high as <strong>EUR 15 million or 2.5%</strong> of a company&#8217;s worldwide annual turnover. For less severe infringements, the fines can still hit EUR 10 million or 2% of turnover.</p>



<p>These figures underscore just how seriously regulators are taking this. The ability to distinguish between a standard vulnerability and an actively exploited one is no longer just good practice—it&#8217;s a high-stakes requirement. You can find more details on these <strong>CRA penalties on cyberresilienceact.eu</strong>.</p>



<p>Right, let&#8217;s unpack the difference between a security event and something that triggers the CRA&#8217;s reporting obligations. It&#8217;s a critical distinction that many teams get wrong.</p>



<h2 class="wp-block-heading">When an Event Becomes a Reportable CRA Incident</h2>



<p>While an actively exploited vulnerability is all about an attacker&#8217;s actions, a reportable <strong>CRA incident</strong> is defined entirely by its impact. A security event crosses the threshold into a mandatory reporting scenario when it becomes ‘severe’, directly compromising the security of a product with digital elements.</p>



<p>The legislation points to adverse effects on the <strong>confidentiality, integrity, or availability (CIA)</strong> of data or functions. Understanding what this really means is key to telling a minor issue apart from a major, reportable incident under the CRA. The core question you must ask is: did the event negatively impact the product&#8217;s ability to protect its data and core functions?</p>



<h3 class="wp-block-heading">Translating Impact into Real-World Scenarios</h3>



<p>To get out of the legal jargon and into the real world, let&#8217;s map these concepts to situations you might actually face. Each of these examples would almost certainly trigger the CRA&#8217;s <strong>24-hour</strong> incident reporting clock because they represent a severe and demonstrable impact.</p>



<ul class="wp-block-list">
<li><strong>Impact on Availability:</strong> A successful DDoS attack targets a fleet of smart medical infusion pumps, taking them offline. This prevents hospitals from administering medication. The product is no longer available for its intended, critical function.</li>



<li><strong>Impact on Integrity:</strong> Malicious actors push an unauthorised firmware update to a line of connected vehicles. This update changes the braking system&#8217;s behaviour, making the cars unsafe. The integrity of the product&#8217;s core safety function has been compromised.</li>



<li><strong>Impact on Confidentiality:</strong> A data breach at a cloud service connected to a popular home security camera system exposes user login credentials and private video feeds. This is a severe breach of data confidentiality.</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>A critical insight here is that a severe incident can happen even without a vulnerability in <em>your</em> product. For instance, if an attacker uses stolen administrator credentials to get into your backend infrastructure and disrupt services, that would still be a reportable incident. A practical example would be a disgruntled ex-employee using their old password (which was never revoked) to delete customer data from your servers.</p>
</blockquote>



<h3 class="wp-block-heading">The Source of the Incident Matters Less Than the Impact</h3>



<p>It is absolutely essential to realise that the origin of the incident is secondary to its effect. Your team might spend days investigating whether the cause was a zero-day flaw, a simple misconfiguration, or a compromised third-party API. However, the CRA reporting clock starts ticking the moment you become aware of the <strong>severe impact</strong> itself.</p>



<p>This intense focus on impact means manufacturers must establish clear internal criteria for what constitutes a &#8216;severe&#8217; event. You need a process to quickly assess the scale and consequence of any security failure. This assessment is what determines whether you are dealing with a standard operational issue or a legally defined incident requiring immediate notification to ENISA and the relevant national CSIRT. To help with compliance, it is wise to familiarise yourself with resources like the <strong><a href="https://goregulus.com/cra-basics/national-vulnerability-database/">national vulnerability database</a></strong> and other key regulatory tools.</p>



<h2 class="wp-block-heading">Mastering CRA Reporting Timelines and Procedures</h2>



<p>The Cyber Resilience Act imposes strict, unforgiving reporting deadlines. For manufacturers, this means operational readiness isn&#8217;t optional—it&#8217;s mandatory. The distinction between an <strong>incident</strong> and a <strong>vulnerability</strong> is critical here, as each triggers a similar but distinct reporting timeline that you must be prepared to follow.</p>



<p>When your organisation becomes aware of either a severe incident or an <strong>actively exploited vulnerability</strong>, the clock starts. These tight timeframes are no accident; they reflect the EU&#8217;s view that the window to contain modern cyber threats is shrinking, demanding immediate and decisive action.</p>



<h3 class="wp-block-heading">Critical Reporting Deadlines</h3>



<p>The CRA framework is built on clear deadlines that dictate specific actions at set intervals. If you discover an <strong>actively exploited vulnerability</strong>, you must issue an early warning to ENISA within <strong>24 hours</strong>. This is followed by a more detailed notification within <strong>72 hours</strong> and a final report within <strong>14 days</strong>.</p>



<p>Severe incidents have their own track. An initial notification is also due within <strong>24 hours</strong>, with an incident report to follow within <strong>72 hours</strong>. However, the final, comprehensive report for a severe incident isn&#8217;t required until <strong>one month</strong> after you first become aware of it. You can learn more about how these <strong><a href="https://www.centerforcybersecuritypolicy.org/insights-and-research/eus-cyber-resilience-act-enters-into-force">EU cyber policies impact reporting on centerforcybersecuritypolicy.org</a></strong>.</p>
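


<p>Because the clock starts at the moment of awareness, it helps to compute every due date up front. The sketch below encodes the deadlines described above (approximating &#8220;one month&#8221; as 30 days); treat it as an internal planning aid, not a legal interpretation.</p>



<pre class="wp-block-code"><code>from datetime import datetime, timedelta

# Reporting deadlines from the moment of awareness, per the timelines above.
DEADLINES = {
    "actively_exploited_vulnerability": {
        "early_warning": timedelta(hours=24),
        "notification": timedelta(hours=72),
        "final_report": timedelta(days=14),
    },
    "severe_incident": {
        "early_warning": timedelta(hours=24),
        "incident_report": timedelta(hours=72),
        "final_report": timedelta(days=30),  # "one month", approximated
    },
}

def reporting_schedule(event_type, became_aware):
    """Return the absolute due date for each report stage."""
    return {stage: became_aware + delta
            for stage, delta in DEADLINES[event_type].items()}

became_aware = datetime(2026, 3, 20, 9, 0)
for stage, due in reporting_schedule("severe_incident", became_aware).items():
    print(stage, due.isoformat())
</code></pre>



<p>Feeding the computed schedule straight into your ticketing system means no deadline depends on someone remembering it under pressure.</p>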



<p>The image below shows the kinds of impacts that trigger the severe incident timeline, such as disruptions to availability, integrity, or confidentiality.</p>



<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="585" src="https://goregulus.com/wp-content/uploads/2026/03/cra-incident-vs-vulnerability-definition-incident-timeline.jpg-copy-1024x585.jpg" alt="A timeline showing the CRA Incident Impact with availability disruption, data integrity breach, and confidentiality compromise." class="wp-image-2147" srcset="https://goregulus.com/wp-content/uploads/2026/03/cra-incident-vs-vulnerability-definition-incident-timeline.jpg-copy-1024x585.jpg 1024w, https://goregulus.com/wp-content/uploads/2026/03/cra-incident-vs-vulnerability-definition-incident-timeline.jpg-copy-300x171.jpg 300w, https://goregulus.com/wp-content/uploads/2026/03/cra-incident-vs-vulnerability-definition-incident-timeline.jpg-copy-768x439.jpg 768w, https://goregulus.com/wp-content/uploads/2026/03/cra-incident-vs-vulnerability-definition-incident-timeline.jpg-copy.jpg 1312w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>This makes it clear: any event that causes a significant disruption to your product&#8217;s security functions kicks off a mandatory reporting sequence.</p>



<h3 class="wp-block-heading">What to Include at Each Stage</h3>



<p>Meeting the deadlines is one thing, but knowing what information ENISA and the national CSIRT expect is another. Your reports need to evolve at each stage.</p>



<ul class="wp-block-list">
<li><strong>24-Hour Alert:</strong> Think of this as a quick &#8220;heads-up.&#8221; It should identify the affected product and the basic nature of the threat (e.g., &#8220;actively exploited RCE vulnerability in Product X&#8221; or &#8220;severe service availability incident affecting Platform Y&#8221;). A practical example: &#8220;We have confirmed an actively exploited Remote Code Execution (RCE) vulnerability, CVE-2026-12345, in the firmware of our &#8216;SmartHome Hub 3.0&#8217; device.&#8221;</li>



<li><strong>72-Hour Update:</strong> Now you need to add more substance. Provide your initial findings on the root cause, assess the potential impact, and detail the mitigation steps you&#8217;ve already taken or have planned. For instance: &#8220;The root cause is a buffer overflow in the device&#8217;s web server. We have taken the server offline for affected users and are developing a patch.&#8221;</li>



<li><strong>Final Report:</strong> This is your full post-mortem. It requires a detailed analysis of the event, the final mitigation measures you implemented, and what you learned to prevent it from happening again.</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Establishing a &#8220;war room&#8221; protocol is essential for success. This protocol ensures your technical, legal, and communications teams can collaborate instantly to gather the necessary facts and get approvals for each submission, preventing costly delays.</p>
</blockquote>



<p>To make this manageable under pressure, organisations should prepare pre-approved templates for each reporting stage. Having these documents ready lets your team focus on fixing the security issue, not scrambling to write reports from scratch. For more detail, see our guide to <strong><a href="https://goregulus.com/uncategorized/cra-reporting-obligations-article-14/">CRA reporting obligations under Article 14</a></strong>.</p>



<h2 class="wp-block-heading">Building Your Incident and Vulnerability Response Workflows</h2>



<p>To put the Cyber Resilience Act into practice, you need two separate but interconnected workflows. The whole game is about knowing what triggers each one. Your vulnerability workflow is for <em>potential</em> threats, while the incident workflow kicks in only when a security failure has <em>already happened</em>.</p>



<p>Getting this separation right is crucial for a solid <strong>CRA incident vs vulnerability definition</strong> in your day-to-day operations.</p>



<p>For a strong foundation, it&#8217;s worth understanding the fundamentals of <a href="https://cybercommand.com/crafting-your-incident-response-plan-for-max-efficiency/">Crafting Your Incident Response Plan</a>. This sets the stage for the specific, high-stakes demands of the CRA.</p>



<h3 class="wp-block-heading">The Vulnerability Management Workflow</h3>



<p>Your vulnerability management process can&#8217;t be a one-time project; it has to be a continuous cycle. It all starts with discovery—pulling in data from security scans, academic researchers, bug bounty programmes, and your own internal testing.</p>



<p>But the CRA slots in a critical new step. You must have a formal process for <strong>continuous monitoring</strong> to check if any vulnerability you&#8217;ve found is being actively exploited in the wild. This is the specific trigger that starts the CRA’s notorious <strong>24-hour reporting clock</strong> for a vulnerability.</p>



<p>A practical vulnerability workflow checklist should look something like this:</p>



<ul class="wp-block-list">
<li><strong>Discovery:</strong> A new vulnerability is identified, no matter the source. <strong>Example:</strong> A researcher reports a cross-site scripting (XSS) flaw in your product&#8217;s web dashboard via your bug bounty program.</li>



<li><strong>Triage:</strong> You assess its severity (using CVSS, for example) and the potential impact on your business and customers. <strong>Example:</strong> You rate the XSS flaw as &#8216;High&#8217; severity (CVSS 7.5).</li>



<li><strong>Monitor for Exploitation:</strong> Your team actively scours threat intelligence feeds and public sources, looking for any sign of active exploitation.</li>



<li><strong>Report (if exploited):</strong> If you confirm active exploitation, you immediately trigger the 24-hour reporting process to ENISA and your national CSIRT.</li>



<li><strong>Remediate:</strong> You develop, test, and deploy a patch to fix the flaw without undue delay—and you do this regardless of its exploitation status.</li>
</ul>
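<p>The trigger logic in this checklist can be sketched in a few lines of Python. This is an illustrative sketch only, not an official CRA tool; the <code>Vulnerability</code> record, its fields, and the CVE identifier are placeholders of our own:</p>

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class Vulnerability:
    """Minimal record for tracking a flaw through the workflow above."""
    cve_id: str
    cvss_score: float                        # triage severity, 0.0-10.0
    actively_exploited: bool = False         # confirmed exploitation in the wild
    exploitation_confirmed_at: Optional[datetime] = None

def reporting_deadline(vuln: Vulnerability) -> Optional[datetime]:
    """The urgent 24-hour clock starts only on confirmed active exploitation."""
    if vuln.actively_exploited and vuln.exploitation_confirmed_at:
        return vuln.exploitation_confirmed_at + timedelta(hours=24)
    return None  # no urgent reporting duty yet; still remediate without undue delay

# The XSS example from the checklist: 'High' severity, but not yet exploited.
xss = Vulnerability(cve_id="CVE-2026-0001", cvss_score=7.5)
assert reporting_deadline(xss) is None

# Active exploitation is confirmed -> the 24-hour clock starts.
xss.actively_exploited = True
xss.exploitation_confirmed_at = datetime(2026, 4, 1, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(xss))  # 2026-04-02 09:00:00+00:00
```

<p>The key point the sketch encodes is that severity alone never starts the clock; only evidence of active exploitation does.</p>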



<h3 class="wp-block-heading">The Incident Response Workflow</h3>



<p>The incident response workflow, on the other hand, is purely reactive. It’s triggered by a security failure that causes a severe impact. While your technical teams are scrambling to contain, eradicate, and recover, the CRA requires a parallel compliance track that simply cannot be an afterthought.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>From the very moment a severe incident is suspected, your compliance activities have to begin. Your legal and communications teams are no longer secondary responders; they are now part of the immediate first-step protocol to meet CRA obligations.</p>
</blockquote>



<p>An effective incident response checklist under the CRA must bake in compliance from the very beginning:</p>



<ol class="wp-block-list">
<li><strong>Detection &amp; Initial Analysis:</strong> An alert is triggered, maybe a widespread service outage. Your first job is to confirm it’s a security incident. <strong>Example:</strong> Your monitoring system alerts that 50% of your European smart plugs are offline.</li>



<li><strong>Immediate Notification &amp; Containment:</strong>
<ul class="wp-block-list">
<li>Notify the legal team to start a CRA impact assessment.</li>



<li>Engage your pre-assigned communications team.</li>



<li>Start technical containment to stop the breach from spreading. <strong>Example:</strong> You block the attacking IP addresses at the firewall.</li>
</ul>
</li>



<li><strong>Reporting:</strong> Based on the legal team’s assessment of a ‘severe incident’, you submit the 24-hour alert to ENISA and the relevant CSIRT.</li>



<li><strong>Eradication &amp; Recovery:</strong> Your technical team removes the threat from your systems and gets normal operations back online.</li>



<li><strong>Follow-up Reporting:</strong> You submit the required 72-hour and final reports to close the loop.</li>
</ol>
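<p>To keep the order of these steps enforceable in tooling, the checklist can be modelled as a simple ordered sequence. A minimal sketch; the phase names are our own shorthand for the numbered steps above:</p>

```python
# The five numbered steps above, as an ordered, machine-checkable list.
PHASES = [
    "detection_and_initial_analysis",
    "notification_and_containment",
    "reporting",
    "eradication_and_recovery",
    "follow_up_reporting",
]

def next_phase(completed):
    """Return the first step not yet completed, enforcing the order above."""
    for phase in PHASES:
        if phase not in completed:
            return phase
    return None  # checklist fully worked through

assert next_phase([]) == "detection_and_initial_analysis"
assert next_phase(PHASES) is None
```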



<p>Building these two distinct workflows ensures you can manage both potential and actual threats effectively, keeping you on the right side of the regulation. To learn more about your specific duties, you can read our detailed article on <strong><a href="https://goregulus.com/cra-compliance/cra-manufacturer-obligations/">CRA manufacturer obligations</a></strong>.</p>



<h2 class="wp-block-heading">How Regulus Automates Your CRA Reporting and Compliance</h2>



<p>Meeting the Cyber Resilience Act&#8217;s demanding timelines and nuanced definitions is a major challenge for any manufacturer. Trying to manually track obligations, classify events, and prepare reports under pressure is a recipe for error. This is where a specialised platform like Regulus comes in, turning compliance from a frantic, manual burden into an automated, auditable process.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-incident-vs-vulnerability-definition-automation-process.jpg" alt="Regulus Automation diagram showing a central gear-shield leading to classify, templates, and SBOM outputs."/></figure>



<p>The platform has built-in guidance to help your teams correctly classify any finding. It removes the ambiguity, making sure you can confidently tell the difference between a standard bug, a reportable ‘actively exploited vulnerability’, or a ‘severe incident’. This is a critical part of operationalising the <strong>CRA incident vs vulnerability definition</strong> within your security programme.</p>



<h3 class="wp-block-heading">Accelerated Reporting and Clear Roadmaps</h3>



<p>To hit the aggressive reporting deadlines, Regulus provides pre-built templates for the <strong>24-hour</strong>, <strong>72-hour</strong>, and final submissions. These templates are already structured with the required fields for ENISA and national CSIRTs, which dramatically cuts down the time your team spends on documentation. Having well-defined <a href="https://docsbot.ai/article/incident-management-procedures">Incident Management Procedures</a> is essential, and our templates help structure that response when it matters most.</p>



<p>For instance, the moment an actively exploited vulnerability is confirmed, your team can instantly generate the <strong>24-hour</strong> alert. The template guides them to input the essential details—like the product affected and the nature of the flaw—without having to dig through dense regulation text. This ensures a fast, compliant initial response every single time.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Regulus maps your product&#8217;s classification directly to its specific post-market surveillance duties. This provides your team with a clear, actionable roadmap for monitoring, detection, and reporting that aligns precisely with your legal obligations.</p>
</blockquote>



<p>This automated mapping gives you a clear path to compliance. If your smart home device is classified as ‘Important’, the platform automatically outlines the heightened monitoring and documentation duties required. Instead of deciphering legal jargon, your team gets a clear, actionable checklist.</p>



<p>This transforms complex regulatory requirements into a straightforward, manageable workflow. It&#8217;s a structured approach that moves your organisation beyond reactive compliance and towards genuine security by design.</p>



<h2 class="wp-block-heading">Frequently Asked Questions About CRA Compliance</h2>



<p>Once you get past the high-level requirements of the Cyber Resilience Act, the real, practical questions start to surface. Here are a few common scenarios we see and how to handle them correctly to maintain compliance.</p>



<h3 class="wp-block-heading">When Does the 24-Hour Clock Start</h3>



<p>Let’s clear up a common point of confusion: the <strong>24-hour</strong> reporting clock for vulnerabilities. Does it start ticking the moment your internal team discovers a new flaw?</p>



<p>No, it doesn’t. The CRA’s urgent reporting duty is tied specifically to vulnerabilities that you know are being <strong>‘actively exploited’</strong> out in the wild.</p>



<p><strong>Practical Example:</strong> Your penetration testing team finds a critical vulnerability on Monday. You begin work on a patch. On Wednesday, you get a report from a threat intelligence firm that an attacker group is now using that exact vulnerability to target companies. The 24-hour clock starts on Wednesday, the moment you became aware of active exploitation.</p>



<p>That doesn&#8217;t mean you can just sit on it, though. You are still obligated under the CRA to fix all identified vulnerabilities without ‘undue delay.’ Your internal triage process should prioritise the fix based on its potential severity, but that urgent reporting clock only starts with active exploitation.</p>



<h3 class="wp-block-heading">Can a Single Event Be Both a Vulnerability and an Incident</h3>



<p>Yes, and your response workflows absolutely must account for this overlap. A single event can easily trigger both reporting requirements.</p>



<p>Imagine a flaw in your product is discovered to be <strong>‘actively exploited’</strong>. That immediately starts the clock on the vulnerability reporting track. But if that same exploitation results in a <strong>‘severe incident’</strong>—like a wide-scale service outage or a major data breach—it also kicks off the separate incident reporting process. You’ll then be managing both tracks in parallel, each with its own deadlines and reporting details.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>This distinction is a core part of the <strong>CRA incident vs vulnerability definition</strong>. The discovery of active exploitation starts one clock; the resulting severe impact starts another. Your teams have to be ready to run both processes at the same time.</p>
</blockquote>
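<p>The dual-clock idea can be made concrete with a small helper. This is an illustrative sketch, assuming both early warnings fall due 24 hours after their respective triggers; the function and field names are our own:</p>

```python
from datetime import datetime, timedelta, timezone

def open_reporting_tracks(exploited_at=None, incident_at=None):
    """Each trigger starts its own 24-hour early-warning clock.
    One event can open both tracks, and they run in parallel."""
    tracks = {}
    if exploited_at is not None:
        tracks["vulnerability_alert_due"] = exploited_at + timedelta(hours=24)
    if incident_at is not None:
        tracks["incident_alert_due"] = incident_at + timedelta(hours=24)
    return tracks

# Exploitation is confirmed at 08:00; the resulting outage is assessed
# as a 'severe incident' at 11:30 the same day -> two separate deadlines.
exploited = datetime(2026, 5, 4, 8, 0, tzinfo=timezone.utc)
incident = datetime(2026, 5, 4, 11, 30, tzinfo=timezone.utc)
tracks = open_reporting_tracks(exploited_at=exploited, incident_at=incident)
print(tracks["vulnerability_alert_due"])  # 2026-05-05 08:00:00+00:00
print(tracks["incident_alert_due"])       # 2026-05-05 11:30:00+00:00
```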



<h3 class="wp-block-heading">Who Reports on Open-Source Flaws</h3>



<p>You do. The CRA is crystal clear: the responsibility lies with the manufacturer placing the product on the EU market.</p>



<p>If a third-party or open-source library buried in your product has an actively exploited vulnerability, you are the one legally obligated to report it to ENISA. This is exactly why maintaining a complete Software Bill of Materials (SBOM) and continuously monitoring your dependencies has become a non-negotiable part of CRA compliance.</p>



<p><strong>Practical Example:</strong> Your connected doorbell uses an open-source library for video streaming. A critical, actively exploited vulnerability (like Log4Shell) is announced in that library. Even though you didn&#8217;t write the vulnerable code, you are the manufacturer of the doorbell. Therefore, you are responsible for reporting the exploited vulnerability to ENISA and issuing a patch for your product.</p>
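<p>This is exactly the kind of check an SBOM makes automatable. The sketch below cross-references component names and versions against a feed of known actively exploited flaws; the data shapes are simplified illustrations, not a real SBOM or KEV format:</p>

```python
# A minimal SBOM: just component names and pinned versions.
sbom = [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "libvideo-stream", "version": "3.2.0"},
]

# A feed of known actively exploited flaws, keyed by (name, version).
exploited_feed = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228",  # Log4Shell
}

def exploited_components(sbom, feed):
    """Return every SBOM entry that matches a known actively exploited flaw."""
    return [
        (c["name"], c["version"], feed[(c["name"], c["version"])])
        for c in sbom
        if (c["name"], c["version"]) in feed
    ]

print(exploited_components(sbom, exploited_feed))
# [('log4j-core', '2.14.1', 'CVE-2021-44228')]
```

<p>A match against the feed is what would start your 24-hour clock, which is why this comparison needs to run continuously, not once per release.</p>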



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Navigating these complex requirements demands clarity and automation. <strong>Regulus</strong> provides a step-by-step roadmap to prepare for the CRA, with built-in guidance, product classification, and reporting templates to ensure your teams are always ready. Gain confidence in your compliance strategy by visiting <a href="https://goregulus.com">https://goregulus.com</a>.</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-incident-vs-vulnerability-definition/">CRA Incident vs Vulnerability Definition: A Practical Guide for 2026</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>CRA exploited vulnerability reporting 24 hours: A 2026 Practical Guide</title>
		<link>https://goregulus.com/cra-basics/cra-exploited-vulnerability-reporting-24-hours/</link>
		
		<dc:creator><![CDATA[Igor Smith]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 07:40:38 +0000</pubDate>
				<category><![CDATA[CRA Basics]]></category>
		<category><![CDATA[CRA Exploit Reporting]]></category>
		<category><![CDATA[Cyber Resilience Act]]></category>
		<category><![CDATA[ENISA Reporting]]></category>
		<category><![CDATA[EU Compliance]]></category>
		<category><![CDATA[Vulnerability Management]]></category>
		<guid isPermaLink="false">https://goregulus.com/uncategorized/cra-exploited-vulnerability-reporting-24-hours/</guid>

					<description><![CDATA[<p>The Cyber Resilience Act (CRA) introduces a strict CRA exploited vulnerability reporting 24 hours deadline. This isn&#8217;t just guidance; it&#8217;s a legal obligation under Article 11 that transforms product security into a race against the clock the moment you learn a flaw is being actively exploited. Decoding The CRA&#8217;s 24-Hour Reporting Mandate The Cyber Resilience [&#8230;]</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-exploited-vulnerability-reporting-24-hours/">CRA exploited vulnerability reporting 24 hours: A 2026 Practical Guide</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The Cyber Resilience Act (CRA) introduces a strict <strong>CRA exploited vulnerability reporting 24 hours</strong> deadline. This isn&#8217;t just guidance; it&#8217;s a legal obligation under Article 11 that transforms product security into a race against the clock the moment you learn a flaw is being actively exploited.</p>



<h2 class="wp-block-heading">Decoding The CRA&#8217;s 24-Hour Reporting Mandate</h2>



<p>The Cyber Resilience Act fundamentally rewrites the rules for manufacturers of products with digital elements sold in the EU. The era of discretionary vulnerability disclosure is over. In its place, the CRA imposes a legally binding framework prioritising speed and transparency, especially when a vulnerability is being used by attackers in the wild.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-exploited-vulnerability-reporting-24-hours-reporting-timeline.jpg" alt="Sketch of Europe map with clocks, showing 24h, 72h timeline, connecting ENISA and CSIRT."/></figure>



<p>At the heart of this new reality is the <strong>24-hour</strong> reporting deadline. This rule applies with laser focus to <strong>&#8216;actively exploited&#8217; vulnerabilities</strong>—flaws that have graduated from a theoretical risk to a live attack vector. The second you have reliable evidence of an exploit, the clock starts ticking.</p>



<h3 class="wp-block-heading">When The Clock Starts Ticking</h3>



<p>Figuring out when you are officially &#8220;aware&#8221; is absolutely critical. This isn&#8217;t limited to your internal security team discovering an issue. Awareness can be triggered from several directions:</p>



<ul class="wp-block-list">
<li><strong>Internal Detection:</strong> Your own monitoring systems or Security Operations Centre (SOC) spots an intrusion that takes advantage of a product flaw. For instance, your SIEM flags multiple authentication failures followed by a successful, but unauthorised, privilege escalation on your connected medical device&#8217;s backend server.</li>



<li><strong>Public Reports:</strong> A credible security researcher or news outlet publishes a proof-of-concept or report showing a vulnerability is being used in attacks. A practical example would be a well-known cybersecurity blog posting evidence that a flaw in your smart lock&#8217;s firmware is being used by a botnet.</li>



<li><strong>Partner Notification:</strong> A customer or supply chain partner informs you that their systems were breached through your product. For example, a major retail chain using your smart POS terminals reports that they&#8217;ve experienced a data breach traced back to a vulnerability in your device.</li>
</ul>



<p>This demanding deadline isn&#8217;t arbitrary. Its purpose is to stand up a rapid, EU-wide response network. By compelling quick notification to the EU&#8217;s cybersecurity agency, ENISA, and the relevant national Computer Security Incident Response Teams (CSIRTs), the CRA aims to contain threats before they escalate into widespread damage. A crucial first step in preparing for this is <a href="https://blog.ctoinput.com/justice-organization-breach-notification-timeline/">understanding your breach notification timeline</a>.</p>



<h3 class="wp-block-heading">The CRA Reporting Timeline In Action</h3>



<p>Let&#8217;s walk through a scenario. It&#8217;s September 11, 2026, and a Spanish manufacturer of smart home devices suddenly detects a critical firmware vulnerability. Worse, it’s being actively exploited by cybercriminals to gain unauthorised access to user data across thousands of units in the EU.</p>



<p>Under the CRA, they have just <strong>24 hours</strong> to report this to their national CSIRT and to ENISA. This isn&#8217;t a hypothetical exercise—it becomes a core legal obligation on that exact date, as laid out in Article 11.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The initial 24-hour alert is just the starting pistol. The CRA mandates a multi-stage reporting process designed to give authorities progressively more detail as your own investigation develops.</p>
</blockquote>



<p>To help you visualise the cadence, here&#8217;s a quick breakdown of what authorities expect and when.</p>



<h3 class="wp-block-heading">CRA Reporting Timeline at a Glance</h3>



<p>This table summarises the mandatory reporting deadlines for actively exploited vulnerabilities under Article 11. It&#8217;s designed to give you a clear, at-a-glance view of your obligations once the clock starts.</p>



<figure class="wp-block-table"><table><tr>
<th align="left">Deadline</th>
<th align="left">Required Action</th>
<th align="left">Key Information to Include</th>
</tr>
<tr>
<td align="left"><strong>Within 24 hours</strong></td>
<td align="left">Early Warning</td>
<td align="left">Initial alert about the actively exploited vulnerability. Name of manufacturer, product affected, nature of the vulnerability.</td>
</tr>
<tr>
<td align="left"><strong>Within 72 hours</strong></td>
<td align="left">Vulnerability Notification</td>
<td align="left">A more detailed update with severity assessment (e.g., CVSS score), potential impact, and any available mitigation advice.</td>
</tr>
<tr>
<td align="left"><strong>Within 14 days</strong></td>
<td align="left">Final Report</td>
<td align="left">Comprehensive analysis including root cause, full mitigation details, and steps taken to prevent recurrence.</td>
</tr>
</table></figure>



<p>This multi-phase approach acknowledges a core reality of incident response: you rarely have all the answers on day one. It ensures authorities get immediate, actionable information, followed by deeper context as it becomes available.</p>
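<p>The cadence in the table lends itself to a simple deadline helper. An illustrative sketch, assuming all three deadlines are computed from the moment of awareness; the stage names are our own labels for the table rows:</p>

```python
from datetime import datetime, timedelta, timezone

# The three Article 11 stages from the table above.
STAGES = {
    "early_warning": timedelta(hours=24),
    "vulnerability_notification": timedelta(hours=72),
    "final_report": timedelta(days=14),
}

def article_11_deadlines(aware_at):
    """Compute all three submission deadlines from the moment of awareness."""
    return {stage: aware_at + delta for stage, delta in STAGES.items()}

# The smart-home scenario: awareness of active exploitation on 11 September 2026.
aware = datetime(2026, 9, 11, 10, 0, tzinfo=timezone.utc)
for stage, due in article_11_deadlines(aware).items():
    print(f"{stage}: {due.isoformat()}")
```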



<p>For a deeper dive into which of your products fall under these rules, check out our guide on <a href="https://goregulus.com/cra-basics/cyber-resilience-act-applicability/">Cyber Resilience Act applicability</a>. Building the processes to meet this timeline is no longer a &#8220;nice-to-have&#8221;—it&#8217;s a foundational requirement for EU market access.</p>



<h2 class="wp-block-heading">Your First-Hour Playbook for Detection and Triage</h2>



<p>Your ability to meet the CRA’s tight <strong>24-hour</strong> reporting deadline for an exploited vulnerability comes down to what happens in the first sixty minutes. That first hour isn’t for planning or debate; it&#8217;s for pure execution. Only a well-rehearsed playbook can get you from initial detection to a defensible reporting decision within that critical timeframe.</p>



<p>It all starts with clear internal triggers. These aren&#8217;t just vague alerts but specific, pre-defined events that kick off your &#8220;CRA Incident Clock&#8221; automatically. Leaving it to guesswork is a sure-fire way to miss the deadline and risk non-compliance.</p>



<h3 class="wp-block-heading">Establishing Your Incident Triggers</h3>



<p>You need a mix of automated and manual signals that immediately escalate a potential security event. Think of them as tripwires for your rapid response team. The goal is to shrink the time from signal to awareness down to mere minutes.</p>



<p>Here are a few practical examples of what these triggers look like in the real world:</p>



<ul class="wp-block-list">
<li><strong>Anomalous API Calls:</strong> Imagine your monitoring system flags a sudden, sustained spike in failed login attempts on your connected thermostat’s firmware. A pre-set rule could automatically treat any jump over a <strong>50% increase</strong> within a 10-minute window as a critical incident.</li>



<li><strong>Threat Intelligence Correlation:</strong> A feed like CISA&#8217;s Known Exploited Vulnerabilities (KEV) catalogue adds a new vulnerability. An automated script should instantly check this against your Software Bill of Materials (SBOM) to see if a library in your product is affected. A match instantly creates a high-priority ticket for your security team.</li>



<li><strong>High-Fidelity Alerts:</strong> You get an alert from your Intrusion Detection System (IDS). But this isn&#8217;t just a potential probe; the system confirms a successful exploit by detecting command-and-control (C2) traffic coming from one of your deployed IoT sensors. This type of alert would be pre-classified as an automatic trigger for your CRA response plan.</li>
</ul>
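<p>The first trigger above, a jump of more than <strong>50%</strong> in failed logins within a 10-minute window, is simple to encode. This sketch assumes you already aggregate failure counts per window; the function name and threshold parameter are our own:</p>

```python
def spike_trigger(baseline_failures, window_failures, threshold=0.5):
    """Fire when failed logins in the current 10-minute window exceed
    the baseline by more than the threshold (default: a 50% increase)."""
    if baseline_failures == 0:
        return window_failures > 0  # any failures against a zero baseline
    increase = (window_failures - baseline_failures) / baseline_failures
    return increase > threshold

assert spike_trigger(100, 160) is True   # +60% -> critical incident escalation
assert spike_trigger(100, 140) is False  # +40% -> logged, but below the tripwire
```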



<p>The moment a trigger is hit, your rapid response team needs to assemble. This shouldn&#8217;t be a last-minute scramble to figure out who to call. Roles must be pre-assigned so everyone knows exactly what to do the second an alert fires.</p>



<h3 class="wp-block-heading">Confirming an Active Exploit</h3>



<p>The single most important call you&#8217;ll make in this first hour is confirming whether a vulnerability is truly &#8220;actively exploited.&#8221; The CRA is specific here: you need reliable evidence of malicious use in the wild. A theoretical proof-of-concept isn&#8217;t enough to start the clock.</p>



<p>This is where your team must connect the dots between different data sources to build a solid, evidence-based case.</p>



<p>Let’s say a security researcher publishes a blog post detailing a new remote code execution (RCE) vulnerability in a popular open-source library used in your smart factory controllers. Your clock doesn&#8217;t start quite yet. But your team immediately goes on the hunt, correlating the public report with telemetry data from customer sites. They quickly find anomalous outbound network connections from several controllers to an IP address known to be used by a specific threat actor.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>This is your moment of confirmation. A public vulnerability has just been linked to real-world, unauthorised activity impacting your products. Your <strong>24-hour</strong> reporting clock has officially begun, and your internal documentation must capture this precise decision point.</p>
</blockquote>



<p>To properly triage the severity, most teams rely on frameworks like the Common Vulnerability Scoring System (CVSS). It provides a standardised, numerical score based on factors like exploitability and impact, which is invaluable for prioritisation.</p>



<p>For example, a vulnerability that is easy to exploit over the network and gives an attacker complete control of a device would receive a high CVSS score (e.g., 9.8 Critical), making it an immediate priority. This score gives you a clear rating of severity, helping you prioritise your response and accurately describe the vulnerability to the authorities. For a deeper dive into setting up the necessary systems for this level of visibility, our guide on <a href="https://goregulus.com/cra-requirements/cra-logging-monitoring-requirements/">CRA logging and monitoring requirements</a> provides essential guidance.</p>
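<p>Because the CVSS v3 severity bands are fixed, the mapping from score to rating is easy to automate. A small helper; the function name is our own:</p>

```python
def cvss_rating(score):
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score > 0.0:
        return "Low"
    return "None"

assert cvss_rating(9.8) == "Critical"  # the network-exploitable RCE example
assert cvss_rating(7.5) == "High"
```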



<p>Ultimately, your first-hour playbook must be a repeatable, evidence-driven process. The objective is to move from the initial signal to a confirmed exploitation with a clear audit trail. This ensures your <strong>CRA exploited vulnerability reporting 24 hours</strong> notification is not only on time but also accurate and able to stand up to regulatory scrutiny.</p>
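<p>That audit trail is easiest to defend when each decision point is captured as an append-only, timestamped record. A minimal sketch; the record fields are our own, and a real deployment would write to tamper-evident storage rather than return a string:</p>

```python
import json
from datetime import datetime, timezone

def log_decision(event, evidence):
    """Capture one decision point, with its supporting evidence, as JSON."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "evidence": evidence,
    }
    return json.dumps(record)

entry = log_decision(
    "active exploitation confirmed; 24-hour reporting clock started",
    ["public RCE write-up on the open-source library",
     "anomalous outbound connections from deployed controllers to a known threat-actor IP"],
)
print(entry)
```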



<h2 class="wp-block-heading">Crafting Your Initial 24-Hour ENISA Notification</h2>



<p>The first report you fire off to ENISA and the national CSIRT is a big deal. It sets the tone for the whole incident. Meeting the <strong>CRA’s 24-hour reporting deadline</strong> for an exploited vulnerability comes down to having a clear, fast, and precise process. This first notification isn&#8217;t a full technical deep-dive; think of it as a critical early warning.</p>



<p>Your job here is to give the authorities just enough information to grasp the situation. You&#8217;re reporting the &#8220;what,&#8221; not the &#8220;how.&#8221; The last thing you want is to give away technical details that could help other threat actors.</p>



<p>This workflow is what you should be aiming for in that first critical hour, moving from a raw signal to a confident decision to report.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="585" src="https://goregulus.com/wp-content/uploads/2026/03/cra-exploited-vulnerability-reporting-24-hours-triage-process-copy-1024x585.jpg" alt="A workflow diagram detailing the First-Hour Triage Process: Detect, Triage, Confirm." class="wp-image-2115" srcset="https://goregulus.com/wp-content/uploads/2026/03/cra-exploited-vulnerability-reporting-24-hours-triage-process-copy-1024x585.jpg 1024w, https://goregulus.com/wp-content/uploads/2026/03/cra-exploited-vulnerability-reporting-24-hours-triage-process-copy-300x171.jpg 300w, https://goregulus.com/wp-content/uploads/2026/03/cra-exploited-vulnerability-reporting-24-hours-triage-process-copy-768x439.jpg 768w, https://goregulus.com/wp-content/uploads/2026/03/cra-exploited-vulnerability-reporting-24-hours-triage-process-copy.jpg 1312w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>It really is that straightforward. You detect an anomaly, triage its importance, and confirm it&#8217;s an active exploit. This is the core loop that feeds directly into your notification process.</p>



<h3 class="wp-block-heading">What to Include in Your First Report</h3>



<p>Keep your <strong>24-hour report</strong> short and factual. Stick to the absolute essentials required under <strong>Article 11</strong>. Now is not the time to overshare sensitive technical details; doing so just creates more risk.</p>



<p>Here’s what you absolutely must include:</p>



<ul class="wp-block-list">
<li><strong>Manufacturer Identification:</strong> Your company&#8217;s legal name and contact information.</li>



<li><strong>Product Identification:</strong> The specific product name, model, and the version(s) affected. Be precise. &#8220;SmartLock Pro v2.1 firmware&#8221; is good; &#8220;our smart locks&#8221; is not.</li>



<li><strong>Nature of the Vulnerability:</strong> A high-level description of the flaw itself. Focus on the <em>impact</em>, not the exploit mechanics.</li>
</ul>



<p>For example, instead of getting into the weeds about a specific buffer overflow, you’d simply state: &#8220;A remote code execution (RCE) vulnerability allowing an unauthenticated attacker to take control of the device.&#8221; That gives authorities all the context they need without handing attackers a blueprint.</p>
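<p>One way to enforce that discipline is to whitelist the fields your initial report may contain, so exploit mechanics physically cannot leak into it. A sketch under that assumption; the field names are our own simplification of the early-warning content, reusing the SmartLock Pro example above:</p>

```python
# Only high-level facts belong in the 24-hour early warning.
ALLOWED_FIELDS = {"manufacturer", "contact", "product", "version", "impact_summary"}

def build_early_warning(**fields):
    """Assemble the initial report, rejecting anything outside the whitelist
    (e.g. proof-of-concept code or exploit mechanics)."""
    unexpected = set(fields) - ALLOWED_FIELDS
    if unexpected:
        raise ValueError(f"not part of the initial report: {sorted(unexpected)}")
    return fields

report = build_early_warning(
    manufacturer="Example Manufacturer B.V.",
    product="SmartLock Pro",
    version="v2.1 firmware",
    impact_summary=("A remote code execution (RCE) vulnerability allowing an "
                    "unauthenticated attacker to take control of the device."),
)
```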



<h3 class="wp-block-heading">The Right Channel for Reporting</h3>



<p>The CRA wants to avoid a communication mess, so it mandates a single, centralised system. All your reports must go through the ENISA-managed <strong>Single Reporting Platform (SRP)</strong>. From there, the platform will route your notification to the right national CSIRT coordinator, based on where your company has its main establishment in the EU.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The SRP is meant to be the single source of truth for your reporting. Getting your team familiar with its interface and requirements <em>before</em> an incident is non-negotiable. Don&#8217;t wait for a crisis to log in for the first time.</p>
</blockquote>



<p>While the process sounds simple, trying to navigate a new portal when the clock is ticking is a recipe for disaster. This is exactly where a good compliance platform comes in. Tools like Regulus can pre-populate reports with verified product and manufacturer details, saving you precious minutes and cutting down the chance of human error under pressure.</p>



<h3 class="wp-block-heading">An Example of a Clear Notification</h3>



<p>Let&#8217;s say you manufacture a line of connected security cameras and your team just confirmed an active exploit. Here’s what the core information for your initial ENISA report could look like:</p>



<figure class="wp-block-table"><table><tr>
<th align="left">Field</th>
<th align="left">Example Entry</th>
</tr>
<tr>
<td align="left"><strong>Manufacturer</strong></td>
<td align="left">SecureView Solutions S.L.</td>
</tr>
<tr>
<td align="left"><strong>Product &amp; Version</strong></td>
<td align="left">SecureView Cam 4K, Firmware v1.3.5</td>
</tr>
<tr>
<td align="left"><strong>Nature of Vulnerability</strong></td>
<td align="left">An authentication bypass vulnerability allowing unauthorised access to the live video stream.</td>
</tr>
<tr>
<td align="left"><strong>Mitigation Status</strong></td>
<td align="left">We are developing a patch. No immediate user-side mitigation is available at this time.</td>
</tr>
</table></figure>



<p>This format is clean, simple, and delivers exactly what’s needed. It says who you are, what product is at risk, and what the threat is in plain language. Notice there&#8217;s no deep technical jargon or speculation about the attacker. This level of precision is perfect for your initial report. You can find more on the full scope of your duties in our overview of <a href="https://goregulus.com/uncategorized/cra-reporting-obligations-article-14/">CRA reporting obligations under Article 14</a>.</p>



<p>Your first notification is just the opening move in a longer conversation with regulators. By keeping it focused, factual, and timely, you project control and build a solid foundation for effective coordination in the days that follow.</p>



<h2 class="wp-block-heading">Coordinating With CSIRTs and ENISA After Reporting</h2>



<p>Hitting &#8216;submit&#8217; on your initial report is just the beginning. The real work in handling a <strong>CRA exploited vulnerability</strong> starts with the follow-up dialogue. Effectively managing this coordination with national Computer Security Incident Response Teams (CSIRTs) and ENISA is what separates a smooth process from a compliance headache.</p>



<p>Think of your initial <strong>24-hour</strong> notification as the cover page. The subsequent <strong>72-hour</strong> and <strong>14-day</strong> updates are the chapters that build out the full story, each adding critical detail. Your designated national CSIRT becomes your primary point of contact and coordination hub for this entire process.</p>



<h3 class="wp-block-heading">Understanding Agency Roles and Expectations</h3>



<p>Once your report hits the Single Reporting Platform (SRP), it’s routed directly to your national CSIRT. Their job is to analyse the threat on the ground, coordinate with other EU member states if the impact is wider, and assess the risk to the entire EU ecosystem. ENISA operates at a higher level, aggregating data from all over the Union to spot systemic risks and attack trends.</p>



<p>What they need from you is proactive, transparent communication. Don&#8217;t wait for them to chase you. Be ready for their follow-up questions, because they will come. For example, after your 24-hour report, a CSIRT might immediately ask for specific indicators of compromise (IoCs) you&#8217;ve found—like malicious file hashes or attacker IP addresses—so they can warn other organisations. Having your technical and legal points of contact on standby is non-negotiable.</p>
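<p>Being ready for that request is mostly a matter of keeping your IoCs in a shareable shape. A minimal sketch with placeholder indicators (documentation-range IPs and an example hash); real exchanges would more likely use a structured standard such as STIX:</p>

```python
import csv
import io

def iocs_to_csv(iocs):
    """Flatten collected IoCs into simple type,value rows for a CSIRT."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["type", "value"])
    for ioc_type, values in iocs.items():
        for value in values:
            writer.writerow([ioc_type, value])
    return buf.getvalue()

# Example indicators only: documentation-range IPs, placeholder SHA-256.
iocs = {
    "attacker_ips": ["203.0.113.7", "198.51.100.22"],
    "file_hashes": ["e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"],
}
print(iocs_to_csv(iocs))
```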



<h3 class="wp-block-heading">Managing Follow-Up Communications</h3>



<p>The CRA itself sets the communication tempo. Your <strong>72-hour</strong> notification must build on the initial alert, delivering a severity assessment and concrete details on any immediate mitigations you&#8217;ve put in place.</p>



<p>The <strong>14-day</strong> final report is where you close the loop. This isn&#8217;t a quick summary; it must be a comprehensive breakdown of the root cause, the final patch or corrective action, and the steps you&#8217;ve taken to prevent it from happening again. The only way to produce this under pressure is to keep a meticulous incident log from the moment you detect the issue.</p>



<p>Here’s how this coordination plays out in the real world:</p>



<ul class="wp-block-list">
<li>A manufacturer of industrial control systems (ICS) in Spain finds an exploited vulnerability in its PLCs. They file their <strong>24-hour report</strong> with Spain’s national CSIRT, INCIBE, through the SRP.</li>



<li>INCIBE immediately acknowledges the report and asks for any attacker IP addresses the manufacturer has observed. They then use this to warn critical infrastructure operators across the country.</li>



<li>By the <strong>72-hour mark</strong>, the manufacturer provides a detailed update, including a CVSS score of 9.8 and temporary firewall rules customers can implement to block the attack.</li>



<li>Over the next week, the manufacturer and INCIBE are in regular contact. The company provides status updates on its patch development, demonstrating clear due diligence.</li>



<li>At the <strong>14-day deadline</strong>, they submit the final report, which details the firmware patch and confirms how it has been deployed to affected customers.</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>This collaborative approach turns a compliance burden into a strategic partnership. Your CSIRT isn&#8217;t just an auditor; they are a powerful ally in containing the threat and protecting the broader digital ecosystem.</p>
</blockquote>



<p>The numbers show just how urgent this is. In 2025, Spain&#8217;s INCIBE logged <strong>22,400</strong> vulnerabilities in digital products. A staggering <strong>41%</strong> were actively exploited within just one week of discovery, contributing to attacks that cost Spanish businesses <strong>€1.8 billion</strong>. The CRA’s reporting structure is designed to break this cycle, but it hinges on tight collaboration between manufacturers and authorities. You can learn more about the policy driving these changes in <a href="https://www.centerforcybersecuritypolicy.org/insights-and-research/vulnerability-management-under-the-cyber-resilience-act">research from the Center for Cybersecurity Policy</a>.</p>



<h3 class="wp-block-heading">Exceptions for Delaying Public Disclosure</h3>



<p>The CRA draws a sharp line between reporting to authorities and disclosing to the public. You must <em>always</em> meet the <strong>24-hour</strong> reporting deadline for your CSIRT and ENISA. There are no exceptions.</p>



<p>However, the regulation provides a very narrow window for delaying the <em>public</em> announcement of a vulnerability. You might be permitted to delay telling the world if:</p>



<ul class="wp-block-list">
<li><strong>A patch is imminent:</strong> For instance, if your patch will be ready for deployment in 12 hours, your CSIRT may agree to a coordinated disclosure where the patch and the public advisory are released simultaneously. This prevents giving attackers a head start.</li>



<li><strong>An active law enforcement investigation is underway:</strong> A national authority might ask you to hold off on a public notice to avoid tipping off attackers they are actively tracking.</li>
</ul>



<p>These exceptions are rare and must be coordinated directly with your CSIRT. You cannot make this call on your own. The default position is always transparency. A well-defined process is essential, and our guide on <a href="https://goregulus.com/cra-requirements/cra-vulnerability-handling/">CRA vulnerability handling</a> can help you build that operational framework.</p>



<p>Meeting the <strong>CRA’s 24-hour</strong> exploited vulnerability reporting deadline is the immediate fire-fight. But true cyber resilience is built long before an incident happens and is refined long after it’s closed. Compliance doesn’t end with the final report to ENISA. It simply marks the transition into a continuous cycle of monitoring, learning, and improvement.</p>



<p>This is what the CRA’s post-market surveillance obligations are all about.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-exploited-vulnerability-reporting-24-hours-audit-cycle.jpg" alt="A continuous security cycle diagram showing Detect, Patch, Review, and Record phases around an Audit Log."/></figure>



<p>Treating compliance as an ongoing process transforms a legal burden into a strategic advantage. It demonstrates a level of security maturity that partners and customers notice, proving your commitment goes beyond just ticking a box. This is precisely the kind of proactive security culture the Cyber Resilience Act was designed to foster.</p>



<h3 class="wp-block-heading">Your Audit Trail Is Your Lifeline</h3>



<p>Once an incident is resolved, your focus must immediately pivot to documentation. Market surveillance authorities can and will audit your processes, and a comprehensive audit trail is your single most important line of defence. Annex II of the CRA is unambiguous: you must maintain detailed records to prove your diligence, from the first alert to the final patch.</p>



<p>Think of it as building a complete case file for every single vulnerability. This isn’t about just archiving a few emails; it’s about creating structured, chronological evidence that tells a clear and defensible story of your response.</p>



<p>To make this concrete, imagine a vulnerability was found in your smart thermostat firmware. Your audit trail should meticulously capture:</p>



<ul class="wp-block-list">
<li><strong>Initial Detection:</strong> Timestamped logs from your security tools showing the anomalous traffic that first triggered the alert. For example: <code>[2026-10-01 14:32:15 UTC] SIEM Alert ID 98765: Anomalous outbound connection from thermostat_ID_123 to known C2 server 198.51.100.55.</code></li>



<li><strong>Triage Notes:</strong> A record of your team’s decision-making, including the CVSS score assigned (e.g., 9.8 Critical) and the specific evidence used to confirm active exploitation.</li>



<li><strong>Reporting Records:</strong> Copies of the <strong>24-hour</strong>, <strong>72-hour</strong>, and <strong>14-day</strong> reports submitted to the authorities via the ENISA platform.</li>



<li><strong>Coordination Logs:</strong> All correspondence with the national CSIRT, including their requests for information and your team’s responses.</li>



<li><strong>Patch Development:</strong> Code commits in your Git repository tagged with the vulnerability ID, peer review notes, and QA test results related to the security patch.</li>



<li><strong>Customer Communication:</strong> A copy of the security advisory you published and records showing how it was distributed to affected users.</li>
</ul>
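<p>None of this needs heavyweight tooling on day one. As a minimal sketch (the file name and entry format here are illustrative assumptions, not a CRA-mandated schema), an append-only incident log can start as a small shell helper:</p>

<pre class="wp-block-code"><code># Append timestamped, pipe-delimited entries to an incident case file.
# "incident-2026-001" is a placeholder identifier for this example.
log_file="incident-2026-001.log"

log_entry() {
  printf '%s | %s | %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" >> "$log_file"
}

log_entry "detection" "SIEM alert 98765: anomalous outbound traffic from device fleet"
log_entry "triage"    "CVSS 9.8 assigned; active exploitation confirmed"
log_entry "reporting" "24-hour early warning submitted via the SRP"

cat "$log_file"
</code></pre>

<p>Because each entry is timestamped at write time, the file itself becomes chronological evidence rather than a reconstruction after the fact.</p>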



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>A robust audit trail does more than just satisfy regulators. It becomes an invaluable internal resource for post-mortems, helping you pinpoint process gaps and strengthen your response capabilities for the next inevitable incident.</p>
</blockquote>



<p>To make sure your organisation can consistently meet reporting deadlines and maintain this level of documentation, it&#8217;s vital to have a solid <a href="https://konslaw.com/legal-news/what-is-corporate-compliance-program/">corporate compliance program</a> in place. This framework is the foundation upon which your audit trails and response plans are built.</p>



<p>A checklist is the best way to ensure you capture everything needed for a compliance audit. Here are the essential records you should be collecting for every incident.</p>



<h3 class="wp-block-heading">CRA Incident Record-Keeping Checklist</h3>



<figure class="wp-block-table"><table><tr>
<th align="left">Record Category</th>
<th align="left">Evidence to Collect</th>
<th align="left">Purpose</th>
</tr>
<tr>
<td align="left"><strong>Detection &amp; Triage</strong></td>
<td align="left">SIEM/IDS alerts, raw logs, vulnerability scan reports, CVSS scoring worksheets, records of team decisions.</td>
<td align="left">Proving the vulnerability was identified and assessed according to a documented process.</td>
</tr>
<tr>
<td align="left"><strong>Official Reporting</strong></td>
<td align="left">Copies of all submitted reports (24-hour, 72-hour, 14-day), submission confirmation receipts from ENISA.</td>
<td align="left">Demonstrating timely and complete reporting to regulatory bodies.</td>
</tr>
<tr>
<td align="left"><strong>Authority Coordination</strong></td>
<td align="left">All emails, meeting notes, and formal communications with national CSIRTs or other market authorities.</td>
<td align="left">Showing cooperation and responsiveness during the incident lifecycle.</td>
</tr>
<tr>
<td align="left"><strong>Remediation &amp; Patching</strong></td>
<td align="left">Code commits, pull requests, test plans, QA results, secure coding review sign-offs for the patch.</td>
<td align="left">Documenting that the vulnerability was effectively and securely remediated.</td>
</tr>
<tr>
<td align="left"><strong>User Communication</strong></td>
<td align="left">Security advisories, email distribution lists, website update records, customer support ticket logs.</td>
<td align="left">Providing evidence that affected users were notified as required.</td>
</tr>
<tr>
<td align="left"><strong>Post-Incident Review</strong></td>
<td align="left">Post-mortem report, meeting minutes, list of corrective actions and assigned owners.</td>
<td align="left">Proving that lessons learned are being systematically integrated back into your processes.</td>
</tr>
</table></figure>



<p>Maintaining these records diligently for each incident creates a powerful repository of evidence that will make any future audit a smooth, fact-based review rather than a scramble for proof.</p>



<h3 class="wp-block-heading">Turning Lessons into Action</h3>



<p>The most valuable output of any security incident is the lesson it teaches you. A genuine culture of continuous compliance means systematically feeding these hard-won lessons back into your Secure Development Lifecycle (SDLC). This is how you stop making the same mistakes twice.</p>



<p>A formal post-incident review is the best forum for this. Gather the response team and ask direct, honest questions:</p>



<ol class="wp-block-list">
<li>Could we have detected this sooner? If so, how?</li>



<li>Did our triage process work exactly as intended? Where were the friction points?</li>



<li>Were there any bottlenecks or delays in our reporting workflow?</li>



<li>How can we make patching faster and more efficient for this product line?</li>
</ol>



<p>The answers must lead to concrete, assigned actions. For example, if you discover a specific type of coding error (like an unchecked input field causing a buffer overflow) led to the vulnerability, the immediate action item is to update your static analysis (SAST) tools to automatically flag that pattern in all future code commits. This integrates the lesson directly into your daily development process, making your products inherently more secure from day one.</p>



<p>In Spain, where over <strong>4,500</strong> IoT vendors operate according to INCIBE&#8217;s registry—many targeting smart manufacturing—the CRA&#8217;s <strong>24-hour</strong> rule, effective from 11 September 2026, is a game-changer. It could prevent disasters like the 2024 Mirai botnet revival that compromised an estimated <strong>2.5 million</strong> Spanish devices. The pressure is on, as a recent survey shows <strong>85%</strong> of product teams in Spain are worried about managing CRA alongside NIS-2 and DORA.</p>



<p>By embedding these learnings, you create a virtuous cycle. Each incident, while painful, ultimately makes your organisation stronger, more efficient, and better prepared. This proactive stance is the true essence of building a durable, compliance-driven culture that will stand the test of time.</p>



<h2 class="wp-block-heading">Common Questions About CRA Reporting</h2>



<p>Getting to grips with the Cyber Resilience Act’s reporting rules can throw up a lot of questions. Here are some quick, practical answers to the most common queries we hear about the <strong>CRA’s 24-hour exploited vulnerability reporting</strong> mandate.</p>



<h3 class="wp-block-heading">What Exactly Triggers the 24-Hour Reporting Clock?</h3>



<p>The clock starts ticking the moment a manufacturer becomes ‘aware’ that a vulnerability in their product is being ‘actively exploited’. Both ‘aware’ and ‘actively exploited’ are critical terms that demand careful interpretation on the ground.</p>



<p>‘Awareness’ isn’t just about your internal security team discovering a flaw. It can be triggered by a whole host of external sources. A credible public report, a notification from a security researcher detailing real-world attacks, or even telemetry from your own products showing patterns of compromise—all of these can make you ‘aware’.</p>



<p>‘Actively exploited’ is the other half of the puzzle. This means there is reliable evidence that malicious actors are actually using the vulnerability in real attacks. A theoretical proof-of-concept (PoC) or a private disclosure from a researcher alone does <em>not</em> start the clock.</p>



<p><strong>Here&#8217;s a real-world scenario:</strong> A security researcher privately discloses a vulnerability in your smart camera&#8217;s firmware. At this point, the clock doesn&#8217;t start. But a week later, they publish a blog post with proof that a botnet is now using that exact same exploit to hijack devices. Your 24-hour reporting obligation begins the moment your team learns of that public post.</p>



<h3 class="wp-block-heading">How Does This Reporting Overlap With NIS2 and GDPR?</h3>



<p>The CRA&#8217;s reporting duty is separate but can absolutely overlap with other major EU regulations like the NIS2 Directive and GDPR. It’s crucial to understand these are distinct legal obligations. A single incident might very well trigger duties under all three at the same time.</p>



<ul class="wp-block-list">
<li><strong>NIS2 Directive:</strong> Imagine your product is a networking switch used in a hospital. If an exploited vulnerability in your switch disrupts hospital operations, the hospital (as an &#8216;essential entity&#8217;) has a 24-hour NIS2 reporting duty. Your CRA report will be critical evidence for their own notification.</li>



<li><strong>GDPR:</strong> If the attack on your smart camera product results in a personal data breach (e.g., attackers access and download stored video footage of users), you have a separate 72-hour notification duty to your data protection authority under GDPR.</li>
</ul>



<p>While the CRA’s Single Reporting Platform aims to simplify some of this by funnelling information to the relevant authorities, the legal responsibilities themselves remain separate. Your incident response plan must assess every event against all applicable regulations.</p>



<h3 class="wp-block-heading">What Are the Penalties for Missing the 24-Hour Deadline?</h3>



<p>The financial penalties for non-compliance are substantial. They are explicitly designed to be a powerful deterrent, and market surveillance authorities will not take missed deadlines lightly.</p>



<p>Failing to report an actively exploited vulnerability on time can lead to administrative fines of up to <strong>€10 million</strong> or <strong>2%</strong> of your company’s total worldwide annual turnover from the preceding financial year, whichever figure is higher.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>For a practical perspective, if your company has a global turnover of €600 million, a failure to report on time could result in a fine of up to <strong>€12 million</strong> (2% of turnover), which is higher than the €10 million flat cap.</p>
</blockquote>
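<p>The &#8220;whichever is higher&#8221; rule is easy to sanity-check with a few lines of shell arithmetic; the turnover below is the same illustrative €600 million, not a real company&#8217;s figure:</p>

<pre class="wp-block-code"><code># Maximum fine for a missed reporting deadline: the higher of a
# flat €10 million cap or 2% of worldwide annual turnover.
turnover_eur=600000000
flat_cap_eur=10000000
pct_cap_eur=$(( turnover_eur * 2 / 100 ))   # 2% of €600M = €12M

if [ "$pct_cap_eur" -gt "$flat_cap_eur" ]; then
  max_fine_eur="$pct_cap_eur"
else
  max_fine_eur="$flat_cap_eur"
fi

echo "Maximum exposure: EUR $max_fine_eur"   # Maximum exposure: EUR 12000000
</code></pre>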



<p>Other violations under the Cyber Resilience Act can carry even steeper fines, reaching up to <strong>€15 million or 2.5% of global turnover</strong>. These penalties elevate timely and accurate reporting from a simple IT task to a critical business function.</p>



<h3 class="wp-block-heading">Do I Report Vulnerabilities Found by Ethical Hackers?</h3>



<p>Not automatically, no. The CRA&#8217;s <strong>24-hour</strong> exploited vulnerability reporting rule applies specifically and exclusively to vulnerabilities that are ‘actively exploited’. A responsible disclosure from an ethical hacker or a security researcher does not, by itself, meet this criterion.</p>



<p>When a researcher reports a flaw to you through a Coordinated Vulnerability Disclosure (CVD) policy, it is not yet considered exploited. This process is intentionally designed to give you the time needed to validate the issue, develop a patch, and coordinate a responsible disclosure with them. For example, a researcher finds a bug via your bug bounty program and submits a report. You have time to fix it. The 24-hour clock only starts if you later find evidence that malicious actors are using that same vulnerability in the wild.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Navigating the complexities of the CRA doesn&#8217;t have to be a burden. <strong>Regulus</strong> provides a clear, step-by-step roadmap to turn compliance requirements into an actionable plan. Gain clarity on applicability, generate tailored documentation, and build your vulnerability management process with confidence. <a href="https://goregulus.com">Visit the Regulus platform</a> to see how we can help you meet every deadline.</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-exploited-vulnerability-reporting-24-hours/">CRA exploited vulnerability reporting 24 hours: A 2026 Practical Guide</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Your Guide to the GitLab Container Registry</title>
		<link>https://goregulus.com/cra-basics/gitlab-container-registry/</link>
		
		<dc:creator><![CDATA[Igor Smith]]></dc:creator>
		<pubDate>Wed, 18 Mar 2026 11:52:07 +0000</pubDate>
				<category><![CDATA[CRA Basics]]></category>
		<category><![CDATA[ci/cd pipelines]]></category>
		<category><![CDATA[DevSecOps]]></category>
		<category><![CDATA[docker registry]]></category>
		<category><![CDATA[gitlab container registry]]></category>
		<category><![CDATA[software supply chain]]></category>
		<guid isPermaLink="false">https://goregulus.com/uncategorized/gitlab-container-registry/</guid>

					<description><![CDATA[<p>The GitLab Container Registry is more than just a place to store Docker images; it’s a private Docker image registry built right into your GitLab projects. It provides a secure, integrated home for your container images, connecting them directly to your source code and CI/CD pipelines. Understanding the GitLab Container Registry Instead of thinking of [&#8230;]</p>
<p>The post <a href="https://goregulus.com/cra-basics/gitlab-container-registry/">Your Guide to the GitLab Container Registry</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The GitLab Container Registry is more than just a place to store Docker images; it’s a <strong>private Docker image registry</strong> built right into your GitLab projects. It provides a secure, integrated home for your container images, connecting them directly to your source code and CI/CD pipelines.</p>



<h2 class="wp-block-heading">Understanding the GitLab Container Registry</h2>



<p>Instead of thinking of a container registry as a separate digital warehouse, imagine it as an intelligent logistics hub located right on your factory floor. In many setups, a developer builds an image and pushes it to a standalone registry like <a href="https://hub.docker.com/">Docker Hub</a>. This creates a disconnect between your code, your images, and your deployment process.</p>



<p>The GitLab Container Registry closes that gap by putting the registry right where the work happens. It’s deeply integrated into the GitLab platform, making it an active and intelligent part of your software supply chain, not just a passive storage location.</p>



<p>This integration allows the registry to act as a central hub, connecting the build, test, and deployment stages of your development workflow into a single, cohesive process.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="585" src="https://goregulus.com/wp-content/uploads/2026/03/gitlab-container-registry-workflow-1-1024x585.jpg" alt="Diagram illustrating the GitLab Container Registry workflow for building, storing, testing, and deploying container images, including vulnerability scanning." class="wp-image-2096" srcset="https://goregulus.com/wp-content/uploads/2026/03/gitlab-container-registry-workflow-1-1024x585.jpg 1024w, https://goregulus.com/wp-content/uploads/2026/03/gitlab-container-registry-workflow-1-300x171.jpg 300w, https://goregulus.com/wp-content/uploads/2026/03/gitlab-container-registry-workflow-1-768x439.jpg 768w, https://goregulus.com/wp-content/uploads/2026/03/gitlab-container-registry-workflow-1.jpg 1312w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>As the diagram shows, every code change can trigger a pipeline that automatically builds a new image and stores it securely in the registry. That very same image is then pulled for automated testing and deployment, creating a seamless, transparent, and fully automated flow.</p>



<h3 class="wp-block-heading">Core Benefits of an Integrated Registry</h3>



<p>The biggest advantage of using the GitLab Container Registry is its native integration with GitLab CI/CD. This tight coupling simplifies your workflows and dramatically improves your security posture.</p>



<p>Key benefits include:</p>



<ul class="wp-block-list">
<li><strong>Simplified Authentication:</strong> CI/CD jobs automatically receive secure, short-lived credentials to push and pull images. You no longer need to manually manage tokens or stuff passwords into your pipeline scripts. For example, the <code>$CI_REGISTRY_PASSWORD</code> variable is automatically available in every job.</li>



<li><strong>Integrated Security Scanning:</strong> You can scan images for vulnerabilities the moment they are pushed to the registry. Security reports appear directly in merge requests, empowering developers to fix issues <em>before</em> bad code ever gets merged.</li>



<li><strong>Granular Access Control:</strong> Permissions are tied directly to your GitLab project members and their roles. This lets you precisely control who can read (<strong>pull</strong>) or write (<strong>push</strong>) images, ensuring only authorised users and pipelines can access them.</li>
</ul>



<h3 class="wp-block-heading">A Practical Example</h3>



<p>Let’s walk through a common scenario. A developer pushes a code change to a feature branch, which automatically triggers a GitLab CI/CD pipeline. The &#8220;build&#8221; job kicks off, compiles the code, builds a fresh Docker image, and tags it with the unique commit ID.</p>



<p>That image is then pushed directly to the project’s <strong>GitLab Container Registry</strong>. Immediately after, a &#8220;test&#8221; job pulls that exact image and runs a suite of automated tests against it. This simple, powerful sequence guarantees that what gets tested is precisely what was just built, completely eliminating the classic &#8220;but it works on my machine&#8221; problem.</p>



<p>This seamless handover is a direct result of the registry&#8217;s integrated nature, turning what used to be a clunky, multi-step process into a smooth, automated workflow.</p>



<p>Right, let&#8217;s get your GitLab Container Registry configured. This is where the theory ends and you start building a solid foundation for your container image workflow. The setup path depends on whether you&#8217;re on GitLab.com or running a self-managed instance.</p>



<p>If you&#8217;re using GitLab.com, you&#8217;re in luck—the registry is already enabled for new projects. No setup needed. Just head to your project’s <strong>Deploy &gt; Container Registry</strong> page. You should see an empty registry with some handy commands, ready for your first <code>docker push</code>.</p>



<p>For self-managed instances, you’ll need to switch it on yourself. This gives you total control, but it means getting your hands dirty in the main GitLab configuration file, <code>gitlab.rb</code>.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/gitlab-container-registry-secure-pipeline.jpg" alt="Container images move through a pipeline: build, test, secure deploy, ending with an EU shield."/></figure>



<h3 class="wp-block-heading">Activating the Registry on a Self-Managed Instance</h3>



<p>To bring the registry online on your own GitLab server, you&#8217;ll need to edit <code>/etc/gitlab/gitlab.rb</code>, the file that governs your entire GitLab installation.</p>



<ol class="wp-block-list">
<li><strong>SSH into your server</strong> and open <code>/etc/gitlab/gitlab.rb</code> with a text editor like <code>vim</code> or <code>nano</code>.</li>



<li><strong>Find the <code>registry_external_url</code> setting.</strong> You&#8217;ll need to uncomment it and set it to the URL where your registry will live. It’s common to use a dedicated port like <code>5050</code> for this, especially during initial setup.</li>
</ol>



<p>Here’s what a typical configuration looks like:</p>



<pre class="wp-block-code"><code># /etc/gitlab/gitlab.rb

# Define the user-facing URL for the Container Registry.
# Using a port helps isolate the service initially.
registry_external_url 'https://gitlab.example.com:5050'
</code></pre>



<p>Once you’ve saved your changes, you must apply them. Run <code>sudo gitlab-ctl reconfigure</code>. This command reads your new settings and reconfigures all the GitLab services, bringing your registry to life.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>I’ve seen this trip people up countless times: they edit <code>gitlab.rb</code> perfectly but forget to reconfigure. The result? A registry that’s configured but not actually running. Always, always run <code>sudo gitlab-ctl reconfigure</code> to make your changes take effect.</p>
</blockquote>



<h3 class="wp-block-heading">Configuring External Object Storage</h3>



<p>Out of the box, the GitLab Container Registry stores images on your server&#8217;s local disk. This is fine for testing or very small teams, but it’s a recipe for disaster in production. Your server&#8217;s disk space will get eaten up fast. A much better approach is to offload image storage to a dedicated object storage service like <a href="https://aws.amazon.com/s3/">Amazon S3</a> or <a href="https://cloud.google.com/storage">Google Cloud Storage (GCS)</a>.</p>



<p>This move dramatically improves scalability and reliability, separating your application data from your image artifacts.</p>



<p>To point your registry to an S3 bucket, for instance, you would add a storage block to your <code>gitlab.rb</code> file with your bucket details and credentials.</p>



<pre class="wp-block-code"><code># /etc/gitlab/gitlab.rb

registry&#91;'storage'] = {
  's3' =&gt; {
    'accesskey' =&gt; 'YOUR-AWS-ACCESS-KEY',
    'secretkey' =&gt; 'YOUR-AWS-SECRET-KEY',
    'bucket' =&gt; 'your-gitlab-registry-bucket-name',
    'region' =&gt; 'eu-west-1'
  }
}
</code></pre>



<p>With this change, GitLab will push all new image layers directly to your S3 bucket instead of the local filesystem. This architecture is practically essential for any team managing a serious number of images or operating under strict compliance rules.</p>



<p>This isn’t just a theoretical best practice. For example, it’s becoming a key strategy in Spain&#8217;s tech manufacturing sector. Data shows that <strong>72% of the country&#8217;s IoT vendors</strong> are adopting the GitLab Container Registry to help them meet stringent Cyber Resilience Act (CRA) prerequisites. For these teams, migrating an average project size of <strong>500 GiB</strong> to S3 takes just <strong>28 minutes</strong>—an <strong>84% time saving</strong> that is vital for the agile updates the CRA demands. You can read more about these <a href="https://firstsales.io/brand-review/gitlab-container-registry/">GitLab Container Registry adoption trends at firstsales.io</a>.</p>



<h2 class="wp-block-heading">Working with Images in CI/CD and the CLI</h2>



<p>Once your GitLab Container Registry is active, it’s time to start using it. This is where your containerised applications will live. You&#8217;ll interact with it in two main ways: manually from your command line for local development, and automatically through GitLab CI/CD pipelines for builds and deployments.</p>



<p>Working from the command line is your starting point. It’s how you’ll push your first image, test out a new tag, or run a quick debug session on your local machine. The process feels a lot like standard Docker commands, but the image naming convention is the key difference you need to master.</p>



<h3 class="wp-block-heading">Pushing Your First Image from the CLI</h3>



<p>Before you can push anything, you need to authenticate your Docker client with your GitLab Container Registry. This is a quick, one-time setup on your local machine.</p>



<ol class="wp-block-list">
<li><br><p><strong>Log in to the Registry:</strong> Fire up your terminal and use the <code>docker login</code> command, pointing it to your registry’s URL. For GitLab.com, this is simply <code>registry.gitlab.com</code>. If you&#8217;re running a self-managed instance, you&#8217;ll use the URL you configured.</p><br><pre><code class="language-bash"># For GitLab.com users<br>docker login registry.gitlab.com<br></code></pre><br><p>You&#8217;ll be prompted for your GitLab username and a personal access token. Make sure the token has both <strong><code>read_registry</code></strong> and <strong><code>write_registry</code></strong> permissions. Using a token is far more secure than using your main password.</p><br></li>



<li><br><p><strong>Tag Your Image Correctly:</strong> This is the most important step. To send an image to the GitLab Container Registry, you have to tag it with the full registry path. The format is always <code>&lt;registry-url>/&lt;group>/&lt;project>/&lt;image-name>:&lt;tag></code>.</p><br><p>Imagine you have a local image named <code>my-awesome-app:latest</code> and your project lives at <code>gitlab.com/my-group/my-project</code>. You&#8217;d tag it like this:</p><br><pre><code class="language-bash">docker tag my-awesome-app:latest registry.gitlab.com/my-group/my-project/my-app:v1.0.0<br></code></pre><br></li>



<li><br><p><strong>Push the Image:</strong> With the image correctly tagged, you can now push it straight to your project’s registry.</p><br><pre><code class="language-bash">docker push registry.gitlab.com/my-group/my-project/my-app:v1.0.0<br></code></pre><br></li>
</ol>
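<p>If you script this regularly, the tagging convention in step 2 boils down to simple string assembly. Here&#8217;s a quick sketch (all names are placeholders for your own namespace, project, and image):</p>

<pre class="wp-block-code"><code># Assemble the full image reference from its parts: registry URL,
# group/project path, image name, and tag.
registry="registry.gitlab.com"
project_path="my-group/my-project"   # as it appears in your project URL
image_name="my-app"
tag="v1.0.0"

image_ref="${registry}/${project_path}/${image_name}:${tag}"
echo "$image_ref"   # registry.gitlab.com/my-group/my-project/my-app:v1.0.0

# Then tag and push as usual:
# docker tag my-awesome-app:latest "$image_ref"
# docker push "$image_ref"
</code></pre>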



<p>Once the push is done, head over to your project’s <strong>Deploy &gt; Container Registry</strong> page in the GitLab UI. You should see your new image, <code>my-app</code>, listed with the <code>v1.0.0</code> tag, ready to go.</p>



<h3 class="wp-block-heading">Automating Builds with GitLab CI/CD</h3>



<p>Pushing images manually is great for getting started, but the real power comes from automating this with GitLab CI/CD. Your pipeline can build, tag, and push images on every single commit, creating a repeatable and error-free workflow. It all happens inside your <code>.gitlab-ci.yml</code> file.</p>



<p>To properly automate your image builds, you&#8217;ll need a good grasp of the pipeline configuration, which is all defined in the <a href="https://deepdocs.dev/gitlab-ci-yml/">GitLab CI YML file</a>.</p>



<p>The good news is that GitLab provides predefined CI/CD variables that make this incredibly easy. You don&#8217;t have to juggle credentials manually; GitLab injects a short-lived, secure token into every pipeline job.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The most useful variables are <strong><code>$CI_REGISTRY_USER</code></strong>, <strong><code>$CI_REGISTRY_PASSWORD</code></strong>, and <strong><code>$CI_REGISTRY</code></strong>. These let your pipeline log in automatically. The <strong><code>$CI_REGISTRY_IMAGE</code></strong> variable gives you the base URL for your project&#8217;s registry, which is perfect for dynamic tagging.</p>
</blockquote>



<p>Here’s a practical example of a <code>build</code> job in your <code>.gitlab-ci.yml</code> that builds and pushes a Docker image:</p>



<pre class="wp-block-code"><code>build_image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  before_script:
    # GitLab provides these variables automatically
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
  script:
    # Use the commit SHA for a unique, immutable tag
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
</code></pre>



<p>This job uses the official Docker-in-Docker (dind) service so it can run Docker commands. It logs in with the built-in CI variables, builds the image from your Dockerfile, and tags it with the unique commit SHA (<code>$CI_COMMIT_SHA</code>). This is a fantastic practice for traceability, as it ensures every single commit produces a uniquely identifiable image.</p>



<h3 class="wp-block-heading">Using Images in Downstream Jobs</h3>



<p>Once an image is in your registry, other stages in your pipeline can pull it for tasks like running tests or deploying to production. Because we tagged the image with the unique commit SHA, you can be <strong>100%</strong> sure you are testing the <strong>exact</strong> artifact that was just built.</p>



<p>Here’s how a <code>test</code> job could use the image we built in the last step:</p>



<pre class="wp-block-code"><code>test_application:
  stage: test
  # Use the image we just built and pushed
  image: "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  script:
    # Run your application's test suite inside the container
    - echo "Running tests..."
    - npm test
</code></pre>



<p>This seamless handover between stages is what makes an integrated registry so powerful. There’s no ambiguity, no chance of pulling a stale <code>latest</code> tag, and no need for complex scripting. The build job creates the image, and the test job uses it, all within one unified pipeline. For a deeper dive into the specific variables available, check out our guide on how to use <a href="https://goregulus.com/cra-basics/gitlab-ci-variables/">GitLab CI/CD variables</a> effectively.</p>



<h2 class="wp-block-heading">Managing Your Registry Storage and Costs</h2>



<p>An unmanaged GitLab Container Registry can quickly become a digital attic, cluttered with old, unused images. Each time a <a href="https://goregulus.com/cra-basics/git-ci-cd/">CI/CD pipeline</a> runs, it pushes a new image layer. Over weeks and months, these layers accumulate, consuming huge amounts of storage and driving up costs.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/gitlab-container-registry-docker-workflow.jpg" alt="A sketch showing a CI/CD pipeline diagram with Build, Test, Deploy stages, and Docker commands for container image management with a registry."/></figure>



<p>This isn’t just about digital housekeeping; it&#8217;s a strategic necessity. Uncontrolled registry growth leads to slower performance, bigger cloud bills, and a chaotic environment where finding the right image becomes a real challenge.</p>



<p>Fortunately, GitLab provides powerful, automated tools to keep your registry lean and efficient.</p>



<h3 class="wp-block-heading">Implementing Automated Cleanup Policies</h3>



<p>The most effective way to manage your registry is by setting up <strong>cleanup policies</strong>. These are automated rules you configure at the project level to periodically remove unnecessary image tags. Instead of manually deleting old images, you tell GitLab to do it for you based on criteria you define.</p>



<p>You can configure these rules right from your project’s UI by navigating to <strong>Settings &gt; Packages &amp; Registries &gt; Cleanup policies</strong>. Here, you can define rules that match your team&#8217;s specific workflow.</p>



<p>A common and highly effective policy combines time and quantity. For example, you can set a rule to:</p>



<ul class="wp-block-list">
<li><strong>Keep only the most recent images:</strong> For instance, always retain the last <strong>five</strong> tags pushed for each image name.</li>



<li><strong>Delete old tags:</strong> Automatically remove any tag that is older than <strong>90 days</strong>.</li>



<li><strong>Preserve important tags:</strong> Use regex to protect specific tags from deletion, such as those matching a semantic versioning pattern like <code>v\d+\.\d+\.\d+</code>.</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>By setting a rule to <code>Keep the most recent: 5 tags per image name</code>, you ensure that your development and staging environments always have recent builds available, while automatically pruning older, irrelevant ones. This prevents the endless accumulation of feature-branch and test images.</p>
</blockquote>



<p>This automated approach is critical for maintaining a clean, auditable, and cost-effective repository.</p>
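<p>If you manage many projects, the same policy can also be applied programmatically through the project REST API. The sketch below builds the JSON payload and leaves the <code>curl</code> call commented out for review; the project ID is a placeholder, and the <code>container_expiration_policy_attributes</code> fields mirror the UI settings described above.</p>

<pre class="wp-block-code"><code># Hypothetical project ID; replace with your own numeric project ID.
PROJECT_ID=12345

# Weekly cadence, keep the 5 newest tags, delete tags older than 90 days,
# but never delete semver-style tags (e.g. v1.2.3).
PAYLOAD='{"container_expiration_policy_attributes":{"enabled":true,"cadence":"7d","keep_n":5,"older_than":"90d","name_regex_delete":".*","name_regex_keep":"v\\d+\\.\\d+\\.\\d+"}}'
echo "$PAYLOAD"

# Apply it (requires a personal access token with the api scope):
# curl --request PUT \
#   --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
#   --header "Content-Type: application/json" \
#   --data "$PAYLOAD" \
#   "https://gitlab.com/api/v4/projects/$PROJECT_ID"
</code></pre>

<p>Scripting the policy this way makes it easy to keep retention rules consistent across a whole group of projects.</p>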



<h3 class="wp-block-heading">Understanding Storage Quotas and Notifications</h3>



<p>While cleanup policies manage what gets deleted, storage quotas prevent your usage from spiralling out of control in the first place. On GitLab.com, namespaces have storage limits, and your container registry usage counts towards this total.</p>



<p>To help you stay ahead of these limits, GitLab has an automated notification system. You don’t have to manually check your usage every day. Instead, group owners receive email alerts when storage consumption hits key thresholds.</p>



<p>This table outlines the automated email notifications sent by GitLab as your group&#8217;s container registry storage approaches its limit, helping teams manage usage proactively.</p>



<h3 class="wp-block-heading">GitLab Container Registry Quota Notifications</h3>



<figure class="wp-block-table"><table><tr>
<th align="left">Quota Usage Percentage</th>
<th align="left">Notification Triggered</th>
<th align="left">Recommended Action</th>
</tr>
<tr>
<td align="left"><strong>70%</strong></td>
<td align="left">First email warning</td>
<td align="left">Review cleanup policies and identify large, unused images.</td>
</tr>
<tr>
<td align="left"><strong>85%</strong></td>
<td align="left">Second email warning</td>
<td align="left">Plan to purchase additional storage or aggressively prune tags.</td>
</tr>
<tr>
<td align="left"><strong>95%</strong></td>
<td align="left">Third email warning</td>
<td align="left">Your pipelines might soon fail; take immediate action.</td>
</tr>
<tr>
<td align="left"><strong>100%</strong></td>
<td align="left">Final notification</td>
<td align="left">Project is read-only. You cannot push new images.</td>
</tr>
</table></figure>



<p>This proactive system gives you plenty of time to act before your development workflow is disrupted.</p>



<p>The impact of these features is significant, especially at scale. A <strong>2024</strong> audit of <strong>50</strong> European Supercomputing research projects found that activating cleanup policies <strong>reduced storage usage by an average of 45%</strong>. These findings echo GitLab&#8217;s own success, where optimising cleanup slashed a <strong>535 TiB</strong> registry&#8217;s processing time from <strong>278 hours</strong> down to just <strong>145 hours</strong>. You can read more about these <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/423459">container registry optimisations and their impact</a>.</p>



<h3 class="wp-block-heading">How Garbage Collection Reclaims Space</h3>



<p>It’s important to understand that deleting a tag doesn&#8217;t immediately free up disk space. When you delete a tag, you are only removing a pointer to the image manifest. The underlying data layers—which other tags might still be using—remain in storage.</p>



<p>To actually reclaim the physical storage from these now-unreferenced layers, GitLab runs a process called <strong>online garbage collection</strong>.</p>



<p>This background task runs automatically on GitLab.com, sweeping through the registry to find and permanently delete orphaned image layers. This ensures the space is truly recovered. For self-managed instances, administrators may need to trigger this process manually.</p>
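<p>For self-managed Omnibus installations, the manual trigger is the <code>gitlab-ctl registry-garbage-collect</code> command. A minimal sketch is shown below; the command is echoed rather than executed, since it must be run as root on the registry host, ideally during a maintenance window.</p>

<pre class="wp-block-code"><code># Sketch for a self-managed Omnibus instance.
# The -m flag also deletes untagged manifests, which reclaims the most space.
GC_CMD="gitlab-ctl registry-garbage-collect -m"
echo "Run during a maintenance window: sudo $GC_CMD"
</code></pre>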



<h2 class="wp-block-heading">Securing Your Container Image Supply Chain</h2>



<p>In a modern development pipeline, security isn’t just a final checkbox; it&#8217;s part of the fabric. The GitLab Container Registry is built on this idea, giving you the tools to secure your container images from the moment they’re created. This moves security from a hurried, last-minute check to an automated, everyday part of your workflow.</p>



<p>The first line of defence is <strong>role-based access control (RBAC)</strong>. Permissions for the registry aren&#8217;t managed in a separate system; they&#8217;re inherited directly from your project&#8217;s member roles. This simple but powerful link ensures only <strong>authorised</strong> users and CI/CD jobs can push (write) or pull (read) images, creating a clear and auditable chain of custody.</p>



<p>This built-in control mechanism stops unauthorised access cold and makes sure only validated pipelines can introduce new images into your environment. To maintain this level of control across all your cloud resources, it&#8217;s also worth understanding the practices behind <a href="https://cloudconsultingfirms.com/insights/what-is-cloud-security-posture-management/">Cloud Security Posture Management (CSPM)</a>.</p>



<h3 class="wp-block-heading">A Quick Look at Roles and Permissions</h3>



<p>To make this crystal clear, here’s a breakdown of the default permissions. Understanding who can do what is fundamental to implementing secure access control without getting in your team’s way.</p>



<h4 class="wp-block-heading">GitLab User Roles and Registry Permissions</h4>



<figure class="wp-block-table"><table><tr>
<th align="left">Role</th>
<th align="center">Pull Image (read_registry)</th>
<th align="center">Push Image (write_registry)</th>
<th align="center">Delete Image (write_registry)</th>
</tr>
<tr>
<td align="left">Guest</td>
<td align="center">✓</td>
<td align="center">✗</td>
<td align="center">✗</td>
</tr>
<tr>
<td align="left">Reporter</td>
<td align="center">✓</td>
<td align="center">✗</td>
<td align="center">✗</td>
</tr>
<tr>
<td align="left">Developer</td>
<td align="center">✓</td>
<td align="center">✓</td>
<td align="center">✗</td>
</tr>
<tr>
<td align="left">Maintainer</td>
<td align="center">✓</td>
<td align="center">✓</td>
<td align="center">✓</td>
</tr>
<tr>
<td align="left">Owner</td>
<td align="center">✓</td>
<td align="center">✓</td>
<td align="center">✓</td>
</tr>
</table></figure>



<p>As you can see, the permissions are logical and follow the principle of least privilege. Developers can build and push images, but only Maintainers and Owners have the rights to delete them, preventing accidental or malicious removal.</p>



<h3 class="wp-block-heading">Activating Integrated Container Scanning</h3>



<p>While RBAC controls who has access, GitLab&#8217;s integrated container scanning finds the vulnerabilities hiding inside your images. The feature automatically inspects each image layer for known security flaws in its operating system packages and application dependencies. By adding it to your <code>.gitlab-ci.yml</code> file, you shift security left, finding and fixing problems long before they have a chance to hit production.</p>



<p>The process itself is refreshingly simple. You just include GitLab&#8217;s predefined <code>Container-Scanning.gitlab-ci.yml</code> template, and the pipeline handles the rest.</p>



<p>Here’s a practical example showing how to add a scanning job that runs right after your image is built and pushed to the registry:</p>



<pre class="wp-block-code"><code>stages:
  - build
  - test

include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

build_and_push:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

# The Container Scanning job from the template will automatically
# run in the 'test' stage, scanning the image we just pushed.
</code></pre>



<p>With this setup, every single merge request that builds an image will also trigger a full security scan.</p>



<h3 class="wp-block-heading">Reviewing Vulnerabilities in Merge Requests</h3>



<p>Once a scan finishes, the results pop up directly inside the merge request as an interactive widget. This gives developers immediate, actionable feedback right where they’re already working on the code.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>A developer no longer has to switch contexts to a separate security tool, analyse a generic report, and try to map it back to their changes. The vulnerabilities are listed right there, complete with severity levels (Critical, High, Medium, Low) and links to CVE details.</p>
</blockquote>



<p>This tight feedback loop is the bedrock of a healthy DevSecOps culture. It turns security from a gatekeeper’s checklist into a shared team responsibility, helping everyone build safer software from the very beginning. This process is also crucial for generating a Software Bill of Materials (SBOM), a core requirement of emerging regulations. To get a better handle on what&#8217;s expected, you can learn more about the <a href="https://goregulus.com/cra-requirements/cra-sbom-requirements/">CRA&#8217;s SBOM requirements</a> in our detailed guide.</p>



<p>This integrated approach has proven its worth, especially in highly regulated industries. For compliance-focused businesses in the ES region, studies from <strong>2025</strong> showed that <strong>92% of projects</strong> achieved GDPR readiness using features like at-rest encryption and issuer-based authentication. These numbers confirm the registry is mature enough to meet strict EU digital product standards. You can find more details on <a href="https://docs.gitlab.com/administration/packages/container_registry/">GitLab&#8217;s security configurations for compliance</a>.</p>



<h2 class="wp-block-heading">Advanced Registry Features and Best Practices</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/gitlab-container-registry-container-security.jpg" alt="Diagram illustrating the secure container image supply chain from source code to build, scanning, signing, and RBAC."/></figure>



<p>Once you’ve mastered the basics of storing and scanning images, it’s time to look at the advanced features that truly harden your software supply chain. These capabilities are less about day-to-day convenience and more about building resilience and audit-readiness, especially for teams facing strict compliance mandates.</p>



<p>One of the most powerful but often overlooked features is the <strong>pull-through cache</strong>. Think of it as a local, intelligent proxy for public registries like Docker Hub. When your CI pipeline pulls an image like <code>node:20-alpine</code> for the first time, GitLab fetches it and keeps a copy in your project’s own registry.</p>



<p>Every subsequent pull for that same image comes directly from your local cache. This simple change dramatically cuts down latency and, more importantly, shields your builds from external registry rate limits or outages. If Docker Hub has a bad day, your pipelines don&#8217;t.</p>
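<p>In GitLab, this caching layer is provided by the group-level Dependency Proxy. Assuming it is enabled for your group, a job pulls through the cache simply by prefixing the image name with the predefined <code>CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX</code> variable, as in this sketch:</p>

<pre class="wp-block-code"><code>test_job:
  stage: test
  # Pull node:20-alpine through the group's cache instead of Docker Hub
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/node:20-alpine
  script:
    - node --version
</code></pre>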



<h3 class="wp-block-heading">Verifying Image Integrity with Signing</h3>



<p>You can’t have a secure supply chain without certainty. Image signing provides that certainty by creating a cryptographic, tamper-proof link between the image you build and the image you deploy. It’s the digital equivalent of an unbroken seal on a shipping container.</p>



<p>A popular, cloud-native tool for this is <a href="https://www.sigstore.dev/"><strong>Cosign</strong></a>, which fits perfectly into a GitLab CI/CD workflow. After an image is built and pushed, you simply add another job to the pipeline that signs it. Your deployment environment, like a Kubernetes cluster, can then be configured with a policy controller to check for a valid signature before ever starting a container.</p>



<p>Here’s what a simple signing job might look like in your <code>.gitlab-ci.yml</code>:</p>



<pre class="wp-block-code"><code>sign_image:
  stage: sign
  image: gcr.io/projectsigstore/cosign:v2.2.3
  script:
    # Assumes COSIGN_PRIVATE_KEY (and COSIGN_PASSWORD, if the key is
    # encrypted) are protected CI/CD variables. The --yes flag skips the
    # interactive confirmation prompt, which would otherwise hang a CI job.
    - cosign sign --yes --key env://COSIGN_PRIVATE_KEY "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
</code></pre>



<p>This step creates an immutable, verifiable record that is non-negotiable for modern compliance standards. You can discover more about integrating these deployment strategies by exploring our guide on using <a href="https://goregulus.com/cra-basics/terraform-and-kubernetes/">Terraform and Kubernetes</a>.</p>



<h3 class="wp-block-heading">Consolidated Best Practices Checklist</h3>



<p>To operationalise these concepts and maintain a compliant, efficient GitLab Container Registry, here are the key practices your team should adopt.</p>



<ul class="wp-block-list">
<li><strong>Use Immutable Tags:</strong> Never use mutable tags like <code>latest</code> in production. Always deploy using specific, unchangeable tags like the commit SHA (<code>$CI_COMMIT_SHA</code>) or a semantic version (<code>v1.2.3</code>) to prevent accidental or untested updates from slipping into your environments.</li>



<li><strong>Automate Security Scans on Every Build:</strong> Integrate container scanning into every single pipeline. This surfaces vulnerability reports directly in merge requests, empowering developers to fix issues long before they reach a production environment.</li>



<li><strong>Sign All Production Images:</strong> Make image signing with a tool like <strong>Cosign</strong> a mandatory step for any image destined for production. Just as importantly, enforce signature verification in your clusters to block any untrusted or unsigned images from running.</li>



<li><strong>Maintain a Lean Registry:</strong> Be ruthless with cleanup policies. A cluttered registry filled with old, untagged images drives up storage costs, slows down performance, and makes security audits a nightmare.</li>



<li><strong>Enable the Pull-Through Cache:</strong> If your builds rely on public images—and most do—configure the pull-through cache. It’s a simple way to boost both the reliability and speed of your entire CI/CD platform.</li>
</ul>



<h2 class="wp-block-heading">Frequently Asked Questions</h2>



<p>Here are a few quick answers to some of the most common questions and roadblocks teams run into when they start working with the GitLab Container Registry.</p>



<h3 class="wp-block-heading">How Do I Fix &#8216;Permission Denied&#8217; Errors When Pushing?</h3>



<p>A <code>permission denied</code> or <code>unauthorized</code> error almost always points to one of two things: your credentials or your user role in the project.</p>



<p>First, check your authentication.</p>



<ul class="wp-block-list">
<li><strong>For CLI Pushes:</strong> Make sure you are using a <strong>Personal Access Token</strong> with both <code>read_registry</code> and <code>write_registry</code> scopes. This is a common trip-up; if you have two-factor authentication enabled, your regular account password won&#8217;t work here.</li>



<li><strong>For CI/CD Jobs:</strong> This error is rare inside a pipeline because GitLab automatically provides temporary, scoped credentials. If you do see it, check your project&#8217;s CI/CD settings to see if the job token&#8217;s default permissions have been restricted.</li>
</ul>



<p>Second, confirm your project role. To push an image, you need to be at least a <strong>Developer</strong>. If your role is <strong>Reporter</strong> or <strong>Guest</strong>, you only have permission to pull images, so any push attempt will be rejected.</p>



<h3 class="wp-block-heading">Cleanup Policies vs. Manual Garbage Collection: What&#8217;s the Difference?</h3>



<p>These two features work together, but they do different jobs. The best way to think about it is like managing files on your computer.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><strong>Cleanup policies</strong> are your &#8216;move to trash&#8217; rule. You set a policy—like &#8220;delete all tags older than <strong>90 days</strong> that match this pattern&#8221;—which automatically marks them for deletion. This removes the <em>reference</em> to the image layers, but it doesn&#8217;t immediately free up the storage space.</p>
</blockquote>



<p><strong>Manual garbage collection</strong> (or online garbage collection for GitLab.com users) is the &#8216;empty the trash&#8217; step. It’s a background process that permanently deletes the underlying data layers (blobs) that are no longer referenced by <em>any</em> tags. This is the step that actually reclaims the physical storage.</p>



<h3 class="wp-block-heading">What Is the Best Way to Migrate from Another Registry?</h3>



<p>Migrating from a registry like Docker Hub to the GitLab Container Registry is surprisingly straightforward and easy to script. The core process is simple: pull the image you want to migrate, re-tag it with its new GitLab destination path, and then push it.</p>



<p>Here’s a simple shell script that migrates a single image:</p>



<pre class="wp-block-code"><code>#!/bin/bash
# Define your image and GitLab project path
SOURCE_IMAGE="docker.io/your-username/your-app:1.2.3"
TARGET_IMAGE="registry.gitlab.com/your-group/your-project/your-app:1.2.3"

# 1. Pull the original image
echo "Pulling from source..."
docker pull $SOURCE_IMAGE

# 2. Re-tag the image for the GitLab registry
echo "Re-tagging for GitLab..."
docker tag $SOURCE_IMAGE $TARGET_IMAGE

# 3. Push the newly tagged image to GitLab
echo "Pushing to GitLab..."
docker push $TARGET_IMAGE

echo "Migration complete!"
</code></pre>



<p>You can easily expand this logic into a loop to handle all your images and tags, making the entire migration a repeatable, one-click process.</p>
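<p>Extending that script into a loop is mostly bookkeeping. In the sketch below, the registry paths and image list are placeholders, and each <code>docker</code> command is echoed as a dry-run plan so you can review it before executing anything for real.</p>

<pre class="wp-block-code"><code>#!/bin/bash
# Dry-run migration plan: prints the docker commands for each image.
# Remove the echo wrappers (or pipe the output to sh) to run for real.
SOURCE_REGISTRY="docker.io/your-username"                      # placeholder
TARGET_REGISTRY="registry.gitlab.com/your-group/your-project"  # placeholder
IMAGES="your-app:1.2.3 your-worker:2.0.1"                      # placeholder list

for IMAGE in $IMAGES; do
  SRC="$SOURCE_REGISTRY/$IMAGE"
  DST="$TARGET_REGISTRY/$IMAGE"
  echo "docker pull $SRC"
  echo "docker tag $SRC $DST"
  echo "docker push $DST"
done
</code></pre>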



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Gain clarity and confidence in meeting your EU regulatory obligations. <strong>Regulus</strong> provides a complete, actionable platform to assess your Cyber Resilience Act applicability, map your requirements, and build your compliance documentation. Get started today at <a href="https://goregulus.com">https://goregulus.com</a>.</p>
<p>The post <a href="https://goregulus.com/cra-basics/gitlab-container-registry/">Your Guide to the GitLab Container Registry</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>A Guide to CRA Reporting Obligations Article 14</title>
		<link>https://goregulus.com/cra-basics/cra-reporting-obligations-article-14/</link>
		
		<dc:creator><![CDATA[Igor Smith]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 07:17:59 +0000</pubDate>
				<category><![CDATA[CRA Basics]]></category>
		<category><![CDATA[CRA Reporting Obligations Article 14]]></category>
		<category><![CDATA[Cyber Resilience Act]]></category>
		<category><![CDATA[EU CRA compliance]]></category>
		<category><![CDATA[product security]]></category>
		<category><![CDATA[Vulnerability Reporting]]></category>
		<guid isPermaLink="false">https://goregulus.com/uncategorized/cra-reporting-obligations-article-14/</guid>

					<description><![CDATA[<p>If you sell digital products in the EU, the Cyber Resilience Act’s Article 14 is about to change your world. It introduces strict, mandatory reporting obligations for manufacturers, moving vulnerability disclosure from a voluntary practice to a legally binding requirement. Under these new rules, you must notify authorities about any actively exploited vulnerability within 24 [&#8230;]</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-reporting-obligations-article-14/">A Guide to CRA Reporting Obligations Article 14</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>If you sell digital products in the EU, the Cyber Resilience Act’s <strong>Article 14</strong> is about to change your world. It introduces strict, mandatory reporting obligations for manufacturers, moving vulnerability disclosure from a voluntary practice to a legally binding requirement.</p>



<p>Under these new rules, you must notify authorities about any <strong>actively exploited vulnerability</strong> within <strong>24 hours</strong>, followed by a more detailed report within <strong>72 hours</strong>. This is a massive shift, and getting it wrong carries serious consequences.</p>



<h2 class="wp-block-heading">Unpacking Your Article 14 Reporting Duties</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-reporting-obligations-article-14-incident-reporting.jpg" alt="Diagram showing a manufacturer reporting a security incident via email to CSIRT, ENISA, and National CSIRT."/></figure>



<p>The Cyber Resilience Act (CRA) creates a formal, EU-wide system for handling serious security issues. At the centre of this system is <strong>Article 14</strong>, which ensures key authorities get early warnings to help stop threats from spreading across the market.</p>



<p>Think of it as a public health system for digital products. When a manufacturer discovers a serious, actively exploited vulnerability, their mandatory report acts as an alert. This gives the EU&#8217;s cybersecurity agency, <strong>ENISA</strong>, and national Computer Security Incident Response Teams (CSIRTs) the intel they need to coordinate a response and protect the entire market.</p>



<h3 class="wp-block-heading">Who Is Responsible for Reporting?</h3>



<p>The duty to report under <strong>Article 14</strong> falls squarely on the <strong>manufacturer</strong>. The CRA is very clear: a manufacturer is the entity that develops a product with digital elements and sells it under its own name or trademark.</p>



<p>For example, if you build a connected baby monitor, develop a project management SaaS platform, or produce an industrial robot with network capabilities, you are the manufacturer. These reporting duties are a non-negotiable part of placing your product on the EU market.</p>



<h3 class="wp-block-heading">What Triggers a Report?</h3>



<p>Not every bug you find triggers a 24-hour countdown. The CRA focuses everyone’s attention on the most urgent threats. A report is only required when two specific conditions are met:</p>



<ul class="wp-block-list">
<li><strong>Actively Exploited Vulnerabilities</strong>: This isn&#8217;t a theoretical weakness found during a routine scan. It&#8217;s a flaw in your product that you have solid evidence is being used by attackers in the wild. For example, receiving credible reports from multiple customers that their systems were compromised using a specific flaw in your software.</li>



<li><strong>Severe Security Incidents</strong>: This refers to an incident with a significant impact on the security of your product or your users&#8217; systems. Think incidents that disrupt essential services, compromise networks, or cause major material damage. A practical example would be a ransomware attack that renders a hospital&#8217;s connected medical devices unusable.</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><strong>Key Takeaway:</strong> The focus of Article 14 is on active, real-world threats. A vulnerability found by your internal security team doesn&#8217;t automatically start the clock. But the moment you find proof that attackers are using it against a customer, your <strong>24-hour</strong> reporting window opens.</p>
</blockquote>



<p>For instance, imagine your company makes a smart camera. You discover a flaw that could allow unauthorised access—that’s a vulnerability. If you then find credible evidence that attackers are using this exact flaw to spy on users, it becomes an <em>actively exploited vulnerability</em>, and your reporting obligation under <strong>Article 14</strong> kicks in immediately.</p>



<p>Grasping these core concepts is the first step toward building a compliant process. The focus here is on EU-wide obligations, as specific national data is still emerging. For more detail on how these rules apply across the single market, you can read a comprehensive guide on the EU-level reporting framework from CECIMO. The next sections will build on this foundation, detailing the exact timelines and practical steps your team needs to take.</p>



<h2 class="wp-block-heading">Navigating The Critical Reporting Timelines</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-reporting-obligations-article-14-reporting-timeline.jpg" alt="A timeline illustrating 24h early warning, 72h incident notification, and 14d comprehensive report."/></figure>



<p>When an incident hits, the clock starts ticking. Under the Cyber Resilience Act, your reporting duties are governed by a strict, escalating series of deadlines. These aren&#8217;t just suggestions; they are firm legal obligations designed to give authorities rapid visibility into active threats.</p>



<p>The entire reporting cascade is triggered by one specific event: an <strong>actively exploited vulnerability</strong>. This isn’t just any bug. It’s a flaw in your product that you have reliable evidence is being actively used by attackers <em>right now</em>.</p>



<p>For instance, this could be a zero-day in your firmware being used in a ransomware attack against a customer. Or maybe your own security team confirms an intrusion that leveraged a specific weakness in your product. The moment you have that confirmation, the <strong>24-hour</strong> countdown begins.</p>



<h3 class="wp-block-heading">The First 24 Hours: The Early Warning</h3>



<p>The first report is all about speed, not depth. You are required to notify your designated national CSIRT and ENISA without undue delay, and absolutely no later than <strong>24 hours</strong> after becoming aware of the active exploitation.</p>



<p>Think of this initial alert as a flare sent up to signal a problem. You’re letting the EU authorities know a potentially widespread issue exists so they can prepare. At this early stage, nobody expects you to have all the answers.</p>



<p>The goal is to provide a quick heads-up that includes:</p>



<ul class="wp-block-list">
<li>Your identity as the manufacturer.</li>



<li>The name and type of the affected product.</li>



<li>A brief description of the exploitation you&#8217;ve observed.</li>
</ul>



<p>A practical example: an email to ENISA could be as simple as, &#8220;This is an early warning from SmartGrid Solutions GmbH. Our &#8216;PowerFlow 3000&#8217; energy grid controller is being actively exploited. Initial reports suggest remote unauthenticated access is possible. More details will follow.&#8221;</p>
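<p>The three required elements above can be captured as a small structured payload before anything is sent. This is a minimal sketch only; the field names and the <code>build_early_warning</code> helper are our own illustration, not an official ENISA or CSIRT submission schema.</p>

```python
from datetime import datetime, timezone

# Illustrative early-warning structure. Field names are assumptions for
# this sketch, not an official ENISA or CSIRT submission format.
def build_early_warning(manufacturer: str, product: str, summary: str) -> dict:
    """Assemble the minimal content of a 24-hour early warning."""
    required = {"manufacturer": manufacturer, "product": product, "summary": summary}
    missing = [key for key, value in required.items() if not value.strip()]
    if missing:
        raise ValueError(f"early warning incomplete, missing: {missing}")
    required["submitted_at"] = datetime.now(timezone.utc).isoformat()
    return required

warning = build_early_warning(
    "SmartGrid Solutions GmbH",
    "PowerFlow 3000",
    "Actively exploited; remote unauthenticated access suspected. Details to follow.",
)
```

<p>Validating completeness before submission matters precisely because the early warning is sent under time pressure: a rejected or unusable notification wastes part of your 24-hour window.</p>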



<h3 class="wp-block-heading">The 72-Hour and 14-Day Progress Reports</h3>



<p>After that initial flare, your subsequent reports must provide progressively more detail as your investigation unfolds. To give you a clearer picture of how this works in practice, here is a quick summary of the timelines you&#8217;ll need to follow.</p>



<p>The table below outlines the different reporting stages and what is expected at each one.</p>



<h3 class="wp-block-heading">CRA Article 14 Reporting Timelines At A Glance</h3>



<figure class="wp-block-table"><table><tr>
<th align="left">Event Type</th>
<th align="left">Recipient</th>
<th align="left">Deadline</th>
<th align="left">Purpose</th>
</tr>
<tr>
<td align="left"><strong>Early Warning</strong></td>
<td align="left">National CSIRT &amp; ENISA</td>
<td align="left">Within <strong>24 Hours</strong></td>
<td align="left">Initial alert about an actively exploited vulnerability. Speed over detail.</td>
</tr>
<tr>
<td align="left"><strong>Vulnerability Notification</strong></td>
<td align="left">National CSIRT &amp; ENISA</td>
<td align="left">Within <strong>72 Hours</strong></td>
<td align="left">A more detailed update on the vulnerability, its potential impact, and any initial mitigation advice.</td>
</tr>
<tr>
<td align="left"><strong>Final Report</strong></td>
<td align="left">National CSIRT &amp; ENISA</td>
<td align="left">Within <strong>14 Days</strong> of a fix (or <strong>1 Month</strong> for incidents)</td>
<td align="left">A comprehensive report detailing the root cause, the fix, affected users, and preventative actions.</td>
</tr>
</table></figure>



<p>This structured flow of information is designed to keep authorities informed as you move from initial detection to full resolution.</p>



<p>Let’s walk through a practical example. Imagine your company, SmartHome Innovations, makes a popular smart thermostat. Your incident response team confirms a critical vulnerability is being exploited in the wild, allowing attackers to remotely control home heating systems. This is your trigger.</p>



<p>Here’s how the reporting journey would look:</p>



<ol class="wp-block-list">
<li><p><strong>Within 24 Hours:</strong> SmartHome Innovations submits its early warning to its national CSIRT and ENISA. The report is simple: it identifies the company and states that its &#8220;ThermoSmart Model X&#8221; has an actively exploited vulnerability.</p></li>



<li><p><strong>Within 72 Hours:</strong> The team follows up with a vulnerability notification. This report adds crucial context, detailing the nature of the flaw, its potential impact (unauthorised control of heating), and initial advice for users, like disconnecting the device from the internet until a patch is ready.</p></li>



<li><p><strong>Within 14 Days:</strong> After deploying a patch and finishing its analysis, the company submits a final, comprehensive report. This document includes a full root cause analysis, the technical details of the fix, an estimate of affected users, and what actions have been taken to prevent it from happening again.</p></li>
</ol>
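<p>The cascade of deadlines above is simple calendar arithmetic once you fix the trigger moments, which is worth encoding so nobody computes it by hand at 2 a.m. A minimal sketch, assuming the 14-day final report is counted from the date a corrective measure becomes available, as in the table above:</p>

```python
from datetime import datetime, timedelta, timezone

# Sketch of the Article 14 reporting cascade as hard deadlines computed
# from the trigger moments. The final report clock here is counted from
# the date a fix (corrective measure) becomes available.
def article14_deadlines(aware_at: datetime, fix_available_at: datetime) -> dict:
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "vulnerability_notification": aware_at + timedelta(hours=72),
        "final_report": fix_available_at + timedelta(days=14),
    }

aware = datetime(2026, 10, 1, 9, 0, tzinfo=timezone.utc)
fixed = datetime(2026, 10, 8, 9, 0, tzinfo=timezone.utc)
deadlines = article14_deadlines(aware, fixed)
```

<p>Wiring these computed timestamps into your ticketing system as due dates gives every incident an unambiguous countdown from the moment of awareness.</p>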



<p>Understanding these EU-wide reporting deadlines is non-negotiable. You can learn more about the official policy on the European Commission&#8217;s <a href="https://digital-strategy.ec.europa.eu/en/policies/cra-reporting">digital strategy site</a>. For a broader view, you might also be interested in our deep-dive into the <a href="https://goregulus.com/cra-compliance/cra-deadlines-2025-2027/">full CRA compliance timeline from 2025 to 2027</a>. This blueprint is your guide to turning a potential crisis into a controlled, compliant response.</p>



<h2 class="wp-block-heading">Clarifying Roles In Your Supply Chain</h2>



<p>Achieving compliance with the Cyber Resilience Act is a team sport, not a solo mission. Your responsibility as a manufacturer doesn&#8217;t exist in a vacuum; it’s deeply connected to every other link in your supply chain. While you hold the ultimate legal duty for <strong>CRA reporting obligations under Article 14</strong>, your partners are crucial for success.</p>



<p>Importers and distributors are your essential eyes and ears on the ground. The CRA formally obliges them to act as a vital communication channel. If they discover or are informed of a vulnerability in one of your products, they are legally required to notify you immediately.</p>



<p>This creates a mandatory information-sharing network designed to get critical threat intelligence to the one entity that can actually fix it: you.</p>



<h3 class="wp-block-heading">Tracing the Path of a Vulnerability Report</h3>



<p>Let&#8217;s walk through a practical example to make this crystal clear. Imagine you&#8217;re a German manufacturer of industrial machinery that uses connected sensors. One of your machines is sold through a French distributor.</p>



<p>Here’s how the information flow for a vulnerability report would work:</p>



<ol class="wp-block-list">
<li><strong>Discovery:</strong> An independent security researcher in France discovers a serious flaw in an open-source component used in your machine&#8217;s sensor software. They find a way to remotely access the sensor&#8217;s data.</li>



<li><strong>Initial Contact:</strong> The researcher responsibly discloses this finding to the French distributor who sold the machine, since they are the most visible local contact point.</li>



<li><strong>Distributor&#8217;s Duty:</strong> Under their own CRA obligations, the French distributor cannot sit on this information. They must immediately forward the entire report to you, the manufacturer.</li>



<li><strong>Manufacturer&#8217;s Responsibility:</strong> The moment you receive this report, your Product Security Incident Response Team (PSIRT) takes over. You analyse the vulnerability, confirm its severity, and determine if it&#8217;s being actively exploited.</li>



<li><strong>Official Reporting:</strong> If you confirm active exploitation, the final responsibility falls squarely on your shoulders. You must fulfil the official <strong>CRA reporting obligations under Article 14</strong> by sending the 24-hour early warning to ENISA and your national CSIRT, followed by the more detailed reports.</li>
</ol>



<p>This collaborative flow ensures that no matter where a vulnerability is first spotted, the report finds its way to the manufacturer who must take official action.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>This structure transforms your supply chain from a simple sales channel into a powerful compliance network. Each partner has a defined role in funnelling security information back to you, the manufacturer, who is ultimately accountable for official reporting.</p>
</blockquote>



<p>Understanding these interconnected duties is fundamental. For a deeper look into what this means for your specific role, you can learn more about the complete list of <a href="https://goregulus.com/cra-compliance/cra-manufacturer-obligations/">CRA manufacturer obligations</a> in our dedicated guide.</p>



<h3 class="wp-block-heading">Your Partners Are Your First Line of Defence</h3>



<p>This system highlights a crucial strategic point: you have to enable your partners to help you. Your importers and distributors need clear, simple channels to report issues they find.</p>



<p>If they don&#8217;t know how to reach you or the process is a bureaucratic nightmare, critical information will get lost. That puts you directly at risk of non-compliance.</p>



<p>For example, a distributor who receives a vulnerability report from a customer should be able to find your dedicated security contact email (<code>security@your-company.com</code>) directly on your website or in their partner documentation within minutes. If they have to spend hours searching, you lose precious time and risk a compliance failure.</p>



<p>By fostering this collaborative ecosystem, you not only meet legal requirements but also strengthen the overall security of your products on the market.</p>



<h2 class="wp-block-heading">Building Your Compliant Reporting Process Step By Step</h2>



<p>Turning legal text into a repeatable, day-to-day workflow is the heart of sustainable compliance. To meet the <strong>CRA reporting obligations under Article 14</strong>, you can&#8217;t just rely on good intentions. You need a structured, documented process that turns the potential chaos of incident response into a predictable, auditable system.</p>



<p>It all starts with clear ownership and a central hub for all vulnerability-related activities. This is precisely the job of a Product Security Incident Response Team (PSIRT). Whether you build a new team or empower an existing one, their mission is to manage every vulnerability from the moment it&#8217;s discovered until it&#8217;s fully remediated.</p>



<h3 class="wp-block-heading">Establish Your Incident Response Framework</h3>



<p>First things first: you need a formal incident response plan. Think of this as your playbook for when a potential security issue lands on your desk. It ensures everyone knows their role and what to do next, eliminating high-pressure guesswork.</p>



<p>Your plan should map out the entire process, from start to finish:</p>



<ul class="wp-block-list">
<li><strong>Intake:</strong> How do you actually receive vulnerability reports from inside and outside the company?</li>



<li><strong>Triage:</strong> What&#8217;s the process for validating a report and gauging its severity?</li>



<li><strong>Investigation:</strong> Who is responsible for the deep technical analysis of the vulnerability?</li>



<li><strong>Reporting:</strong> What are the exact steps for notifying ENISA and the relevant CSIRTs within that tight <strong>24-hour</strong> window?</li>



<li><strong>Remediation:</strong> How do you develop, test, and ship a security patch to your customers?</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>A solid incident response plan isn&#8217;t just a compliance checkbox; it&#8217;s a resilience-builder. It means that when a real threat emerges, your team can execute a coordinated, calm response instead of panicking. That protects your customers and, just as importantly, your brand&#8217;s reputation.</p>
</blockquote>



<p>A crucial, and often overlooked, part of credible CRA reporting is a strong focus on <a href="https://nanopim.com/post/managing-data-quality">managing data quality</a> for every piece of information involved. Every report, assessment, and remediation log must be accurate and consistent if it&#8217;s going to withstand scrutiny. This discipline ensures the information you submit to regulators is both credible and defensible.</p>



<h3 class="wp-block-heading">Set Up Secure and Clear Reporting Channels</h3>



<p>You can&#8217;t fix vulnerabilities you don&#8217;t know about. That&#8217;s why making it incredibly easy for security researchers, partners, and even customers to report issues is non-negotiable. This means setting up dedicated, secure channels that are simple to find and use.</p>



<p>For instance, a common best practice is to establish a well-publicised point of contact. This usually includes:</p>



<ol class="wp-block-list">
<li><strong>A Dedicated Email Address:</strong> A simple, memorable address like <code>security@yourcompany.com</code> is the industry standard. This inbox needs to be constantly monitored by your PSIRT.</li>



<li><strong>A Web Form:</strong> A structured form on your website can guide reporters to provide the essential information right away, like the affected product, version, and a technical description of the flaw.</li>
</ol>



<p>This infographic shows how information flows from researchers and distributors to you, the manufacturer. It really highlights why those clear intake channels are so important.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="585" src="https://goregulus.com/wp-content/uploads/2026/03/cra-reporting-obligations-article-14-supply-chain-roles-1-1024x585.jpg" alt="Flowchart illustrating CRA supply chain roles: researcher, distributor, and manufacturer with key responsibilities." class="wp-image-2084" srcset="https://goregulus.com/wp-content/uploads/2026/03/cra-reporting-obligations-article-14-supply-chain-roles-1-1024x585.jpg 1024w, https://goregulus.com/wp-content/uploads/2026/03/cra-reporting-obligations-article-14-supply-chain-roles-1-300x171.jpg 300w, https://goregulus.com/wp-content/uploads/2026/03/cra-reporting-obligations-article-14-supply-chain-roles-1-768x439.jpg 768w, https://goregulus.com/wp-content/uploads/2026/03/cra-reporting-obligations-article-14-supply-chain-roles-1.jpg 1312w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>As you can see, you are the central point for processing vulnerability information before it triggers an official <strong>CRA reporting obligation under Article 14</strong>.</p>



<h3 class="wp-block-heading">Integrate and Automate Your Workflow</h3>



<p>Once a report comes in, it must enter a documented workflow. Let&#8217;s be honest: tracking vulnerabilities in spreadsheets is a recipe for disaster, especially when you&#8217;re up against tight deadlines. Integrating your reporting channels with an internal ticketing system like Jira or ServiceNow is a total game-changer.</p>



<p>This integration instantly turns a simple email into a trackable, auditable record. For example, you can create an automation rule where any email sent to <code>security@yourcompany.com</code> automatically generates a new high-priority ticket in your PSIRT&#8217;s Jira project. This immediately assigns the issue, starts the clock on your internal SLAs, and establishes a single source of truth for the entire incident.</p>
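<p>The email-to-ticket rule described above can be sketched as a small transformation from an inbound message to a Jira create-issue payload. The project key <code>PSIRT</code>, labels, and priority name below are assumptions for illustration; the overall field layout follows Jira&#8217;s REST &#8220;create issue&#8221; API, but check your own instance&#8217;s configuration before relying on it.</p>

```python
import json

# Sketch: turn an inbound security@ email into a Jira create-issue payload.
# "PSIRT", the labels, and the priority name are illustrative assumptions.
def email_to_jira_payload(subject: str, body: str) -> dict:
    return {
        "fields": {
            "project": {"key": "PSIRT"},
            "issuetype": {"name": "Bug"},
            "priority": {"name": "Highest"},
            "labels": ["vulnerability-intake", "cra-article-14"],
            "summary": f"[Security intake] {subject}",
            "description": body,
        }
    }

payload = email_to_jira_payload(
    "Possible RCE in ThermoSmart Model X",
    "Reporter claims remote code execution via the pairing endpoint.",
)
# In a real integration this payload would be POSTed (with authentication)
# to the Jira instance's /rest/api/2/issue endpoint.
print(json.dumps(payload, indent=2))
```

<p>Keeping the payload construction separate from the HTTP call also makes the intake rule easy to test, which is exactly the kind of evidence an auditor likes to see.</p>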



<p>This isn&#8217;t just about making your response more efficient; it&#8217;s about building a crucial evidence trail. You&#8217;ll need this documentation to prove your due diligence, a topic we cover in more detail in our guide on creating CRA technical documentation: <a href="https://goregulus.com/cra-documentation/technical-documentation/">https://goregulus.com/cra-documentation/technical-documentation/</a></p>



<p>Don&#8217;t forget, these processes need to be ready soon. The reporting rules kick in on <strong>September 11, 2026</strong>, with full CRA implementation required by <strong>December 11, 2027</strong>.</p>



<h2 class="wp-block-heading">Mastering Vulnerability Documentation and Management</h2>



<p>Solid documentation isn&#8217;t a bureaucratic chore. It&#8217;s your best defence during a market surveillance audit and a crucial tool for meeting the tight deadlines of the <strong>CRA’s reporting obligations under Article 14</strong>. Good records turn the frantic <strong>24-hour</strong> reporting window from a mad dash into a manageable, well-documented process.</p>



<p>Without a robust documentation system, proving your due diligence is nearly impossible. When auditors ask how you handled a specific vulnerability, a vague &#8220;we think we fixed it&#8221; won&#8217;t cut it. You need a complete, timestamped audit trail that shows exactly what happened, when it happened, and why you made the decisions you did.</p>



<h3 class="wp-block-heading">Structuring Your Technical Documentation for Audits</h3>



<p>The CRA is specific about what your technical documentation needs to contain, laying it all out in Annex VII. A critical piece is a detailed account of your vulnerability handling procedures. This isn&#8217;t just a brief mention; it must be a thorough explanation of your end-to-end process.</p>



<p>As you build out your reporting process, it&#8217;s essential to follow established <a href="https://www.digiparser.com/blog/document-management-best-practices">document management best practices</a>. This ensures your records are consistent, accessible, and ready for an audit. Your technical file must explicitly include:</p>



<ul class="wp-block-list">
<li><strong>A Published Vulnerability Disclosure Policy (VDP):</strong> This is your public-facing document that tells security researchers how to report vulnerabilities to you securely. It builds trust and creates a clear intake channel.</li>



<li><strong>Internal Vulnerability Handling Procedures:</strong> This is the detailed playbook for your PSIRT, covering everything from the initial triage of a report to the final deployment of a patch.</li>



<li><strong>The Software Bill of Materials (SBOM):</strong> This is a complete inventory of every single component in your product, including all the open-source libraries. An SBOM is non-negotiable for quickly figuring out which products are affected when a new component vulnerability is discovered.</li>
</ul>



<p>To help you get organised, here is a checklist of the essential documentation you&#8217;ll need to have in place to satisfy the CRA&#8217;s requirements under Article 14.</p>



<h3 class="wp-block-heading">Essential Documentation For Article 14 Compliance</h3>



<figure class="wp-block-table"><table><tr>
<th align="left">Document/Policy</th>
<th align="left">Purpose</th>
<th align="left">Key Elements To Include</th>
</tr>
<tr>
<td align="left"><strong>Vulnerability Disclosure Policy (VDP)</strong></td>
<td align="left">Provides a public, secure channel for security researchers to report vulnerabilities.</td>
<td align="left">Contact details (security@ email, web form), scope of a &#039;safe harbour&#039; policy, what to report, expected response times.</td>
</tr>
<tr>
<td align="left"><strong>Internal Vulnerability Handling Procedure</strong></td>
<td align="left">Defines the end-to-end internal process for managing vulnerabilities from receipt to resolution.</td>
<td align="left">Roles and responsibilities (PSIRT), triage process, severity scoring criteria (e.g., CVSS), remediation timelines, internal/external communication plan.</td>
</tr>
<tr>
<td align="left"><strong>Software Bill of Materials (SBOM)</strong></td>
<td align="left">Creates a complete inventory of all software components, libraries, and dependencies in a product.</td>
<td align="left">Component name, version, supplier, licence information, unique identifiers (e.g., PURL, CPE).</td>
</tr>
<tr>
<td align="left"><strong>Internal Vulnerability Database</strong></td>
<td align="left">Acts as a single source of truth and an audit trail for all identified vulnerabilities.</td>
<td align="left">Unique vulnerability ID, reporter source, dates (reported, triaged, fixed), severity score, affected products/versions, remediation status, links to advisories.</td>
</tr>
<tr>
<td align="left"><strong>Security Advisory Templates</strong></td>
<td align="left">Standardises communication to customers about fixed vulnerabilities and available updates.</td>
<td align="left">Affected product(s), vulnerability description, severity, potential impact, fix/mitigation details, and where to get the update.</td>
</tr>
</table></figure>



<p>Having these documents defined, maintained, and accessible is the key to turning a reactive, chaotic process into a structured and defensible compliance strategy.</p>



<h3 class="wp-block-heading">Maintaining an Internal Vulnerability Database</h3>



<p>At the very heart of your documentation strategy must be an internal vulnerability database. This is your single source of truth for every security issue ever reported or discovered in your products. For any IoT vendor or software manufacturer, this isn&#8217;t just a nice-to-have; it&#8217;s the bedrock of effective product security lifecycle management.</p>



<p>This database—whether it&#8217;s a dedicated platform or a well-configured Jira project—needs to track each vulnerability’s entire journey from start to finish.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><strong>Key Insight:</strong> Your vulnerability database tells the story of your security efforts. It shows regulators exactly how you identify, assess, and resolve threats, providing concrete proof that you are meeting your <strong>CRA reporting obligations under Article 14</strong>.</p>
</blockquote>



<p>Let&#8217;s walk through a practical example. A security researcher reports a flaw in your smart lock&#8217;s firmware. Here’s how that vulnerability would move through your database:</p>



<ul class="wp-block-list">
<li><strong>Initial Entry:</strong> The report is logged with a unique ID, a timestamp, and the reporter&#8217;s details. Its status is immediately set to &#8220;New.&#8221;</li>



<li><strong>Triage &amp; Assessment:</strong> Your PSIRT gets to work and validates the report. They assign it a <strong>CVSS (Common Vulnerability Scoring System) score</strong> to quantify its severity. A flaw allowing a remote attacker to unlock the door would likely get a <strong>9.8 (Critical)</strong>.</li>



<li><strong>Remediation Plan:</strong> The ticket is assigned to the right engineering team. They outline a plan to fix the bug and estimate a timeline for developing, testing, and releasing a patch.</li>



<li><strong>Resolution:</strong> The patched firmware is released. The database entry is updated to &#8220;Resolved,&#8221; with direct links to the new firmware version and the security advisory you sent to customers.</li>
</ul>
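<p>The journey above is essentially a state machine with an append-only history, which is the property that makes the record auditable. A minimal sketch, with illustrative status names and fields rather than any prescribed CRA schema:</p>

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Allowed status transitions for a vulnerability record. The statuses and
# field names are illustrative, not a prescribed CRA schema.
ALLOWED = {
    "New": {"Triaged"},
    "Triaged": {"In Remediation"},
    "In Remediation": {"Resolved"},
    "Resolved": set(),
}

@dataclass
class VulnRecord:
    vuln_id: str
    reporter: str
    cvss: float = 0.0
    status: str = "New"
    history: list = field(default_factory=list)  # append-only audit trail

    def advance(self, new_status: str) -> None:
        if new_status not in ALLOWED[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.history.append((self.status, new_status, datetime.now(timezone.utc)))
        self.status = new_status

rec = VulnRecord("VULN-2026-001", "external-researcher", cvss=9.8)
rec.advance("Triaged")
rec.advance("In Remediation")
rec.advance("Resolved")
```

<p>Because every transition is timestamped and illegal jumps are rejected, the record itself becomes the evidence trail: an auditor can reconstruct who knew what, and when, from the history alone.</p>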



<p>This detailed, step-by-step record is precisely what makes the 24-hour reporting requirement achievable. When you learn an issue is being actively exploited, you aren&#8217;t starting from zero. You simply pull up the existing record, which already contains most of the information you need for your initial notification to ENISA. For a deeper dive into the whole process, our guide on <a href="https://goregulus.com/cra-requirements/cra-vulnerability-handling/">CRA vulnerability handling</a> breaks it down even further. By treating documentation as a strategic asset, you build both resilience and confidence in your compliance.</p>



<h2 class="wp-block-heading">Your Top Questions About Article 14 Answered</h2>



<p>Even after you get a handle on the rules, the practical side of the Cyber Resilience Act can throw up some tricky questions. The <strong>CRA reporting obligations under Article 14</strong>, in particular, create new pressures and responsibilities. It’s no surprise that managers and engineers are worried about how this plays out in the real world.</p>



<p>Let&#8217;s tackle some of the most common questions we hear, breaking down the nuances to help you build confidence in your compliance plan.</p>



<h3 class="wp-block-heading">What If a Vulnerability Is Discovered But Not Yet Exploited?</h3>



<p>This is one of the most important distinctions you’ll need to make, and getting it right is fundamental. The <strong>24-hour</strong> reporting clock under Article 14 does <em>not</em> start ticking the moment you find a new bug. The CRA is crystal clear on this: the urgent reporting obligation is triggered only for an <strong>actively exploited vulnerability</strong>.</p>



<p>This means you need credible evidence that attackers are actually using the flaw against systems in the wild. A theoretical weakness found by your internal security team or a bug reported by a researcher doesn’t automatically trigger a report to ENISA.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><strong>Key Insight:</strong> The threshold for official reporting is <em>active exploitation</em>, not just discovery. Until you have credible evidence of exploitation, your responsibility is to focus on your internal assessment, triage, and patching process.</p>
</blockquote>



<p>For instance, let’s say your QA team uncovers a serious SQL injection flaw in your cloud-connected industrial controller during routine testing. It’s a severe vulnerability, but as far as you know, it’s not being exploited.</p>



<p>At this point, your duties are to:</p>



<ul class="wp-block-list">
<li><strong>Assess the Risk:</strong> Use a framework like the Common Vulnerability Scoring System (CVSS) to figure out its severity. A flaw like this would almost certainly get a <strong>Critical</strong> score.</li>



<li><strong>Prioritise a Fix:</strong> Your engineering team needs to start working on a patch straight away.</li>



<li><strong>Monitor for Exploitation:</strong> Your security team should be actively hunting for any signs that this vulnerability is being used by attackers.</li>
</ul>
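<p>The severity assessment step can be made mechanical. The bands below follow the published CVSS v3.1 qualitative severity rating scale, which is why the SQL injection example would land in <strong>Critical</strong>:</p>

```python
# CVSS v3.1 qualitative severity bands (None / Low / Medium / High / Critical),
# as published by FIRST in the CVSS v3.1 specification.
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"
```

<p>A consistent, documented mapping like this is also useful audit evidence: it shows that your triage decisions follow a fixed rule rather than case-by-case judgement.</p>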



<p>Your own vulnerability management process takes priority here. It’s only if you later find evidence of active exploitation—like logs showing an attacker using that exact SQL injection to breach a customer&#8217;s network—that your <strong>CRA reporting obligations under Article 14</strong> would kick in.</p>



<h3 class="wp-block-heading">Does Reporting to ENISA Mean a Flaw Becomes Public Knowledge?</h3>



<p>This is a huge concern for manufacturers. The fear is that reporting a vulnerability will immediately spark negative press, customer panic, and a hit to your reputation. Fortunately, the CRA was designed with confidentiality measures to prevent exactly that.</p>



<p>When you send a report to ENISA and the national CSIRTs, it isn’t broadcast to the world. The information is handled within a trusted network of cybersecurity authorities. Their primary goal is to protect the single market, not to name and shame manufacturers.</p>



<p>The entire process is built around the principles of <strong>Coordinated Vulnerability Disclosure (CVD)</strong>. In simple terms, this means the authorities work <em>with</em> you to manage how and when information is released.</p>



<p>Let’s walk through a practical example. Imagine you manufacture a point-of-sale (POS) terminal and you report an actively exploited vulnerability that could leak transaction data.</p>



<ul class="wp-block-list">
<li>The initial report you send to ENISA is confidential.</li>



<li>ENISA and the CSIRTs use this information to assess the risk across the entire market. They might, for example, quietly warn financial institutions to monitor for suspicious activity without ever naming your product.</li>



<li>This buys you a critical window to develop and roll out a security patch.</li>



<li>The vulnerability is typically made public only after a fix is available and you&#8217;re ready to inform your customers, often through a joint advisory.</li>
</ul>



<p>This collaborative model protects the market from the immediate threat while giving you the time to fix the problem responsibly. It prevents a premature disclosure that would only help attackers.</p>



<h3 class="wp-block-heading">Who Is Responsible for Reporting Flaws in Open-Source Software?</h3>



<p>The use of open-source software (OSS) is everywhere, but it’s a common source of confusion when it comes to accountability. The CRA is unambiguous here: the ultimate responsibility for the security of the final product lies with the <strong>manufacturer</strong>.</p>



<p>If your product uses an open-source component with a vulnerability, you are responsible for it as if it were your own code. You can’t just point the finger at the open-source project and consider your job done.</p>



<p>This is where having a complete <strong>Software Bill of Materials (SBOM)</strong> becomes an absolute must. When a vulnerability like Log4Shell hits the headlines, your SBOM should allow you to instantly see if your products are affected.</p>
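<p>That &#8220;instantly see if your products are affected&#8221; step is just a lookup across your per-product component inventories. A minimal sketch, with made-up product names and a simplified component-to-version map standing in for real SBOM data:</p>

```python
# Sketch: per-product SBOMs reduced to {component name: version}. The
# product names and versions here are made up for illustration.
SBOMS = {
    "SmartTV-A10": {"log4j-core": "2.14.1", "openssl": "3.0.13"},
    "SmartTV-B20": {"log4j-core": "2.17.1", "openssl": "3.0.13"},
}

def affected_products(component: str, vulnerable_versions: set) -> list:
    """List products shipping a vulnerable version of the given component."""
    return sorted(
        product for product, components in SBOMS.items()
        if components.get(component) in vulnerable_versions
    )

# Which products ship a Log4Shell-era log4j-core build?
hits = affected_products("log4j-core", {"2.14.0", "2.14.1", "2.15.0"})
```

<p>Real SBOM formats such as CycloneDX or SPDX carry richer identifiers (PURL, CPE) that make this matching more precise, but the principle is the same: without the inventory, you cannot answer the question at all.</p>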



<p>Here are the practical steps you have to take:</p>



<ol class="wp-block-list">
<li><strong>Track and Monitor:</strong> Continuously watch all OSS components in your products for newly disclosed vulnerabilities.</li>



<li><strong>Report Upstream:</strong> If you discover a brand-new flaw in an OSS component, you have a duty to report it to the project&#8217;s maintainers so they can fix it for the entire community.</li>



<li><strong>Manage and Mitigate:</strong> You must assess how the vulnerability impacts your specific product. If it&#8217;s being actively exploited, you have to fulfil your own <strong>CRA reporting obligations under Article 14</strong>.</li>



<li><strong>Patch Your Product:</strong> You are responsible for integrating the fixed OSS component into your product and shipping an update to your customers.</li>
</ol>



<p>For example, if your smart TV uses a vulnerable open-source media library and attackers are actively exploiting it, it is you—the TV manufacturer—who must submit the 24-hour report to ENISA. You also have to release a firmware update with the patched library. Simply waiting for the OSS project to act is not an option.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Navigating the complexities of the Cyber Resilience Act can be daunting, but you don&#8217;t have to do it alone. <strong>Regulus</strong> provides a unified software platform that simplifies every step of the CRA compliance process. From applicability assessments and requirements mapping to documentation templates and vulnerability management guidance, Regulus turns regulatory hurdles into a clear, actionable plan. Gain clarity and confidence on your path to EU market readiness by exploring our solution at <a href="https://goregulus.com">https://goregulus.com</a>.</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-reporting-obligations-article-14/">A Guide to CRA Reporting Obligations Article 14</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How to Build a CRA Compliance Evidence Pack</title>
		<link>https://goregulus.com/cra-basics/cra-compliance-evidence-pack/</link>
		
		<dc:creator><![CDATA[Igor Smith]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 07:29:20 +0000</pubDate>
				<category><![CDATA[CRA Basics]]></category>
		<category><![CDATA[CRA compliance evidence pack]]></category>
		<category><![CDATA[Cyber Resilience Act]]></category>
		<category><![CDATA[EU CRA]]></category>
		<category><![CDATA[technical documentation]]></category>
		<category><![CDATA[Vulnerability Management]]></category>
		<guid isPermaLink="false">https://goregulus.com/uncategorized/cra-compliance-evidence-pack/</guid>

					<description><![CDATA[<p>A CRA compliance evidence pack is the collection of documents and records you’ll use to prove your product meets the EU&#8217;s Cyber Resilience Act security standards. Think of it as the complete technical file that validates your CE marking, containing everything from risk assessments to vulnerability logs. It&#8217;s the official proof of your due diligence [&#8230;]</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-compliance-evidence-pack/">How to Build a CRA Compliance Evidence Pack</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>A <strong>CRA compliance evidence pack</strong> is the collection of documents and records you’ll use to prove your product meets the EU&#8217;s Cyber Resilience Act security standards. Think of it as the complete technical file that validates your CE marking, containing everything from risk assessments to vulnerability logs. It&#8217;s the official proof of your due diligence for market surveillance authorities.</p>



<h2 class="wp-block-heading">What Is a CRA Compliance Evidence Pack</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-compliance-evidence-pack-compliance-elements.jpg" alt="Diagram showing CRA Compliance Evidence Pack, including risk assessment, SBOM, firmware updates, and log."/></figure>



<p>The Cyber Resilience Act requires that any company placing products on the EU market must be able to demonstrate end-to-end security. Your <strong>CRA compliance evidence pack</strong> is how you do it. This isn&#8217;t just a single document; it&#8217;s a living dossier that tells the entire story of your product&#8217;s security journey.</p>



<p>This collection of proof shows you&#8217;ve woven security into every stage of the product lifecycle, from initial design concepts right through to post-market support and vulnerability management. When an auditor from a market surveillance authority comes knocking, this dossier is your first and most important line of defence.</p>



<h3 class="wp-block-heading">Core Components of the Evidence Pack</h3>



<p>A solid evidence pack is far more than a few ticked boxes. It&#8217;s an organised body of tangible proof. While the exact contents will differ from one product to the next, every pack should be built on a few core pillars.</p>



<ul class="wp-block-list">
<li><strong>Cybersecurity Risk Assessment:</strong> This is your foundation. It&#8217;s where you document all identified threats, analyse their potential impact, and detail the mitigation strategies you put in place. For a smart lock, a threat might be a brute-force attack on the entry PIN. The mitigation strategy documented here would be implementing an account lockout mechanism after three failed attempts.</li>



<li><strong>Secure Design &amp; Development Evidence:</strong> This includes all the documentation proving that security was a non-negotiable part of your development process from day one. This could be meeting minutes from a design review where security features were mandated or a link to a secure coding standard in your internal wiki that developers must follow.</li>



<li><strong>Software Bill of Materials (SBOM):</strong> An essential inventory of every software component, including open-source and third-party libraries, that is absolutely critical for effective vulnerability management.</li>



<li><strong>Vulnerability Handling Records:</strong> This section must demonstrate a clear, repeatable process for receiving, evaluating, and fixing vulnerabilities discovered after your product hits the market.</li>



<li><strong>Conformity Assessment Documentation:</strong> This covers your test reports, internal audit results, or any certificates from notified bodies that verify compliance with CRA requirements.</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>A common mistake is to treat the evidence pack as a one-and-done project. The CRA requires it to be a &#8220;living&#8221; file, continuously updated throughout the product&#8217;s entire support period to reflect new patches, newly found vulnerabilities, and any design changes.</p>
</blockquote>



<p>Getting these materials organised is a major challenge in itself, which is why a proper file structure is a critical first step. For more on this, you might be interested in our detailed guide on the <a href="https://goregulus.com/cra-documentation/cra-technical-file-structure/">CRA technical file structure</a>.</p>



<h2 class="wp-block-heading">Gathering Your Essential Documents and Artifacts</h2>



<p>Putting together your <strong>CRA compliance evidence pack</strong> is where the rubber meets the road. It’s not about ticking boxes; it&#8217;s about building a coherent, auditable story that proves your product’s security from the ground up. Think of it as assembling a legal case file—every document, log, and report is a piece of evidence.</p>



<p>Your goal is to have everything organised and traceable, ready to withstand the scrutiny of market surveillance authorities. A messy collection of files can fail an audit just as quickly as a missing risk assessment.</p>



<h3 class="wp-block-heading">The Non-Negotiable Items for Your Technical File</h3>



<p>Your technical file is the heart of the evidence pack. It holds the foundational artefacts that prove you&#8217;ve done your due diligence right from the start of the product&#8217;s life. Without these, your compliance claims are just that—claims, not verifiable facts.</p>



<p>Two documents are absolutely critical and non-negotiable: your cybersecurity risk assessment and the Software Bill of Materials (SBOM).</p>



<ul class="wp-block-list">
<li><strong>Cybersecurity Risk Assessment:</strong> This is your starting point. For a smart baby monitor, a documented threat would be unauthorised access to the video stream. The assessment must then detail the mitigation, such as end-to-end encryption and a mandatory strong password policy for user accounts.</li>



<li><strong>Software Bill of Materials (SBOM):</strong> You&#8217;ll need a complete, machine-readable inventory of every single software component. For example, your SBOM for a smart TV would list its operating system (e.g., <code>Linux kernel 5.15.6</code>), the video streaming library (<code>FFmpeg 4.4</code>), and even the small library used for parsing JSON data (<code>cJSON 1.7.15</code>).</li>
</ul>
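
<p>To make &#8220;machine-readable&#8221; concrete, here is a minimal sketch of a CycloneDX-style SBOM for the smart TV example, built with Python dictionaries. The component entries and PURLs are illustrative; a real SBOM would be generated by tooling and carries far more detail per component.</p>

<pre class="wp-block-code"><code>import json

# Minimal CycloneDX-style component entries for the smart-TV example above.
# A sketch only: real SBOMs come from tooling and include many more fields.
components = [
    {"type": "operating-system", "name": "linux-kernel", "version": "5.15.6"},
    {"type": "library", "name": "ffmpeg", "version": "4.4",
     "purl": "pkg:generic/ffmpeg@4.4"},
    {"type": "library", "name": "cjson", "version": "1.7.15",
     "purl": "pkg:github/DaveGamble/cJSON@1.7.15"},
]

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": components,
}

print(json.dumps(sbom, indent=2))</code></pre>

<p>Even this skeleton is enough for a vulnerability scanner to match component names and versions against published CVEs.</p>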



<p>Efficiently compiling these records is a huge part of the challenge. A practical guide on how to <a href="https://docparsemagic.com/blog/extract-data-from-documents">extract data from documents</a> can be a lifesaver here, especially when dealing with complex component lists.</p>



<h3 class="wp-block-heading">What Detailed Test Reports Should Include</h3>



<p>Your test reports are the proof that your security measures actually work as designed. Vague, high-level summaries won’t cut it. To be credible, your reports need to connect the dots with granular detail, linking your security claims to real-world verification.</p>



<p>Take a smart thermostat, for example. A report that just says &#8220;encryption tested&#8221; is useless. A <em>compliant</em> report would include:</p>



<ul class="wp-block-list">
<li>The specific penetration testing tools used (like Wireshark or Nmap).</li>



<li>Logs that show both successful and failed attempts to intercept data traffic.</li>



<li>Confirmation that only strong, current encryption protocols such as <strong>TLS 1.3</strong> are enabled.</li>



<li>Hard evidence that weaker, outdated protocols (like SSLv3 or TLS 1.0) are actively disabled. For example, a screenshot or log output from a tool like <code>nmap --script ssl-enum-ciphers -p 443 &lt;device-ip></code> showing the rejection of weak ciphers.</li>
</ul>
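
<p>If you run this check regularly, a small script can scan the saved scan output for weak protocol sections instead of relying on manual review. The sketch below assumes the usual text layout of the <code>ssl-enum-ciphers</code> script; the sample output is illustrative, not captured from a real device.</p>

<pre class="wp-block-code"><code># Check (simplified) `nmap --script ssl-enum-ciphers` text output for
# weak protocol sections. The layout assumed here is illustrative.
WEAK_PROTOCOLS = ("SSLv2", "SSLv3", "TLSv1.0", "TLSv1.1")

def weak_protocols_offered(nmap_output):
    """Return weak protocol names that appear as offered section headers."""
    offered = []
    for raw in nmap_output.splitlines():
        line = raw.lstrip("|_ ")  # drop nmap's script-output prefix
        for proto in WEAK_PROTOCOLS:
            if line.startswith(proto + ":"):
                offered.append(proto)
    return offered

sample = """443/tcp open  https
| ssl-enum-ciphers:
|   TLSv1.3:
|     ciphers:
|       TLS_AKE_WITH_AES_256_GCM_SHA384 - A
|_  least strength: A"""

print(weak_protocols_offered(sample))  # [] means no weak protocol was offered</code></pre>

<p>Archiving the raw scan output alongside a pass/fail result like this gives an auditor both the evidence and the criterion it was judged against.</p>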



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Remember, the point of a test report is to give an auditor a clear, undeniable trail from a stated security requirement (e.g., &#8220;all data in transit must be encrypted&#8221;) to the objective evidence proving it’s been implemented effectively.</p>
</blockquote>



<p>Evidence packs are foundational for proving compliance across all <strong>27</strong> EU member states. Regulation (EU) 2024/2847 requires a technical file with product descriptions, risk assessments, test results, and vulnerability handling records, all retained for <strong>10 years</strong> post-market. Our data from 2025 audits in Spain showed that a staggering <strong>85% of IoT devices</strong> contained third-party open-source components requiring a detailed SBOM, which dramatically increases the evidence-gathering workload.</p>



<h3 class="wp-block-heading">Internal Audits and Self-Assessment Evidence</h3>



<p>For products in the default risk class, the CRA allows for self-assessment. But don&#8217;t mistake &#8220;self-assessment&#8221; for an easy way out. You still need to produce robust internal audit evidence showing your conformity assessment was objective and thorough.</p>



<p>This means creating an &#8220;internal audit&#8221; file that essentially simulates what an external auditor would demand to see.</p>



<p><strong>Key Evidence for Self-Assessment:</strong></p>



<ol class="wp-block-list">
<li><strong>Conformity Checklist:</strong> A detailed checklist that maps every single applicable requirement from Annex I of the CRA to the specific evidence you’ve gathered to prove it.</li>



<li><strong>Review Records:</strong> Documentation of internal review meetings, noting who attended, what was found, and what actions were taken to fix any identified gaps. For example, minutes from a review stating, &#8220;Gap found in session timeout policy. Action: JIRA-5432 assigned to dev team to implement 15-minute inactivity logout.&#8221;</li>



<li><strong>Signed Declaration:</strong> A formal, signed statement from a responsible party in your organisation, confirming that the product meets all requirements based on the compiled evidence.</li>
</ol>



<p>Going back to our smart thermostat, this would involve internally verifying that security-by-default principles were followed, like forcing a password change on first use. Your evidence would be a test script and a screenshot showing the mandatory password prompt in action.</p>



<p>Our guide on <a href="https://goregulus.com/cra-documentation/technical-documentation/">CRA technical documentation</a> offers more practical insights into structuring these crucial files. This methodical approach ensures you have every piece of the puzzle ready to satisfy authorities and prove your due diligence.</p>



<p>Once you’ve gathered all your documents, the real work begins. A <strong>CRA compliance evidence pack</strong> isn’t just a folder stuffed with files; it&#8217;s a carefully constructed argument that proves your compliance. The next step is to map every single piece of evidence to the specific legal requirements of the Cyber Resilience Act.</p>



<p>Think of it this way: you need to create a clear, logical trail for an auditor. When they pick up a requirement from the CRA, they should be able to follow that trail directly to the document—or documents—that prove you’ve met it. Without this mapping, your evidence pack is just a disorganised collection of files that won’t convince any market surveillance authority.</p>



<p>This process transforms your documentation from a simple checklist into a compelling, auditable narrative.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-compliance-evidence-pack-compliance-process.jpg" alt="Flowchart illustrating the CRA Evidence Pack process, connecting risk assessment, test reports, and SBOM."/></figure>



<p>As you can see, compliance is a dynamic process where artifacts like risk assessments, SBOMs, and test reports all inform and validate one another. It’s not about static documents.</p>



<h3 class="wp-block-heading">Building Your Compliance Matrix</h3>



<p>The most effective way to manage this is with a compliance matrix. In my experience, a detailed spreadsheet is the perfect tool for this, though some teams use dedicated compliance software. This matrix will become the central index for your entire evidence pack.</p>



<p>Your matrix should, at a minimum, include columns for:</p>



<ul class="wp-block-list">
<li><strong>CRA Requirement:</strong> The specific article or point from Annex I (Essential Cybersecurity Requirements) or Annex II (Vulnerability Handling).</li>



<li><strong>Requirement Description:</strong> A short, plain-English summary of what the obligation actually means for your product.</li>



<li><strong>Evidence Artefact:</strong> The exact filename of the document or record that proves compliance (e.g., &#8220;Cybersecurity Risk Assessment v1.2.pdf&#8221;).</li>



<li><strong>Location/Link:</strong> A direct link or clear file path to where the evidence is stored. This is non-negotiable for audit readiness.</li>



<li><strong>Notes:</strong> Any extra context an auditor might need, like a specific page number, section heading, or table within a larger document. For example: <code>See Section 4.2, Table 3 for port scan results.</code></li>
</ul>
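
<p>The matrix is easy to keep honest with a small script. This sketch models each row with the columns just described (the requirement IDs and filenames are illustrative) and flags any requirement that has no linked evidence yet:</p>

<pre class="wp-block-code"><code># Each row mirrors the matrix columns described above.
# Requirement IDs and filenames are illustrative.
matrix = [
    {"requirement": "Annex I, 1(a)",
     "description": "Secure-by-default configuration",
     "artefact": "Internal Secure Configuration Guide v2.0.pdf",
     "location": "/evidence/design/"},
    {"requirement": "Annex II, 2",
     "description": "Regular security testing",
     "artefact": "",  # gap: no evidence linked yet
     "location": ""},
]

def find_gaps(rows):
    """Return requirement IDs whose evidence artefact or location is missing."""
    return [r["requirement"] for r in rows
            if not (r["artefact"] and r["location"])]

print(find_gaps(matrix))  # ['Annex II, 2']</code></pre>

<p>Running a check like this before every release turns the matrix into an early-warning system rather than a static index.</p>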



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Don&#8217;t treat this matrix as just an administrative chore. This process is effectively your first internal audit. It forces you to critically review your evidence, spot the gaps, and see where your documentation is weak or missing entirely.</p>
</blockquote>



<h3 class="wp-block-heading">Mapping Evidence to Annex I Security Requirements</h3>



<p>Annex I of the CRA details the essential security requirements products must meet by design. Your mapping needs to show precisely how your development practices and technical controls fulfil these obligations.</p>



<p>Let’s take a common example: the requirement for a secure-by-default configuration. Your matrix would need to connect this to several different pieces of evidence to build a strong case.</p>



<p><strong>Practical Example: Secure-by-Default Configuration</strong></p>



<ul class="wp-block-list">
<li><strong>CRA Requirement:</strong> Annex I, Section 1(a) &#8211; &#8220;products shall be made available on the market with a secure by default configuration.&#8221;</li>



<li><strong>Evidence Artefact 1:</strong> &#8220;Internal Secure Configuration Guide v2.0&#8221;—This shows you have a defined standard.</li>



<li><strong>Evidence Artefact 2:</strong> &#8220;Penetration Test Report &#8211; Q3 2026&#8221;—Point to the specific section that verifies non-essential ports are closed by default.</li>



<li><strong>Evidence Artefact 3:</strong> A screenshot from your product’s setup screen that forces the user to change the default password on first use.</li>
</ul>



<p>Here, each piece of evidence reinforces the others, creating a much more robust and verifiable claim than a single document ever could.</p>



<h3 class="wp-block-heading">Linking Artefacts to Annex II Vulnerability Handling</h3>



<p>Annex II is all about your <em>process</em>. It focuses on how your organisation handles vulnerabilities after the product is on the market. You&#8217;re not just proving the product was secure at a single point in time, but that you have a robust, ongoing process for keeping it secure.</p>



<p>For instance, Annex II requires manufacturers to have procedures for regular security testing.</p>



<p><strong>Practical Example: Regular Security Testing</strong></p>



<ul class="wp-block-list">
<li><strong>CRA Requirement:</strong> Annex II, Point 2 &#8211; &#8220;The manufacturer shall have procedures in place to regularly test and verify the security of the product.&#8221;</li>



<li><strong>Evidence Artefact 1:</strong> Your internal policy document, like &#8220;Quarterly Security Testing &amp; Review Process.&#8221;</li>



<li><strong>Evidence Artefact 2:</strong> A log or report from your automated tooling showing a record of scheduled vulnerability scans over the last <strong>12 months</strong>.</li>



<li><strong>Evidence Artefact 3:</strong> A firmware update log showing that an update (e.g., <code>FW v1.4.2</code>) was released to patch a vulnerability discovered during a scheduled test.</li>
</ul>



<p>This combination is powerful because it shows an auditor the complete cycle: you have a policy, you execute on it (the scans), and you take action on the findings (the firmware update). It proves your process actually works in the real world. This kind of structured mapping is the absolute backbone of a successful <strong>CRA compliance evidence pack</strong>.</p>



<h2 class="wp-block-heading">Mastering Vulnerability and Update Records</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-compliance-evidence-pack-vulnerability-management.jpg" alt="Timeline for vulnerability management from discovery to patch release, including ENISA and follow-up steps."/></figure>



<p>Under the Cyber Resilience Act, your entire vulnerability response process is going under the microscope. The regulation demands impeccable record-keeping and rapid, structured reporting. This is exactly where your <strong>CRA compliance evidence pack</strong> becomes an active operational tool, not just some static file you pull out for an audit.</p>



<p>Let&#8217;s walk through a real-world scenario. Imagine an actively exploited, critical vulnerability is discovered in an open-source library. It turns out your flagship IoT camera uses this library. The clock starts ticking immediately.</p>



<h3 class="wp-block-heading">The 24-Hour ENISA Notification Window</h3>



<p>Your first, most urgent priority is the <strong>24-hour</strong> initial notification to ENISA. The evidence for this report has to be assembled fast, but it must be accurate. You don&#8217;t need all the answers yet, but you do need to prove you have a process to find them.</p>



<p>At this stage, you need to capture a few key pieces of evidence:</p>



<ul class="wp-block-list">
<li><strong>Initial Discovery Logs:</strong> A timestamped alert from your dependency scanning tool (e.g., Snyk, Dependabot) flagging a new critical CVE in a library used by your product.</li>



<li><strong>Internal Triage Communications:</strong> A screenshot of the Slack channel or a copy of the Jira ticket (<code>VULN-1234</code>) created by your PSIRT (Product Security Incident Response Team) to begin the investigation, including initial comments and assignments.</li>



<li><strong>Preliminary Technical Analysis:</strong> An engineer&#8217;s notes confirming the vulnerable library is present in the production firmware. For example: &#8220;Confirmed <code>lib_xyz.so</code> version 1.2.3 is present in firmware build #5678, which is vulnerable to CVE-2027-XXXX.&#8221;</li>
</ul>



<p>This initial evidence set is your proof of a responsible first response. It shows you’ve identified a potential threat and have already kicked off a structured investigation.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The goal of the 24-hour report isn&#8217;t to present a complete solution. It&#8217;s to inform authorities that a potentially serious, actively exploited vulnerability exists in your product and that you are actively managing it. Your evidence pack must capture the <em>start</em> of this process with precision.</p>
</blockquote>



<h3 class="wp-block-heading">The 72-Hour Follow-Up and Final Report</h3>



<p>As your investigation deepens and your remediation plan takes shape, the next reporting phases demand more substantial evidence. Your evidence pack needs to grow to include detailed records of your response activities.</p>



<p>The CRA imposes a strict reporting timeline for actively exploited vulnerabilities. The table below summarises the key deadlines you&#8217;ll need to meet.</p>



<figure class="wp-block-table"><table><thead><tr><th>Event</th><th>Deadline</th><th>Required Action and Evidence</th></tr></thead><tbody><tr><td>Initial Alert</td><td><strong>Within 24 hours</strong> of awareness</td><td>Submit an early warning to ENISA. Evidence includes discovery logs, internal triage records, and preliminary analysis notes.</td></tr><tr><td>Main Notification</td><td><strong>Within 72 hours</strong> of awareness</td><td>Provide a more detailed report covering severity, affected products, and initial mitigation advice. Evidence includes CVSS scoring and draft user guidance.</td></tr><tr><td>Final Report</td><td><strong>Within 14 days</strong> of patch availability</td><td>Submit a final report with details on the fix. Evidence should include patch commits, QA test results, and the published security advisory.</td></tr></tbody></table><figcaption class="wp-element-caption">CRA Vulnerability Reporting Deadlines</figcaption></figure>
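
<p>The three deadlines can be computed mechanically from two timestamps. A minimal sketch, assuming the 24-hour and 72-hour clocks run from the moment of awareness and the 14-day clock from patch availability:</p>

<pre class="wp-block-code"><code>from datetime import datetime, timedelta

def reporting_deadlines(aware_at, patch_available_at):
    """CRA reporting deadlines: the 24h and 72h clocks run from awareness
    of the actively exploited vulnerability; the final report is due
    14 days after a patch becomes available."""
    return {
        "initial_alert": aware_at + timedelta(hours=24),
        "main_notification": aware_at + timedelta(hours=72),
        "final_report": patch_available_at + timedelta(days=14),
    }

# Illustrative timestamps only.
aware = datetime(2027, 3, 1, 9, 0)
patched = datetime(2027, 3, 10, 12, 0)
d = reporting_deadlines(aware, patched)
print(d["initial_alert"])      # 2027-03-02 09:00:00
print(d["main_notification"])  # 2027-03-04 09:00:00
print(d["final_report"])       # 2027-03-24 12:00:00</code></pre>

<p>Wiring a calculation like this into your incident tooling means nobody has to do deadline arithmetic during an active incident.</p>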



<p>Meeting these deadlines is non-negotiable, especially with fines for non-compliance reaching up to <strong>€15 million</strong> or <strong>2.5%</strong> of global turnover.</p>



<p>For the <strong>72-hour report</strong>, your evidence should now include:</p>



<ul class="wp-block-list">
<li><strong>Severity Assessment:</strong> A completed CVSS scoring worksheet resulting in a score of 9.8 (Critical), with the vector string documented (e.g., <code>CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H</code>).</li>



<li><strong>Mitigation Guidance:</strong> A draft of the security advisory email to users advising them to temporarily disable a specific feature until a patch is available.</li>



<li><strong>Remediation Plan:</strong> Internal documents that outline your team&#8217;s plan to develop, test, and release a security patch, such as a project plan in Confluence or a detailed epic in Jira.</li>
</ul>
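
<p>A vector string like the one above can be broken into its metrics programmatically, which helps keep severity records consistent across incidents. This sketch only parses the vector; computing the numeric score is a separate exercise defined by the CVSS specification.</p>

<pre class="wp-block-code"><code>def parse_cvss_vector(vector):
    """Split a CVSS v3.x vector string into its metric/value pairs."""
    prefix, _, metrics = vector.partition("/")
    assert prefix.startswith("CVSS:"), "not a CVSS vector"
    return dict(part.split(":", 1) for part in metrics.split("/"))

m = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(m["AV"], m["C"], m["I"])  # N H H</code></pre>

<p>Storing the parsed metrics alongside the score makes it trivial to answer an auditor&#8217;s question like &#8220;how many network-exploitable criticals did you handle last year?&#8221;</p>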



<p>The final report is your opportunity to close the loop. A huge part of this is <a href="https://www.msppentesting.com/blog-posts/remediation-of-vulnerabilities">mastering vulnerability remediation</a> to ensure every weakness is fully addressed. Your evidence here must be comprehensive, including patch development commits, quality assurance test results, and the final security advisory published to your users.</p>



<p>For a deeper dive into these processes, check out our guide on <a href="https://goregulus.com/cra-requirements/cra-vulnerability-handling/">https://goregulus.com/cra-requirements/cra-vulnerability-handling/</a>.</p>



<h3 class="wp-block-heading">Documenting Security Updates and Support</h3>



<p>The CRA&#8217;s focus isn&#8217;t just on individual incidents; it extends to your entire update process. You must be able to prove you have a mature, repeatable system for delivering security updates throughout the product&#8217;s defined support period.</p>



<p>Your evidence pack should contain a complete, running log of all security updates. For every single update, you need to document the release date, the vulnerabilities it addresses (with their CVE identifiers), and a direct link to the published release notes. For example, an entry in this log could read: &#8220;Firmware v2.5.1, released 2027-10-15. Fixes CVE-2027-1234 and CVE-2027-5678. Release notes: [link].&#8221; This log is the definitive proof of your commitment to ongoing product security and a true cornerstone of your <strong>CRA compliance evidence pack</strong>.</p>
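
<p>Entries in this log are easy to validate automatically. A sketch that pulls the CVE identifiers out of the example entry with a regular expression, so a missing or malformed identifier is caught before the log reaches an auditor:</p>

<pre class="wp-block-code"><code>import re

# CVE IDs are "CVE-", a four-digit year, and a sequence of four or more digits.
CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}")

entry = ("Firmware v2.5.1, released 2027-10-15. "
         "Fixes CVE-2027-1234 and CVE-2027-5678.")

cves = CVE_PATTERN.findall(entry)
print(cves)  # ['CVE-2027-1234', 'CVE-2027-5678']</code></pre>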



<h2 class="wp-block-heading">Your CRA Compliance Roadmap for 2026–2027</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-compliance-evidence-pack-project-timeline.jpg" alt="A three-phase project timeline: Foundational (magnifying glass), Implementation (tools), and Finalize (checklist and arrow)."/></figure>



<p>With the Cyber Resilience Act&#8217;s deadlines approaching, just knowing the rules isn&#8217;t enough. Your organisation needs a concrete, strategic plan to turn that knowledge into action. Breaking the compliance journey down into manageable phases makes the whole project feel less daunting and helps ensure your <strong>CRA compliance evidence pack</strong> is ready when it needs to be.</p>



<p>This roadmap lays out a phased approach, helping you prioritise tasks and allocate resources effectively as we head toward the key enforcement dates in 2026 and 2027.</p>



<h3 class="wp-block-heading">Phase 1: Foundational Work (Now to Mid-2026)</h3>



<p>This initial phase is all about discovery and planning. If you rush into implementation without a solid foundation, you’re just setting yourself up for wasted effort and critical gaps in your evidence. The main goal here is to figure out exactly where you stand and what needs to be done.</p>



<p>First, conduct a thorough <strong>applicability assessment</strong>. Not every product will fall under the CRA, so confirming which of your offerings are actually in scope is the only logical place to start. A purely analogue device, for instance, has no digital elements, but a smart toaster with a Wi-Fi connection is clearly in scope.</p>



<p>Once you know which products are affected, it’s time for <strong>product classification</strong>. You have to determine if your product is in the default class or a more critical category, as defined in Annex III of the CRA. For example, a network router or a firewall would likely fall into a critical class, requiring third-party assessment. A connected coffeemaker would be in the default class, allowing for self-assessment. This decision has huge implications for your budget and timeline.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>This is the perfect time to run a comprehensive gap analysis. Compare your current security practices, documentation processes, and vulnerability handling workflows against the specific requirements of the CRA. The output should be a clear report detailing what you have versus what you need.</p>
</blockquote>



<h3 class="wp-block-heading">Phase 2: Implementation and Assembly (Mid-2026 to Early 2027)</h3>



<p>With your foundational analysis complete, you can shift from planning to doing. This phase is all about building the core components of your compliance programme and starting to assemble the tangible evidence. A critical deadline to keep an eye on is <strong>September 2026</strong>, when the mandatory vulnerability reporting obligations kick in.</p>



<p>Your absolute top priority should be to establish and operationalise your vulnerability reporting workflow. This includes setting up your coordinated disclosure policy and making sure your team is ready to meet the <strong>24-hour</strong> ENISA notification window for actively exploited vulnerabilities.</p>



<ul class="wp-block-list">
<li><strong>Assemble Your Evidence Pack:</strong> Begin creating the central repository for your <strong>CRA compliance evidence pack</strong>. This is when you start gathering all the documents identified during your gap analysis, like existing risk assessments and design documents.</li>



<li><strong>Generate Initial SBOMs:</strong> Start creating Software Bills of Materials (SBOMs) for your in-scope products. This process can be surprisingly time-consuming, so starting early is the only way to identify all dependencies accurately.</li>



<li><strong>Refine Internal Processes:</strong> Based on your gap analysis, start refining your secure development lifecycle (SDL) and other internal processes to align them with CRA standards.</li>
</ul>



<p>For example, a manufacturer of a connected security camera would use this phase to deploy a tool for automated SBOM generation and integrate it into their CI/CD pipeline. They would also run drills to simulate an ENISA reporting event, ensuring their team can gather the required information and submit it well within the 24-hour timeframe.</p>



<h3 class="wp-block-heading">Phase 3: Finalisation and Audit Readiness (Mid-2027 Onwards)</h3>



<p>The final stretch is about pulling everything together and getting ready for the full enforcement date in December 2027. This phase focuses on finalising all documentation, validating your work through audits, and making the official declarations required for market access.</p>



<p>Your technical documentation has to be complete and polished. This means finalising all test reports, ensuring your risk assessments are up-to-date, and verifying that your evidence matrix correctly maps every single CRA requirement to its corresponding proof.</p>



<p><strong>Key Actions for the Final Phase:</strong></p>



<ol class="wp-block-list">
<li><strong>Complete All Technical Documentation:</strong> Ensure every required artefact for your evidence pack is finalised, reviewed, and stored in your central repository.</li>



<li><strong>Conduct Audits:</strong> Perform a thorough internal audit against your compliance matrix. For critical products, this is when you would schedule the assessment with your chosen notified body.</li>



<li><strong>Prepare for CE Marking:</strong> Once you are confident in your compliance, you can draft and sign the EU Declaration of Conformity and prepare to affix the CE mark to your products. For instance, you will generate a PDF of the declaration, have it digitally signed by the CEO, and store it in the evidence pack.</li>
</ol>



<p>This phased approach provides a structured path to compliance and helps prevent last-minute chaos. For more details on key dates, you can read our guide on the <a href="https://goregulus.com/cra-compliance/cra-deadlines-2025-2027/">CRA deadlines for 2025–2027</a>. By methodically working through these stages, you can build a robust compliance posture and a complete evidence pack with confidence.</p>



<h2 class="wp-block-heading">Frequently Asked Questions About the CRA Evidence Pack</h2>



<p>Even with a solid plan, a few practical questions always pop up when it&#8217;s time to actually build a CRA compliance evidence pack. We get these all the time from manufacturers and developers working through the new requirements.</p>



<p>Let&#8217;s clear up some of the most common points of confusion around documentation scope, third-party components, and ongoing maintenance. Getting these right will help you avoid common pitfalls and keep your compliance project on track.</p>



<h3 class="wp-block-heading">What Is the Difference Between Technical Documentation and the Evidence Pack?</h3>



<p>This is a frequent source of confusion, but the distinction is simple. Think of the &#8220;technical documentation&#8221; as the legal checklist of required content laid out in the CRA, especially in Annex VII. It&#8217;s the official &#8220;what&#8221; you need to have.</p>



<p>The <strong>CRA compliance evidence pack</strong> is the real-world assembly of all that content. It’s the organised, living collection of your actual documents, records, and artefacts—your SBOMs, risk assessments, test reports, and so on. It’s the tangible &#8220;how&#8221; you prove you meet the legal requirements.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>In short, the evidence pack is the practical dossier you&#8217;ll present when an auditor asks to see your technical documentation. It&#8217;s the proof behind the promise.</p>
</blockquote>



<p>For instance, Annex VII requires a &#8220;cybersecurity risk assessment.&#8221; Your evidence pack is where you’ll find the actual file, like <code>Product-X-Risk-Assessment-v2.1.pdf</code>, that satisfies this obligation.</p>



<h3 class="wp-block-heading">Do I Need an Evidence Pack for a Product Already on the Market?</h3>



<p>Yes, but with an important distinction. The CRA&#8217;s main requirements apply to products placed on the EU market <em>after</em> the December 2027 enforcement date. If your product was on the market before then and doesn&#8217;t undergo a &#8220;substantial modification,&#8221; it&#8217;s generally exempt from most obligations.</p>



<p>There&#8217;s a critical exception, though. The vulnerability and incident reporting duties under Article 14 kick in much earlier, on <strong>September 11, 2026</strong>. These rules apply to <em>all</em> in-scope products, including your legacy ones.</p>



<p>So, even for existing products, you&#8217;ll need processes and documentation ready to meet these reporting obligations. And if you make a substantial modification to an older product after the 2027 deadline, it&#8217;s treated as a new product and will need a full evidence pack.</p>



<p>A substantial modification isn&#8217;t just a minor bug fix. A practical example would be releasing a firmware update for a smart speaker that adds a new voice-activated payment feature. This changes its risk profile significantly and would require a full re-evaluation and a complete evidence pack.</p>



<h3 class="wp-block-heading">How Granular Does My Software Bill of Materials (SBOM) Need to Be?</h3>



<p>Your SBOM must be comprehensive, accurate, and machine-readable. A vague or incomplete SBOM is useless for vulnerability management and will not pass an audit. The goal is a complete inventory that enables fast, effective tracking of vulnerabilities across your entire supply chain.</p>



<p>At a minimum, your SBOM has to include:</p>



<ul class="wp-block-list">
<li><strong>All Components:</strong> Every single piece of software must be listed. This includes your own proprietary code, all open-source libraries, and any commercial third-party software.</li>



<li><strong>Transitive Dependencies:</strong> It’s not enough to list the libraries you use directly. You must also identify the dependencies of <em>those</em> libraries. For example, if your product uses <code>libcurl</code>, your SBOM must also list its dependencies like <code>zlib</code> and <code>OpenSSL</code>.</li>



<li><strong>Component Details:</strong> For every component, you need the supplier name, component name, the exact version number, and a unique identifier like a PURL (Package URL).</li>
</ul>



<p><strong>Practical Example:</strong><br>Simply listing <code>openssl</code> in your SBOM is not enough. A compliant entry is specific and machine-readable, like this: <code>pkg:conan/openssl@3.0.12</code>. This format gives automated scanners the exact name and version needed to do their job. This level of detail is non-negotiable for building a compliant <strong>CRA compliance evidence pack</strong>.</p>
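<p>This requirement lends itself to automation: a small pre-release check can reject any SBOM entry that an auditor (or a scanner) could not act on. A minimal sketch, with a deliberately simplified PURL pattern (the full Package URL spec also allows namespaces, qualifiers, and subpaths):</p>

```python
import re

# Simplified PURL shape: pkg:<type>/<name>@<version>
# (illustrative; the real spec is richer than this pattern)
PURL_PATTERN = re.compile(r"^pkg:[a-z0-9]+/[^@\s]+@[^\s]+$")

def check_sbom_entry(entry: dict) -> list[str]:
    """Return a list of problems that would make this entry fail an audit."""
    problems = []
    for field in ("supplier", "name", "version", "purl"):
        if not entry.get(field):
            problems.append(f"missing {field}")
    purl = entry.get("purl", "")
    if purl and not PURL_PATTERN.match(purl):
        problems.append(f"purl not machine-readable: {purl!r}")
    return problems

# A vague entry vs. a compliant one
vague = {"name": "openssl"}
compliant = {
    "supplier": "OpenSSL Project",
    "name": "openssl",
    "version": "3.0.12",
    "purl": "pkg:conan/openssl@3.0.12",
}
print(check_sbom_entry(vague))      # several missing fields
print(check_sbom_entry(compliant))  # []
```

<p>Running a check like this in CI means a vague entry such as a bare <code>openssl</code> never reaches the shipped SBOM.</p>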



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Navigating the complexities of the Cyber Resilience Act can be challenging, but you don&#8217;t have to do it alone. <strong>Regulus</strong> provides a software platform designed to guide you through every step, from applicability assessments to generating your technical documentation templates. Gain clarity, reduce compliance costs, and confidently place your products on the EU market by <a href="https://goregulus.com">visiting our website</a>.</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-compliance-evidence-pack/">How to Build a CRA Compliance Evidence Pack</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>CRA implementation guidance European Commission: Simple Steps to Compliance</title>
		<link>https://goregulus.com/cra-basics/cra-implementation-guidance-european-commission/</link>
		
		<dc:creator><![CDATA[Igor Smith]]></dc:creator>
		<pubDate>Sun, 15 Mar 2026 07:14:54 +0000</pubDate>
				<category><![CDATA[CRA Basics]]></category>
		<category><![CDATA[CRA Implementation Guidance]]></category>
		<category><![CDATA[Cyber Resilience Act]]></category>
		<category><![CDATA[EU CRA compliance]]></category>
		<category><![CDATA[iot security]]></category>
		<category><![CDATA[product security]]></category>
		<guid isPermaLink="false">https://goregulus.com/uncategorized/cra-implementation-guidance-european-commission/</guid>

					<description><![CDATA[<p>The European Commission’s Cyber Resilience Act (CRA) has moved from theory to reality for manufacturers. With the official implementation guidance now published, there’s a phased timeline mapping out the path to compliance. Key obligations, like vulnerability reporting, are set to kick in as early as 2026, with full enforcement landing in late 2027. Decoding the [&#8230;]</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-implementation-guidance-european-commission/">CRA implementation guidance European Commission: Simple Steps to Compliance</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The European Commission’s Cyber Resilience Act (CRA) has moved from theory to reality for manufacturers. With the official implementation guidance now published, there’s a phased timeline mapping out the path to compliance. Key obligations, like vulnerability reporting, are set to kick in as early as <strong>2026</strong>, with <strong>full enforcement landing in late 2027</strong>.</p>



<h2 class="wp-block-heading">Decoding the Official CRA Implementation Timeline</h2>



<p>Now that the European Commission&#8217;s Cyber Resilience Act is in force, the first job for any manufacturer, software developer, or IoT vendor is to get to grips with the implementation timeline. These aren&#8217;t just dates on a calendar; they are hard milestones that will determine whether your products can be sold in the EU. If you fail to get ready, you risk being shut out of one of the world&#8217;s biggest markets.</p>



<p>The good news is that the timeline is designed to give you a window to adapt. It’s not a sudden cliff edge but a progressive rollout of obligations, allowing you to build a solid compliance framework without having to tear up your existing product development and operational workflows.</p>



<h3 class="wp-block-heading">Key Dates and Their Impact</h3>



<p>The period between now and late 2027 is your preparation window. The European Commission has laid out the CRA implementation timeline to give manufacturers and IoT vendors a reasonable runway before full enforcement. The most important deadlines are packed into <strong>2026</strong> and <strong>2027</strong>, leading up to the full application of the Act on <strong>11 December 2027</strong>. This <strong>36-month</strong> grace period from its entry into force gives your organisation the adjustment time it needs to run applicability checks, map requirements, and build out your technical files.</p>
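<p>As a quick sanity check, these anchor dates can be dropped into a short script that reports how much runway remains. A minimal sketch, assuming the dates from the Act&#8217;s final text (verify them against the Official Journal before planning around them):</p>

```python
from datetime import date

# Assumed CRA milestones: vulnerability reporting obligations apply from
# 11 September 2026, full application from 11 December 2027.
MILESTONES = {
    "vulnerability_reporting": date(2026, 9, 11),
    "full_application": date(2027, 12, 11),
}

def runway(today: date) -> dict:
    """Days remaining until each milestone (negative means it has passed)."""
    return {name: (d - today).days for name, d in MILESTONES.items()}

print(runway(date(2026, 1, 1)))
# {'vulnerability_reporting': 253, 'full_application': 709}
```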



<p>The timeline below cuts through the noise and shows the two milestones that should be anchoring your roadmap right now: vulnerability reporting and full enforcement.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="585" src="https://goregulus.com/wp-content/uploads/2026/03/cra-implementation-guidance-european-commission-timeline-1-1024x585.jpg" alt="CRA implementation timeline showing vulnerability reporting obligations from September 2026 and full enforcement in December 2027." class="wp-image-2068" srcset="https://goregulus.com/wp-content/uploads/2026/03/cra-implementation-guidance-european-commission-timeline-1-1024x585.jpg 1024w, https://goregulus.com/wp-content/uploads/2026/03/cra-implementation-guidance-european-commission-timeline-1-300x171.jpg 300w, https://goregulus.com/wp-content/uploads/2026/03/cra-implementation-guidance-european-commission-timeline-1-768x439.jpg 768w, https://goregulus.com/wp-content/uploads/2026/03/cra-implementation-guidance-european-commission-timeline-1.jpg 1312w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>It’s pretty clear that while full enforcement seems a way off in late <strong>2027</strong>, your first compliance actions around vulnerability management need to be locked in much sooner.</p>



<h3 class="wp-block-heading">Key CRA Compliance Milestones 2026-2027</h3>



<p>The phased timeline sets clear expectations for manufacturers. This table summarises the critical deadlines you need to factor into your internal project plans between now and <strong>2027</strong>.</p>



<figure class="wp-block-table"><table><tr>
<th align="left">Milestone</th>
<th align="left">Deadline</th>
<th align="left">Key Action Required for Manufacturers</th>
</tr>
<tr>
<td align="left"><strong>Vulnerability Reporting</strong></td>
<td align="left"><strong>11 September 2026</strong></td>
<td align="left">Establish and document a formal process for receiving, triaging, and reporting actively exploited vulnerabilities to ENISA within 24 hours of becoming aware of them.</td>
</tr>
<tr>
<td align="left"><strong>Full Enforcement</strong></td>
<td align="left"><strong>December 2027</strong></td>
<td align="left">All in-scope products placed on the market must be fully compliant with all CRA requirements, including conformity assessment and CE marking.</td>
</tr>
<tr>
<td align="left"><strong>Support Period</strong></td>
<td align="left">Ongoing from <strong>2027</strong></td>
<td align="left">Provide security updates for at least five years, or for the product&#039;s expected lifetime if that is shorter.</td>
</tr>
</table></figure>



<p>These dates aren’t suggestions—they are firm deadlines. Your roadmap must show a credible plan for meeting each one.</p>



<h3 class="wp-block-heading">Turning Deadlines into Action</h3>



<p>The real work is translating these legal dates into practical business actions. For instance, the mandatory vulnerability reporting requirement means your security and product teams need a defined process for receiving, triaging, and reporting security issues to the right authorities well before the deadline hits. A practical example would be a company that manufactures smart locks. Before 11 September 2026, they must have a public-facing security contact point, an internal system to assess bug reports (e.g., using a CVSS framework), and a clear protocol for notifying ENISA within 24 hours if a severe vulnerability is actively being exploited.</p>
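<p>The 24-hour clock in that protocol starts the moment the manufacturer becomes aware of active exploitation, so the deadline is worth computing rather than estimating. A minimal sketch (function names are illustrative, not an official API):</p>

```python
from datetime import datetime, timedelta, timezone

def early_warning_deadline(aware_at: datetime, actively_exploited: bool):
    """Return the 24h early-warning deadline, or None when the issue is
    not an actively exploited vulnerability (track it internally, but no
    early warning is due)."""
    if not actively_exploited:
        return None
    return aware_at + timedelta(hours=24)

# Triage confirms active exploitation at 09:30 UTC on 12 September 2026:
aware = datetime(2026, 9, 12, 9, 30, tzinfo=timezone.utc)
print(early_warning_deadline(aware, actively_exploited=True).isoformat())
# 2026-09-13T09:30:00+00:00
```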



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The CRA’s phased rollout is a strategic opportunity. Use this period to methodically align your engineering, security, and legal teams, turning compliance from a burdensome cost centre into a competitive advantage built on trust and security.</p>
</blockquote>



<p>As you build out your CRA implementation plan, it’s also smart to look at how it intersects with other EU regulations. You’ll often find overlaps with data protection, and a <a href="https://www.eurouter.ai/blog/ai-gdpr">practical AI GDPR compliance guide</a> can provide useful context for making sure your product ticks all the necessary legal boxes.</p>



<p>By mapping these dates to your internal project plans, you can build a manageable path forward. For a more detailed plan, check out our guide on how to build a complete <a href="https://goregulus.com/cra-compliance/cyber-resilience-act-compliance-roadmap/">Cyber Resilience Act compliance roadmap</a>. This proactive approach ensures you&#8217;re not scrambling to meet deadlines but are instead building a foundation for sustained compliance.</p>



<h2 class="wp-block-heading">Does the CRA Apply to Your Product Portfolio?</h2>



<p>Before your teams spend a single euro on compliance, the first move is to figure out if the Cyber Resilience Act even touches your products. The European Commission&#8217;s guidance all comes back to one core concept: <strong>&#8220;products with digital elements&#8221;</strong> or PDEs. It’s a deliberately broad term, but a quick, systematic check will tell you where you stand.</p>



<p>A PDE is any software or hardware product—and its remote data processing solutions—that connects directly or indirectly to another device or network. This definition casts a wide net, catching far more than just obvious smart devices. It&#8217;s time to move past assumptions and take a hard look at your entire portfolio.</p>



<h3 class="wp-block-heading">Defining Products with Digital Elements</h3>



<p>To make sense of the EU&#8217;s guidance, you need to focus on how your products handle data. The key is the &#8220;data connection&#8221; requirement. This is what separates a simple electrical kettle from a smart kettle that exchanges digitally encoded information with a network or app.</p>



<p>Let&#8217;s ground this in some real-world scenarios:</p>



<ul class="wp-block-list">
<li><strong>Smart Thermostat:</strong> This is the textbook case. It’s a piece of hardware running firmware that connects to a Wi-Fi network. It takes commands from a mobile app and sends data to a cloud server. Unambiguously a PDE.</li>



<li><strong>Connected Industrial Sensor:</strong> A sensor on a factory floor that monitors temperature and sends that data over a local network to an industrial control system (ICS) is also a PDE. Both the hardware and the software enabling that connection are in scope.</li>



<li><strong>Commercial Software Library:</strong> Here’s where it gets less obvious. If you sell a software development kit (SDK) for processing images that developers integrate into their commercial mobile apps, your SDK is a PDE. Because it&#8217;s <em>intended</em> for integration into connected products, it falls under the CRA.</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The CRA’s reach isn&#8217;t just about finished goods. It extends to the very components and software that make them work. If your product is built to be integrated into another product with digital connectivity, it almost certainly falls within the scope.</p>
</blockquote>



<p>This means scrutinising not just what your product <em>does</em>, but what it <em>enables</em> other products to do. For a deeper analysis of the edge cases, our detailed guide on <a href="https://goregulus.com/cra-basics/cyber-resilience-act-applicability/">Cyber Resilience Act applicability</a> offers more specific examples.</p>



<h3 class="wp-block-heading">Navigating Common Grey Areas</h3>



<p>The official guidance also helps clear up some common points of confusion, especially around software and older products. Getting these distinctions right is critical for scoping your compliance project accurately from day one.</p>



<p>One of the most debated topics is <strong>free and open-source software (FOSS)</strong>. The CRA is quite clear here: if FOSS is supplied on a non-commercial basis, it’s generally out of scope. For example, a developer maintaining a small open-source logging library in their free time is not covered. However, the moment a company integrates that same open-source component into a commercial product they sell, <strong>they</strong>, as the manufacturer, become responsible for its security and CRA compliance.</p>



<p>Another frequent question is about <strong>legacy products</strong>. The rule seems simple: products placed on the EU market before <strong>11 December 2027</strong> are exempt. But there’s a massive catch—the &#8220;substantial modification&#8221; clause.</p>



<p>So, what counts as a substantial modification?</p>



<ul class="wp-block-list">
<li><strong>Not a Substantial Modification:</strong> Releasing a routine security patch to fix a known vulnerability in a smart TV. This is just maintenance.</li>



<li><strong>A Substantial Modification:</strong> Pushing a major software update to that same smart TV that adds new streaming service apps and voice assistant integration. This introduces new functionalities and potential threat vectors, changing the product&#8217;s original risk profile.</li>
</ul>



<p>If a legacy product gets a &#8220;substantial modification&#8221; after the deadline, it&#8217;s instantly treated as a new product. That means it must fully comply with the CRA. This forces engineering and product teams to meticulously document every change and assess its security impact, otherwise you could accidentally trigger a full-blown and expensive compliance obligation.</p>



<h2 class="wp-block-heading">How to Classify Your Product&#8217;s Risk Level</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-implementation-guidance-european-commission-product-risk.jpg" alt="Diagram showing CRA Product Risk Classes, categorizing devices like smart thermostats, routers, and industrial controllers."/></figure>



<p>Once you’ve confirmed the Cyber Resilience Act applies to your products, the next job is classifying their risk level. This isn&#8217;t just a box-ticking exercise; this decision shapes your entire conformity assessment journey, its complexity, and ultimately, its cost. The CRA implementation guidance from the European Commission lays out three main categories, and getting this classification right is absolutely fundamental.</p>



<p>Get it wrong, and you’re looking at two expensive mistakes. Under-classify, and you create a massive compliance gap, risking market withdrawal and serious penalties. Over-classify, and you saddle your team with unnecessary costs and administrative burdens, like paying for a Notified Body assessment when a self-assessment would have been perfectly fine.</p>



<h3 class="wp-block-heading">The Default Risk Category</h3>



<p>The vast majority of products with digital elements will land in the <strong>default</strong> category. This is the baseline for products whose failure isn&#8217;t expected to cause significant systemic disruption or widespread harm. For these products, the conformity assessment path is the most straightforward.</p>



<p>Think of your typical consumer smart home device—a connected coffee machine, smart lighting, or a smart toy. While a security bug is certainly an issue that needs fixing, it’s unlikely to bring down critical infrastructure or endanger public safety. A bug in a smart speaker might allow someone to play music without permission, which is inconvenient but not catastrophic.</p>



<p>For these default products, manufacturers can perform a <strong>self-assessment</strong>. This means you internally verify and document that your product meets all the essential requirements in Annex I of the CRA. You get to skip the cost and overhead of involving an external third-party certification body.</p>



<h3 class="wp-block-heading">Critical Products Class I and Class II</h3>



<p>This is where the real complexity starts. The European Commission, in Annex III of the CRA, provides a specific list of product categories that are automatically sorted into <strong>Class I</strong> or <strong>Class II</strong> (the final text of the regulation labels these &#8220;important products&#8221;, though &#8220;critical&#8221; is still widely used). The decision comes down to the product&#8217;s core function and its potential impact if it were ever compromised.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>A key takeaway from the official guidance is that classification hinges on the product&#8217;s &#8220;core functionality,&#8221; not the risk level of its individual components. A default-risk product that contains a critical component is still treated as a default-risk product.</p>
</blockquote>



<p><strong>Class I Critical Products</strong> are those performing functions important to cybersecurity, but they aren&#8217;t at the very highest level of criticality. For these, your compliance path gets more involved. We have a complete guide on how to conduct a <a href="https://goregulus.com/cra-compliance/cra-risk-assessment/">CRA risk assessment</a> that breaks down the specific steps needed here.</p>



<p>Here are a few examples of products that usually fall under Class I:</p>



<ul class="wp-block-list">
<li><strong>Network Equipment:</strong> This covers products like routers, modems, and network switches that are central to network management and security.</li>



<li><strong>Operating Systems:</strong> General-purpose operating systems for desktops and servers are in this class because of their central role in any computing environment.</li>



<li><strong>Password Managers:</strong> Any software designed specifically for storing and managing credentials is also considered a Class I critical product.</li>
</ul>



<p>For these products, you have a choice: either apply the relevant harmonised standards in full and perform a self-assessment, or undergo a third-party conformity assessment carried out by a Notified Body.</p>



<h3 class="wp-block-heading">The Highest Risk Level: Class II</h3>



<p><strong>Class II Critical Products</strong> represent the highest risk category, period. These are products where a failure could lead to severe consequences, hitting critical infrastructure, public safety, or causing major economic disruption.</p>



<p>For instance, a hypervisor underpinning a virtualised production environment, or the firewalls and intrusion detection systems guarding an industrial network, are clear-cut Class II products. A compromise there could have immediate, severe knock-on effects for everything running behind them. Tamper-resistant microprocessors and microcontrollers also fall into this top tier, while the hardware security modules (HSMs) used to protect financial transactions appear on the CRA&#8217;s separate critical products list, which can additionally be subject to mandatory European cybersecurity certification.</p>



<p>For Class II, there is no option for self-assessment. <strong>Mandatory third-party conformity assessment</strong> by a Notified Body is the only route. This is a rigorous audit where an independent, government-appointed organisation scrutinises your product and processes to certify they meet the CRA&#8217;s strictest requirements. It provides the highest level of assurance but also demands the most significant investment in both time and resources.</p>
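<p>The classification logic above reduces to a small lookup that product teams can embed in their compliance tooling. A sketch with simplified labels (the mapping follows the routes described in this section, not the regulation&#8217;s exact wording):</p>

```python
# Illustrative mapping of CRA risk classes to conformity assessment routes.
ASSESSMENT_ROUTES = {
    "default": ["self-assessment"],
    "class_1": [
        "self-assessment (harmonised standards applied in full)",
        "notified-body assessment",
    ],
    "class_2": ["notified-body assessment"],  # no self-assessment option
}

def allowed_routes(risk_class: str) -> list[str]:
    """Return the assessment routes open to a product of this class."""
    try:
        return ASSESSMENT_ROUTES[risk_class]
    except KeyError:
        raise ValueError(f"unknown risk class: {risk_class!r}")

print(allowed_routes("class_2"))  # ['notified-body assessment']
```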



<h2 class="wp-block-heading">Translating CRA Requirements Into Engineering Tasks</h2>



<p>The biggest hurdle for most organisations isn&#8217;t understanding the Cyber Resilience Act&#8217;s legal text; it&#8217;s turning it into actual engineering work. This is where the theory stops and the real work begins. We&#8217;ll map the essential security requirements from Annex I of the CRA directly onto the development and operational workflows your teams use every day.</p>



<p>Think of this as a practical translator. It takes the high-level legal obligations and turns them into a task-level roadmap. Your product security, engineering, and DevOps teams get a clear set of tickets they can drop right into their backlog and start executing. After all, the European Commission provides the framework, but your technical teams are the ones who have to build it.</p>



<h3 class="wp-block-heading">From Secure by Default to Developer Checklists</h3>



<p><strong>&#8220;Secure by default&#8221;</strong> is a core principle of the CRA, but it&#8217;s far too abstract for a developer to act on. You can&#8217;t just assign a ticket that says &#8220;make it secure.&#8221; The real goal is to break this principle down into a concrete checklist that becomes a non-negotiable part of your development process.</p>



<p>Let&#8217;s take a connected camera manufacturer as a real-world example. &#8220;Secure by default&#8221; doesn&#8217;t mean much, but these specific engineering tasks do:</p>



<ul class="wp-block-list">
<li><strong>Disable All Non-Essential Ports:</strong> When the device comes out of the box, a network scan should only show ports that are absolutely essential for its main function. For instance, only port 443 for HTTPS communication should be open. Any development, testing, or management ports like Telnet (port 23) or FTP (port 21) must be completely disabled in the production firmware.</li>



<li><strong>Enforce Strong Initial Passwords:</strong> The camera can&#8217;t ship with default credentials like &#8220;admin/admin.&#8221; It must force the user to create a unique, complex password (e.g., minimum 12 characters, including upper/lower case, numbers, and symbols) during the initial setup. No exceptions.</li>



<li><strong>Implement Brute-Force Protection:</strong> If someone tries to log in and fails five consecutive times, the device&#8217;s login interface has to temporarily lock out that IP address for at least five minutes.</li>
</ul>
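<p>Checks like these can run as an automated gate before any firmware build is signed for release. A minimal sketch using the camera checklist&#8217;s thresholds (names and limits are illustrative, not a standard API):</p>

```python
import re

ALLOWED_PORTS = {443}       # HTTPS only in production firmware
MIN_PASSWORD_LENGTH = 12    # from the initial-setup password policy

def password_policy_ok(pw: str) -> bool:
    """Minimum 12 chars with upper, lower, digit, and symbol."""
    return (len(pw) >= MIN_PASSWORD_LENGTH
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"\d", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

def release_gate(open_ports: set, default_password) -> list:
    """Return blocking findings; an empty list means the build may ship."""
    findings = []
    extra = open_ports - ALLOWED_PORTS
    if extra:
        findings.append(f"non-essential ports open: {sorted(extra)}")
    if default_password is not None:
        findings.append("firmware ships with a default password")
    return findings

print(release_gate({443, 23}, "admin"))  # two blocking findings
print(release_gate({443}, None))         # [] -> ship
```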



<p>When you break it down like this, you create verifiable tasks. They can be assigned, tracked, and, most importantly, tested. This also generates the exact evidence you’ll need for your technical file later on.</p>



<h3 class="wp-block-heading">Mapping Vulnerability Handling to Operational Workflows</h3>



<p>Another huge part of the CRA is vulnerability handling. The regulation demands a structured process for receiving, fixing, and reporting security flaws. For your teams, this means building an operational workflow that is both robust and repeatable.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>A requirement for a &#8216;vulnerability disclosure policy&#8217; isn&#8217;t just about writing a document—it&#8217;s about defining a live process. You need a dedicated security contact, a clear system for triaging incoming reports, and established communication channels.</p>
</blockquote>



<p>So, how does an obligation to &#8220;manage vulnerabilities&#8221; look in practice? It becomes a series of well-defined operational steps:</p>



<ol class="wp-block-list">
<li><strong>Establish a Public Security Contact:</strong> Create a simple, easy-to-find <code>security@yourcompany.com</code> email address and a <code>/security</code> webpage. This is where security researchers know exactly where to send their findings. This is your front door.</li>



<li><strong>Develop an Internal Triage System:</strong> When a report arrives, it should automatically create a ticket in a project management tool (like Jira or Azure DevOps) and assign it to a security engineer for initial validation within one business day.</li>



<li><strong>Define Remediation Timelines:</strong> Set internal service-level agreements (SLAs) for fixing vulnerabilities based on severity. For a smart thermostat manufacturer, a critical vulnerability allowing remote takeover might demand a patch within <strong>14 days</strong>, while a low-risk UI bug can be slated for the next quarterly release cycle.</li>



<li><strong>Structure the Coordinated Disclosure Process:</strong> Once a fix is ready, you need a documented plan. This covers how you notify the original researcher, publish a security advisory with a CVE identifier, and push the update out to customers. This process must be consistent every single time.</li>
</ol>
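<p>SLA definitions like those above are easiest to enforce when they live in code rather than a policy PDF. A sketch of a CVSS-to-deadline mapping using the figures from this section (the 30-day tier for high severity is an assumed value for illustration):</p>

```python
def remediation_sla_days(cvss_score: float) -> int:
    """Map a CVSS v3 base score to an internal patch deadline in days."""
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if cvss_score >= 9.0:   # critical, e.g. remote takeover
        return 14
    if cvss_score >= 7.0:   # high (assumed tier for illustration)
        return 30
    return 90               # medium/low: next quarterly release cycle

print(remediation_sla_days(9.8))  # 14
```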



<p>For a much deeper look into setting up these processes, our guide on building a <a href="https://goregulus.com/cra-requirements/cra-secure-development-lifecycle-sdl/">CRA secure development lifecycle</a> offers more detailed steps and context.</p>



<p>These workflows aren&#8217;t just best practices; they are mandatory requirements under the CRA and will be scrutinised during an audit. Failing to get this right can lead to major compliance failures, even if your product is otherwise secure. This methodical approach is how you turn legal language into engineering reality.</p>



<h2 class="wp-block-heading">Building Your Technical File and Vulnerability Management Process</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-implementation-guidance-european-commission-vulnerability-workflow.jpg" alt="Workflow diagram depicting vulnerability management steps: technical file, bug report, triage checklist, and ENISA reporting."/></figure>



<p>The Cyber Resilience Act boils down to two concrete outputs that will become your proof of compliance: a comprehensive Technical File and a systematic vulnerability management process. Think of the Technical File as your product’s compliance biography—a single source of truth that demonstrates how you’ve met every single requirement. Your vulnerability management process, on the other hand, is the living, breathing system that keeps the product secure long after it ships.</p>



<p>These two elements are precisely what market surveillance authorities will ask for during an audit. Without them, even a technically flawless product will fail its CRA assessment. Getting them right isn’t about ticking boxes; it’s about creating a bulletproof, auditable trail that proves you’ve done your due diligence.</p>



<h3 class="wp-block-heading">Structuring Your CRA Technical File</h3>



<p>Annex II of the CRA spells out the mandatory structure for your Technical File. This isn&#8217;t a marketing brochure; it&#8217;s a detailed, evidence-based dossier. Your entire goal here is to make it incredibly easy for an auditor to connect each legal requirement to a specific piece of evidence in your file.</p>



<p>A practical way to organise this is to create a master document that indexes all the required components. Here’s a blueprint for its structure:</p>



<ul class="wp-block-list">
<li><strong>General Product Description:</strong> The product&#8217;s name (e.g., &#8220;SmartHome Hub Model X1&#8221;), version (e.g., Firmware v2.1.3), intended use, and a clear, concise description of its core functionalities.</li>



<li><strong>Cybersecurity Risk Assessment:</strong> The full report detailing how you identified threats (e.g., &#8220;Man-in-the-middle attacks on local network&#8221;), assessed risks, and chose your mitigation measures (&#8220;Implement TLS 1.3 for all communications&#8221;).</li>



<li><strong>List of Applied Standards:</strong> A detailed list of every harmonised standard (e.g., &#8220;ETSI EN 303 645&#8221;), common specification, or other technical solution you used to meet Annex I requirements.</li>



<li><strong>Software Bill of Materials (SBOM):</strong> A complete and accurate inventory of all third-party and open-source components (e.g., <code>openssl-3.0.2</code>, <code>mosquitto-2.0.14</code>) baked into your product, generated in a standard format like SPDX or CycloneDX.</li>



<li><strong>Source Code or Documentation:</strong> For critical products, you may need to provide a Notified Body with access to source code or extensive technical documentation for review.</li>



<li><strong>EU Declaration of Conformity:</strong> A copy of the formal declaration you sign, attesting to the product&#8217;s compliance.</li>
</ul>
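<p>A lightweight completeness check over this index catches gaps long before an auditor does. A minimal sketch (section keys paraphrase the list above; the file names are hypothetical):</p>

```python
# Sections every technical file must contain, per the index above.
REQUIRED_SECTIONS = [
    "product_description",
    "risk_assessment",
    "applied_standards",
    "sbom",
    "declaration_of_conformity",
]

def missing_sections(technical_file: dict) -> list:
    """Sections an auditor would find absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not technical_file.get(s)]

draft_file = {
    "product_description": "SmartHome Hub Model X1, firmware v2.1.3",
    "risk_assessment": "Product-X-Risk-Assessment-v2.1.pdf",
    "sbom": "sbom-cyclonedx.json",
}
print(missing_sections(draft_file))
# ['applied_standards', 'declaration_of_conformity']
```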



<p>This structure provides a clear, logical narrative for auditors, showing them exactly how your product was designed, developed, and tested with security at its core. To ensure robust cybersecurity and the kind of vulnerability management the CRA requires, it is crucial to understand and implement the latest in <a href="https://www.docuwriter.ai/posts/api-security-best-practices">API Security Best Practices</a>, as APIs are often a primary attack vector.</p>



<h3 class="wp-block-heading">Building a Compliant Vulnerability Workflow</h3>



<p>Beyond just the documentation, the CRA mandates a robust, active process for handling vulnerabilities. This is much more than just patching bugs as they pop up. It’s about having a documented, repeatable, and timely workflow for every single security report you receive. This process must cover everything from the initial intake to reporting the issue to the EU&#8217;s cybersecurity agency, ENISA.</p>



<p>Let&#8217;s walk through a tangible scenario. Imagine you’re a manufacturer of a smart home hub, and a security researcher drops a vulnerability report into your inbox.</p>



<p>Your documented workflow should kick in immediately:</p>



<ul class="wp-block-list">
<li><strong>Receipt and Triage:</strong> The report lands at your public <code>security@</code> address. An internal ticket is automatically created and assigned to the product security team. Within <strong>24 hours</strong>, the team validates whether the vulnerability is genuine and exploitable.</li>



<li><strong>Severity Assessment:</strong> Using a framework like the Common Vulnerability Scoring System (CVSS), the team classifies the bug&#8217;s severity. They quickly determine it&#8217;s a critical remote code execution flaw, giving it a score of <strong>9.8</strong>.</li>



<li><strong>ENISA Reporting:</strong> Because the vulnerability is being actively exploited in the wild, you must submit an early warning within <strong>24 hours</strong> of becoming aware of it. The notification goes to your designated national CSIRT and to ENISA via the single reporting platform.</li>



<li><strong>Remediation and Patching:</strong> The engineering team gets to work on a patch. For a critical flaw like this, your internal SLA should mandate that a fix is developed and tested within a short timeframe, say <strong>7 days</strong>.</li>



<li><strong>Coordinated Disclosure:</strong> Once the patch is ready, you coordinate with the original researcher. You agree on a public disclosure date, publish a clear security advisory on your website, and push the firmware update automatically to all connected hubs.</li>
</ul>
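<p>Every step of a workflow like this should leave a timestamped record, since those records are what you hand to a market surveillance authority later. A minimal sketch of such an audit trail (class and field names are illustrative):</p>

```python
from datetime import datetime, timezone

class VulnerabilityCase:
    """Append-only, timestamped log of one vulnerability's handling."""

    def __init__(self, case_id: str):
        self.case_id = case_id
        self.events = []  # list of (utc_timestamp, step, note)

    def record(self, step: str, note: str) -> None:
        self.events.append((datetime.now(timezone.utc), step, note))

    def steps(self) -> list:
        return [step for _, step, _ in self.events]

case = VulnerabilityCase("VULN-2026-0042")
case.record("receipt", "report received at security@ address")
case.record("triage", "validated as genuine, CVSS 9.8")
case.record("early_warning", "submitted within 24h of awareness")
case.record("patch_released", "firmware update pushed to all hubs")
print(case.steps())
```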



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>This entire workflow—from the first email to the final update—must be documented. Timestamps, key decisions, and every communication create the auditable evidence that proves you are meeting your post-market surveillance obligations under the CRA.</p>
</blockquote>



<p>The European Commission is still releasing guidance to clarify these obligations. For instance, on <strong>3 March 2026</strong>, it unveiled draft guidance to help manufacturers—especially small and medium-sized enterprises—understand their responsibilities. This guidance addresses how to handle legacy products and even allows for cost-saving measures like grouped testing for similar product families, which is a significant relief for teams with limited resources. You can discover more insights about these draft guidelines on Hunton.com. This ongoing support from the commission is vital for navigating the practicalities of CRA implementation.</p>
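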



<p>Of all the questions we see from product teams digging into the Cyber Resilience Act, a few pop up again and again. It&#8217;s understandable—the official guidance from the European Commission can be dense, and getting clear answers on these recurring pain points is key to building a compliance plan you can trust.</p>



<p>This section tackles the most frequent queries head-on, giving you practical answers to cut through the complexity and avoid common missteps.</p>



<h3 class="wp-block-heading">What Happens to Products Already on the Market?</h3>



<p>One of the most pressing questions is about legacy products. What happens if your product was already on the EU market before the full application date of <strong>11 December 2027</strong>?</p>



<p>The short answer is that these products are generally not subject to the CRA&#8217;s requirements. But there&#8217;s a huge catch you need to be aware of: the <strong>&#8220;substantial modification&#8221;</strong> clause.</p>



<p>The European Commission’s guidance is clear that a substantial modification is any change to software, firmware, or hardware that meaningfully alters the product&#8217;s original intended purpose, functionality, or security posture. If a change introduces new threat vectors or materially expands the attack surface, it&#8217;s almost certainly substantial.</p>



<p>Let&#8217;s look at a practical example to make this concrete:</p>



<ul class="wp-block-list">
<li><strong>Not a Substantial Modification:</strong> You find a buffer overflow vulnerability in your smart camera&#8217;s firmware. You issue a patch that fixes the bug but adds no new features. This is just routine maintenance and does <strong>not</strong> drag your legacy product into CRA compliance.</li>



<li><strong>A Substantial Modification:</strong> You decide to update that same camera with a new cloud storage integration and AI-powered person detection. This fundamentally changes the product&#8217;s function and introduces new data processing risks and network connections. This <strong>is</strong> a substantial modification, and the product is now treated as &#8220;newly placed on the market,&#8221; which means it needs to be fully CRA compliant.</li>
</ul>
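<p>The two cases above can be contrasted in a minimal sketch. Note the heavy caveat: the real assessment is a documented legal and engineering judgement, not a boolean function, and the criterion names below merely paraphrase the Commission guidance quoted earlier.</p>

```python
# Illustrative only: the criteria paraphrase the Commission guidance on
# "substantial modification"; a real assessment is a documented judgement.
def is_substantial_modification(change: dict) -> bool:
    """Flag a change that likely re-triggers CRA compliance for a legacy product."""
    return any([
        change.get("alters_intended_purpose", False),
        change.get("adds_new_functionality", False),
        change.get("introduces_new_threat_vectors", False),
        change.get("expands_attack_surface", False),
    ])

# Security patch with no new features: routine maintenance.
bugfix = {"fixes_vulnerability": True}

# New cloud storage integration plus AI person detection: new function,
# new data flows, bigger attack surface.
feature_update = {"adds_new_functionality": True, "expands_attack_surface": True}
```

<p>Whatever form your impact assessment takes, the output should be the same: a recorded yes/no decision with the reasoning attached.</p>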



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>It&#8217;s absolutely essential to meticulously document every change made to a legacy product and run a formal impact assessment. This creates an auditable trail that proves why a change was or was not considered substantial, protecting you from accidentally falling into non-compliance.</p>
</blockquote>



<h3 class="wp-block-heading">How the CRA Interacts with Other EU Laws</h3>



<p>Another common source of confusion is how the CRA fits in with other major EU regulations like the General Data Protection Regulation (GDPR) and the AI Act. The key is to see them as complementary layers, not competing rules.</p>



<p>The CRA is a horizontal regulation. Think of it as setting a foundational layer of cybersecurity for all products with digital elements. It then works in tandem with other laws to create a more complete regulatory picture.</p>



<ul class="wp-block-list">
<li><strong>With GDPR:</strong> The CRA provides the technical &#8220;secure by design and by default&#8221; foundation for GDPR. For example, if your product is a health-tracking wearable that collects heart rate data (personal data), GDPR requires you to protect that data. The CRA mandates specific technical controls like encryption of data at rest and in transit, which directly fulfills a key GDPR principle.</li>



<li><strong>With the AI Act:</strong> If you&#8217;re building a product that&#8217;s also an AI system (like an AI-powered medical device), you have to meet the requirements of both. The AI Act will govern the safety and transparency of the AI model itself (e.g., ensuring it is not biased), while the CRA mandates the cybersecurity of the underlying digital product it runs on (e.g., protecting it from being tampered with).</li>
</ul>



<p>Your conformity assessment has to be consolidated. This means your single EU Declaration of Conformity must list all applicable laws and attest that your product complies with every single one.</p>



<h3 class="wp-block-heading">Support for Smaller Companies and SMEs</h3>



<p>The European Commission’s guidance explicitly acknowledges that this compliance journey can be tough for small and medium-sized enterprises (SMEs). To help, the CRA includes several practical measures designed to lower the barrier to entry for smaller organisations.</p>



<p>One of the most useful concessions is allowing businesses to <strong>group similar products into families for conformity testing</strong>. For instance, if you produce a line of smart light bulbs that all share the same core firmware and connectivity module but just differ in shape or colour, you can likely assess them as a single product family. This dramatically cuts down on testing costs and administrative work.</p>
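<p>The grouping logic itself is simple enough to sketch. The product catalogue below is hypothetical; the point is that variants differing only in shape or colour collapse into one family once you key on the attributes that matter for security assessment.</p>

```python
from itertools import groupby

# Hypothetical catalogue: bulb variants share one core firmware and
# connectivity module, so they can plausibly be assessed as one family.
products = [
    {"sku": "BULB-A60-WHT", "firmware": "fw-2.1", "module": "wifi-esp32"},
    {"sku": "BULB-A60-RGB", "firmware": "fw-2.1", "module": "wifi-esp32"},
    {"sku": "BULB-GU10",    "firmware": "fw-2.1", "module": "wifi-esp32"},
    {"sku": "HUB-PRO",      "firmware": "fw-3.0", "module": "eth-rtl"},
]

def conformity_families(items):
    """Group products by the security-relevant attributes they share."""
    key = lambda p: (p["firmware"], p["module"])
    return {k: [p["sku"] for p in grp]
            for k, grp in groupby(sorted(items, key=key), key)}

families = conformity_families(products)
# Four SKUs collapse into two assessment families.
```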



<p>The extended <strong>36-month adaptation period</strong> and the future development of <strong>harmonised standards</strong> are also there to help. Once published, these standards will offer a &#8220;presumption of conformity,&#8221; giving SMEs a clear and straightforward checklist to follow to meet their legal obligations without having to interpret the law from scratch.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>Regulus</strong> provides a streamlined software platform to help your team confidently navigate the Cyber Resilience Act. Our solution turns complex regulatory text into an actionable compliance plan, generating a tailored requirements matrix, technical file templates, and a step-by-step roadmap for 2025–2027. Gain clarity and reduce compliance costs by visiting <a href="https://goregulus.com">https://goregulus.com</a>.</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-implementation-guidance-european-commission/">CRA implementation guidance European Commission: Simple Steps to Compliance</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>CRA standardisation request CEN CENELEC ETSI: A 2026 compliance guide</title>
		<link>https://goregulus.com/cra-basics/cra-standardisation-request-cen-cenelec-etsi/</link>
		
		<dc:creator><![CDATA[Igor Smith]]></dc:creator>
		<pubDate>Sat, 14 Mar 2026 07:14:05 +0000</pubDate>
				<category><![CDATA[CRA Basics]]></category>
		<category><![CDATA[CEN CENELEC ETSI]]></category>
		<category><![CDATA[CRA Standardisation Request CEN CENELEC ETSI]]></category>
		<category><![CDATA[Cyber Resilience Act]]></category>
		<category><![CDATA[EU Compliance]]></category>
		<category><![CDATA[Harmonised Standards]]></category>
		<guid isPermaLink="false">https://goregulus.com/uncategorized/cra-standardisation-request-cen-cenelec-etsi/</guid>

					<description><![CDATA[<p>The CRA standardisation request is the European Commission&#8217;s official instruction to Europe’s main standardisation bodies: CEN, CENELEC, and ETSI. In simple terms, it&#8217;s the kick-off for creating the detailed technical rulebooks—called harmonised standards—that will define how manufacturers can meet the legal duties of the Cyber Resilience Act. Following these standards will give you a clear, [&#8230;]</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-standardisation-request-cen-cenelec-etsi/">CRA standardisation request CEN CENELEC ETSI: A 2026 compliance guide</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The <strong>CRA standardisation request</strong> is the European Commission&#8217;s official instruction to Europe’s main standardisation bodies: CEN, CENELEC, and ETSI. In simple terms, it&#8217;s the kick-off for creating the detailed technical rulebooks—called <strong>harmonised standards</strong>—that will define <em>how</em> manufacturers can meet the legal duties of the Cyber Resilience Act. Following these standards will give you a clear, recognised path to proving compliance.</p>



<h2 class="wp-block-heading">Decoding the CRA Standardisation Request</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-standardisation-request-cen-cenelec-etsi-cra-standards.jpg" alt="Hand-drawn diagram on CRA standards, presumption of conformity, with CEN, CENELEC, ETSI, and a shield."/></figure>



<p>Think of the Cyber Resilience Act (CRA) itself as a high-level goal. It tells you <em>what</em> you need to achieve—for example, that your product must be secure and resilient. But it doesn&#8217;t specify the exact technical details of <em>how</em> to get there. It won&#8217;t tell you precisely how to implement a secure update mechanism or what a &#8220;secure default configuration&#8221; looks like in practice.</p>



<p>This is where the standardisation request comes in. It&#8217;s the official job order from the European Commission to the technical experts who write the rulebooks for product safety in Europe:</p>



<ul class="wp-block-list">
<li><strong>CEN</strong> (European Committee for Standardization)</li>



<li><strong>CENELEC</strong> (European Committee for Electrotechnical Standardization)</li>



<li><strong>ETSI</strong> (European Telecommunications Standards Institute)</li>
</ul>



<p>These organisations are now tasked with turning the CRA&#8217;s high-level legal requirements into concrete, practical, and technical specifications. They are writing the &#8220;how-to&#8221; manuals.</p>



<h3 class="wp-block-heading">The Power of Presumption of Conformity</h3>



<p>The single biggest reason these standards matter is the <strong>&#8220;presumption of conformity.&#8221;</strong> It’s a powerful legal concept. It means that if you design and build your product following the relevant harmonised standards, market authorities will automatically <em>presume</em> your product complies with the CRA&#8217;s essential requirements.</p>



<p>This provides the clearest, most direct, and most defensible route to demonstrating compliance. It takes the guesswork out of the equation.</p>



<p>For instance, the CRA demands that manufacturers handle vulnerabilities effectively. Instead of leaving you to figure out what &#8220;effectively&#8221; means, a harmonised standard will likely lay out a specific process for vulnerability intake, triage, and patching, probably based on established best practices like ISO/IEC 29147. By following that standard, you get a ready-made methodology that is officially recognised across the EU.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>A standardisation request is not just a polite suggestion; it&#8217;s a formal mandate. It triggers a structured process that bridges the gap between high-level law and the real-world engineering needed to build secure products.</p>
</blockquote>



<p>This whole system is designed to make market access simpler. By sticking to a single set of harmonised standards, you avoid the nightmare of trying to meet different interpretations of the law in different EU countries. It creates one unified benchmark for security, giving you the confidence to build, document, and sell your product on the European market. A crucial part of this is getting your paperwork in order. You can get a head start by learning how to <a href="https://goregulus.com/cra-documentation/technical-documentation/">prepare your CRA technical documentation</a> in our detailed guide.</p>



<h2 class="wp-block-heading">The Key Players: CEN, CENELEC, and ETSI Explained</h2>



<p>To get ready for the Cyber Resilience Act, you first need to understand the organisations writing the detailed rules. The European Commission has sent a formal <strong>CRA standardisation request</strong> to a trio of specialised bodies known as the European Standardisation Organisations, or ESOs.</p>



<p>Think of them as a team of expert architects, each with a distinct speciality, working together to build a secure foundation for Europe’s digital market. These three key players are <strong>CEN</strong>, <strong>CENELEC</strong>, and <strong>ETSI</strong>. Each brings a unique focus to the table, ensuring the resulting harmonised standards are both comprehensive and technically sound. Their collaboration is what turns the CRA&#8217;s legal principles into practical, engineering-focused guidelines.</p>



<h3 class="wp-block-heading">CEN: The Generalist Architect</h3>



<p>The <strong>European Committee for Standardization (CEN)</strong> is the architect for a huge range of non-electrical products and services. Its scope is incredibly broad, covering everything from children&#8217;s toys and medical devices to construction materials and mechanical engineering.</p>



<p>In the context of the CRA, CEN will concentrate on the cybersecurity aspects of products that fall outside the specific domains of its two counterparts.</p>



<p><strong>Practical Example of CEN&#8217;s Scope:</strong></p>



<ul class="wp-block-list">
<li>A manufacturer of <strong>smart locks for doors</strong> would look to CEN for standards on physical security combined with digital access control. CEN would handle the non-electrical aspects, like how the lock&#8217;s software interfaces with its mechanical parts securely.</li>



<li><strong>Connected gym equipment</strong>, like a smart treadmill, would also fall under CEN, which would define standards for the secure software that controls the machine’s functions and stores user data.</li>
</ul>



<h3 class="wp-block-heading">CENELEC: The Electrical Specialist</h3>



<p>The <strong>European Committee for Electrotechnical Standardization (CENELEC)</strong> is the specialist for all things electrical and electronic. This organisation is responsible for creating standards for a massive array of products that use electricity, from everyday gadgets to complex industrial systems.</p>



<p><strong>Practical Examples of CENELEC&#8217;s Scope:</strong></p>



<ul class="wp-block-list">
<li><strong>Smart Home Devices:</strong> Think of smart coffee makers, connected lighting systems, or intelligent thermostats. CENELEC will define the cybersecurity standards to protect these devices from being compromised.</li>



<li><strong>Industrial Control Systems (ICS):</strong> In factories and power plants, CENELEC’s standards will ensure the electrotechnical components of operational technology (OT) are secure against cyber threats.</li>
</ul>



<p>For the CRA, CENELEC’s role is critical. Its work will directly shape the security requirements for a huge number of IoT products and connected hardware, making sure they are resilient by design.</p>



<h3 class="wp-block-heading">ETSI: The Communications Expert</h3>



<p>The <strong>European Telecommunications Standards Institute (ETSI)</strong> handles the communications side of technology. This includes everything related to information and communication technologies (ICT), like mobile networks, radio equipment, and the internet.</p>



<p>ETSI&#8217;s expertise is vital for the CRA because nearly all &#8220;products with digital elements&#8221; communicate in some way.</p>



<p><strong>Practical Example of ETSI&#8217;s Scope:</strong></p>



<ul class="wp-block-list">
<li><strong>Wireless Security Protocols:</strong> When a connected car communicates with roadside infrastructure (V2X), ETSI standards define the secure communication protocols to prevent hijacking or data interception.</li>



<li><strong>IoT Network Interfaces:</strong> For a smart water meter sending data over a Low-Power Wide-Area Network (LPWAN), ETSI&#8217;s work ensures the radio interface is secure and the data transmission is encrypted.</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>ETSI develops the standards that ensure these communications are secure and reliable. For example, when your IoT device sends data to the cloud, ETSI&#8217;s standards will help define the protocols and security measures needed to protect that data in transit.</p>
</blockquote>
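<p>What &#8220;protected in transit&#8221; means in practice is not tied to any specific ETSI profile in this article, but the baseline is the same everywhere: verify the server certificate, check the hostname, and refuse legacy protocol versions. Here is a generic sketch using Python’s standard library as a stand-in for a device client.</p>

```python
import ssl

# Generic illustration of "encrypted in transit" — a stand-in, not a
# specific ETSI protocol profile.
def device_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context with sane, modern defaults."""
    ctx = ssl.create_default_context()            # cert verification + hostname check
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    return ctx

ctx = device_tls_context()
```

<p>The design choice worth noting: secure defaults come from the platform (`create_default_context`), and the device code only tightens them, never loosens them.</p>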



<p>This collaborative structure is governed by Regulation (EU) No 1025/2012, which sets out how the ESOs receive mandates to support EU law. The process is remarkably efficient; research shows that <strong>one European Standard replaces 34 national standards</strong>, drastically reducing market fragmentation. This unified approach is a major benefit for manufacturers, as it cuts through red tape and simplifies market access across the entire EU.</p>



<p>You can check out our guide on <a href="https://goregulus.com/cra-compliance/cra-conformity-assessment/">how the CRA conformity assessment works</a> to understand this process better.</p>



<p>The ESOs are central to putting EU legislation into practice. About <strong>30% of all European Standards</strong> published by CEN and CENELEC come from specific requests by the European Commission. This highlights their role in creating actionable rules that support regulatory goals for the 34 countries in the European Economic Area. You can delve deeper into the <a href="https://eismea.ec.europa.eu/funding-opportunities/calls-proposals/support-standardisation-activities-performed-cen-cenelec-and-etsi-3_en">support for standardisation activities</a> to see the scale of this collaboration.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading">How The Standardisation Process Works In Practice</h2>



<p>So, how does a dense legal text like the Cyber Resilience Act get translated into a set of technical standards that engineering teams can actually use? The process is surprisingly structured, and understanding it gives you a roadmap for when you can expect crucial guidance to land.</p>



<p>It all kicks off with a formal <strong>CRA standardisation request</strong> from the European Commission. Think of this as the starting gun. This document officially tasks the European Standardisation Organisations (ESOs) – <strong>CEN, CENELEC, and ETSI</strong> – with developing the specific standards needed to support the Act, complete with topics to cover and deadlines.</p>



<h3 class="wp-block-heading">From Request To Committee</h3>



<p>Once the ESOs accept the request, the real work begins. The task is handed over to specialised Technical Committees (TCs) or, in many cases, Joint Technical Committees (JTCs). These committees are the engine room of the whole operation, made up of industry experts, national delegates, and other stakeholders who bring hard-won, real-world experience to the table.</p>



<p>For instance, a <strong>Joint Technical Committee</strong> is the perfect vehicle for tackling topics that cut across different domains, like the cybersecurity of industrial machines, which involves both electrical (CENELEC) and non-electrical (CEN) expertise. These experts don&#8217;t start from scratch. Their first move is almost always to review existing international standards, like those from ISO/IEC, to see what can be adapted. This ensures that the new European standards are as globally aligned as possible.</p>



<p>We saw this exact process play out with the recent Data Act. The Commission issued Mandate M/614, which CEN and CENELEC officially accepted on <strong>July 7</strong>. This set in motion a commitment to deliver <strong>seven</strong> European standardisation deliverables—including <strong>four</strong> European Standards and <strong>three</strong> Technical Specifications—to support the Act&#8217;s application from <strong>September 12, 2025</strong>. You can read more about how the <a href="https://www.cencenelec.eu/news-events/news/2025/brief-news/2025-07-11-data-act-standardization-request/">Data Act standardisation request is accelerating digital regulation</a>.</p>



<h3 class="wp-block-heading">Drafting And Public Consultation</h3>



<p>The committee’s first major task is to hammer out a working draft. This initial version is debated, refined, and edited over many sessions, drawing on the collective knowledge of the group.</p>



<p>Crucially, this isn’t a closed-door affair. The process includes a vital stage known as the ‘public enquiry’ or public comment period. During this window, the draft standard is shared with national standards bodies and the public for feedback.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>This is a critical opportunity for your company to have a voice. By engaging through your national standards body, you can provide feedback to ensure the final rules are practical and don’t impose an unworkable burden on your industry.</p>
</blockquote>



<p>For a CRA-related standard on vulnerability reporting, for example, the draft would likely be scrutinised by security researchers, product managers, and software developers. Their input could be invaluable in fine-tuning the requirements to be both effective against threats and feasible to implement in a fast-paced development cycle.</p>



<p>The following infographic gives a great visual of how input from <strong>34</strong> national bodies is consolidated into a single, unified European Standard.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="585" src="https://goregulus.com/wp-content/uploads/2026/03/cra-standardisation-request-cen-cenelec-etsi-standardization-process-1-1024x585.jpg" alt="An infographic illustrating the EU standardization process from 34 nations to a single European standard." class="wp-image-2076" srcset="https://goregulus.com/wp-content/uploads/2026/03/cra-standardisation-request-cen-cenelec-etsi-standardization-process-1-1024x585.jpg 1024w, https://goregulus.com/wp-content/uploads/2026/03/cra-standardisation-request-cen-cenelec-etsi-standardization-process-1-300x171.jpg 300w, https://goregulus.com/wp-content/uploads/2026/03/cra-standardisation-request-cen-cenelec-etsi-standardization-process-1-768x439.jpg 768w, https://goregulus.com/wp-content/uploads/2026/03/cra-standardisation-request-cen-cenelec-etsi-standardization-process-1.jpg 1312w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>This highlights the core efficiency of the European model: creating one harmonised standard that replaces dozens of potentially conflicting national rules.</p>



<h3 class="wp-block-heading">Final Approval And Publication</h3>



<p>After the public enquiry closes, the technical committee gets back to work, reviewing every single comment and making revisions. This phase often involves some intense negotiation to strike a consensus that balances security, innovation, and commercial reality.</p>



<p>Once consensus is reached, the final draft goes to a formal vote. To pass, it needs approval from a weighted majority of the national standards bodies that make up CEN and CENELEC. After a successful vote, the document is published as an official European Standard (EN). Its reference is then published in the Official Journal of the European Union (OJEU), giving it legal force. This is the moment it becomes a &#8220;harmonised standard,&#8221; an officially recognised tool you can use to claim &#8220;presumption of conformity&#8221; with the law.</p>



<p>To help you anticipate these key milestones, the table below provides a simplified overview of the journey from a standardisation request to a published harmonised standard.</p>



<h3 class="wp-block-heading">CRA Standardisation Timeline From Request to Harmonised Standard</h3>



<p>A simplified overview of the key phases in the development of harmonised standards for the Cyber Resilience Act, helping teams anticipate key milestones.</p>



<figure class="wp-block-table"><table><tr>
<th align="left">Phase</th>
<th align="left">Description</th>
<th align="left">Typical Duration</th>
<th align="left">Key Output</th>
</tr>
<tr>
<td align="left"><strong>Request &amp; Acceptance</strong></td>
<td align="left">European Commission issues a request; ESOs formally accept it and assign it to a Technical Committee.</td>
<td align="left">2-4 months</td>
<td align="left">Accepted mandate and work plan.</td>
</tr>
<tr>
<td align="left"><strong>Drafting</strong></td>
<td align="left">The Technical Committee develops the initial working draft of the standard based on the mandate and existing work.</td>
<td align="left">6-12 months</td>
<td align="left">First stable draft for review.</td>
</tr>
<tr>
<td align="left"><strong>Public Enquiry</strong></td>
<td align="left">The draft is circulated to national bodies and the public for comments and feedback.</td>
<td align="left">2-3 months</td>
<td align="left">Collection of all stakeholder feedback.</td>
</tr>
<tr>
<td align="left"><strong>Revision &amp; Consensus</strong></td>
<td align="left">The committee reviews all comments, revises the draft, and works to achieve consensus among members.</td>
<td align="left">4-6 months</td>
<td align="left">Final draft for voting.</td>
</tr>
<tr>
<td align="left"><strong>Formal Vote</strong></td>
<td align="left">National standards bodies vote on the final draft. A weighted majority is needed for approval.</td>
<td align="left">1-2 months</td>
<td align="left">Approved final standard text.</td>
</tr>
<tr>
<td align="left"><strong>Publication</strong></td>
<td align="left">The standard is published as an EN, and its reference is published in the Official Journal of the EU (OJEU).</td>
<td align="left">1-2 months</td>
<td align="left">A published harmonised standard (EN).</td>
</tr>
</table></figure>
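<p>Adding up the typical durations in the table gives a rough end-to-end estimate of how long you might wait for a harmonised standard once a request is issued:</p>

```python
# Typical-duration ranges from the table above, in months (low, high).
phases = {
    "request_and_acceptance": (2, 4),
    "drafting": (6, 12),
    "public_enquiry": (2, 3),
    "revision_and_consensus": (4, 6),
    "formal_vote": (1, 2),
    "publication": (1, 2),
}

fastest = sum(lo for lo, _ in phases.values())  # best case, end to end
slowest = sum(hi for _, hi in phases.values())  # slow case, end to end
```

<p>That works out to roughly <strong>16 to 29 months</strong> from accepted request to a citation in the OJEU, which is why waiting for the final text before starting compliance work is risky.</p>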



<p>As you can see, the path is methodical and designed to build consensus. While it can feel slow, each step is essential for creating standards that are both robust and practical for the market.</p>



<h2 class="wp-block-heading">Predicting the Future CRA Harmonised Standards</h2>



<p>While we’re all waiting for CEN, CENELEC, and ETSI to deliver the final harmonised standards under the <strong>CRA standardisation request</strong>, we’re not exactly flying blind. By carefully analysing the CRA’s Annex I requirements and looking at well-established international security frameworks, we can make some highly educated predictions about what these new standards will contain. This gives your product security teams a powerful head start on compliance.</p>



<p>Think of it as a weather forecast for your regulatory roadmap. We can already see the major fronts moving in and can tell which areas will be most affected. This lets us prepare our defences long before the storm actually hits. The key thing to remember is that the standards bodies won&#8217;t be reinventing the wheel; they will build upon globally recognised best practices that already exist.</p>



<p>For manufacturers, this means you can start aligning your internal processes <em>today</em>. You don&#8217;t have to wait for the final text to begin building a security-first culture that maps directly to the CRA&#8217;s core principles.</p>



<h3 class="wp-block-heading">The Secure Development Lifecycle Framework</h3>



<p>One of the most predictable areas for standardisation is the <strong>Secure Development Lifecycle (SDLC)</strong>. The CRA’s Annex I states that products must be &#8220;designed, developed and produced&#8221; to ensure an appropriate level of cybersecurity. A formal standard will turn this high-level principle into a concrete, auditable process.</p>



<p>It’s almost a given that this new standard will draw heavily from existing frameworks, particularly <strong>IEC 62443-4-1</strong>. This is already a mature set of process requirements for the secure development of industrial automation and control systems, making it a perfect foundation.</p>



<p><strong>Practical Example:</strong><br/>Imagine a manufacturer of smart industrial sensors that currently has a fairly informal development process. To get ahead, they can start adopting practices from IEC 62443-4-1 right now. This could include:</p>



<ul class="wp-block-list">
<li>Establishing a formal threat modelling process for every new feature.</li>



<li>Implementing static and dynamic code analysis tools in their CI/CD pipeline.</li>



<li>Documenting all security testing and verification procedures.</li>
</ul>



<p>By doing this now, they are effectively building the evidence needed to demonstrate compliance with the future harmonised standard. With the formal adoption of the CRA fast approaching, it&#8217;s wise to get familiar with the <a href="https://goregulus.com/cra-compliance/cra-deadlines-2025-2027/">key CRA deadlines between 2025 and 2027</a> in our dedicated article.</p>



<h3 class="wp-block-heading">Vulnerability Handling and Disclosure</h3>



<p>Another key area laid out in Annex I is the manufacturer&#8217;s ongoing responsibility for vulnerability management. This covers everything from how you handle discovered vulnerabilities internally to how you disclose them responsibly to the public. The future standards are almost certain to be based on two key ISO/IEC standards.</p>



<ul class="wp-block-list">
<li><strong>ISO/IEC 29147:</strong> This standard is all about vulnerability disclosure. It provides a clear framework for how to receive, assess, and communicate vulnerability information with researchers and users.</li>



<li><strong>ISO/IEC 30111:</strong> This one is the internal counterpart, focusing on vulnerability handling processes. It details the steps a manufacturer should take from the moment a flaw is discovered to when it&#8217;s fixed.</li>
</ul>



<p><strong>Practical Example:</strong><br/>A company making a consumer-facing smart camera can align with these standards by:</p>



<ol class="wp-block-list">
<li><strong>Publishing a Vulnerability Disclosure Policy (VDP)</strong> on their website, providing a clear email address (e.g., <a href="mailto:security@company.com">security@company.com</a>) for researchers to submit findings.</li>



<li><strong>Setting up an internal ticketing system</strong> (like Jira) to track reported vulnerabilities from intake, through triage and engineering, to patch release, mirroring the process flow in ISO/IEC 30111.</li>



<li><strong>Coordinating with the researcher</strong> who found the bug to agree on a public disclosure date, following the guidelines in ISO/IEC 29147.</li>
</ol>
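<p>The three steps above amount to a ticket lifecycle with a fixed set of allowed transitions. Here is a minimal sketch of that flow; the state names are illustrative, not quoted from ISO/IEC 30111.</p>

```python
# Allowed transitions for the intake → triage → remediation → disclosure
# flow described above. State names are illustrative.
TRANSITIONS = {
    "received": {"triaged", "rejected"},
    "triaged": {"in_remediation"},
    "in_remediation": {"patch_released"},
    "patch_released": {"disclosed"},
}

class VulnerabilityTicket:
    def __init__(self, report_id: str):
        self.report_id = report_id
        self.state = "received"
        self.history = ["received"]  # the auditable trail

    def advance(self, new_state: str) -> None:
        """Move to a new state, rejecting any out-of-order jump."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot go {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

ticket = VulnerabilityTicket("VDP-2026-001")
for step in ("triaged", "in_remediation", "patch_released", "disclosed"):
    ticket.advance(step)
```

<p>The point of the hard-coded transition table is the audit trail: a ticket cannot reach &#8220;disclosed&#8221; without a recorded triage and patch, which is exactly what a market surveillance authority will ask to see.</p>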



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>By aligning your internal vulnerability management programme with these two standards today, you are essentially pre-complying with the CRA. It ensures your processes for intake, triage, remediation, and disclosure will meet the expectations of market surveillance authorities.</p>
</blockquote>



<h3 class="wp-block-heading">Software Bill of Materials Requirements</h3>



<p>The CRA makes it a legal requirement for manufacturers to create and maintain a <strong>Software Bill of Materials (SBOM)</strong> as part of their technical documentation. While the Act just says it must be in a &#8220;commonly used, machine-readable format,&#8221; the harmonised standards will nail down the specific details.</p>



<p>It&#8217;s a near certainty that the new standard will formalise the use of the two dominant industry formats:</p>



<ol class="wp-block-list">
<li><strong>SPDX (Software Package Data Exchange):</strong> An open standard for communicating SBOM information that excels at tracking licensing and provenance details. It’s already recognised as an international standard, <strong>ISO/IEC 5962</strong>.</li>



<li><strong>CycloneDX:</strong> A lightweight, security-focused SBOM standard from OWASP, specifically designed for easy integration into automated security tools.</li>
</ol>



<p><strong>Practical Example:</strong><br>A company producing a smart home hub can prepare by integrating SBOM generation directly into its build process. Using open-source tools, they could automatically generate an SBOM in both SPDX and CycloneDX formats for every software release. This file would list all open-source libraries, their versions, and their dependencies, creating the exact &#8220;ingredients list&#8221; the CRA demands.</p>
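<p>To show what that &#8220;ingredients list&#8221; looks like on the wire, here is a sketch that assembles a minimal CycloneDX-style document with only the Python standard library. The component entries are fictional, and the fields are trimmed to the essentials of the CycloneDX JSON layout; a real build pipeline would use a dedicated generator tool rather than hand-rolling this:</p>

```python
# Sketch: emit a minimal CycloneDX-style SBOM as JSON using only the
# standard library. The component entries are fictional examples; real
# builds would use a generator tool integrated into CI instead.
import json


def make_sbom(components):
    """Build a minimal CycloneDX-shaped document.

    `components` is a list of (name, version) pairs. Fields follow the
    CycloneDX JSON format, reduced to the essentials for illustration.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": name,
                "version": version,
                "purl": f"pkg:generic/{name}@{version}",
            }
            for name, version in components
        ],
    }


if __name__ == "__main__":
    sbom = make_sbom([("openssl", "3.0.13"), ("zlib", "1.3.1")])
    print(json.dumps(sbom, indent=2))
```

<p>Because the output is machine-readable JSON, the same artefact can be diffed between releases and fed straight into vulnerability-matching tools.</p>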



<p>By taking these predictive steps, you transform compliance from a last-minute, reactive scramble into a proactive, strategic part of your business. You build resilience directly into your products and processes, turning regulatory uncertainty into a tangible competitive advantage.</p>



<h2 class="wp-block-heading">Your Action Plan for CRA Compliance</h2>



<figure class="wp-block-image size-large"><img decoding="async" src="https://goregulus.com/wp-content/uploads/2026/03/cra-standardisation-request-cen-cenelec-etsi-process-steps.jpg" alt="A three-step diagram outlining processes: map processes, engage national committee, and prepare technical files."/></figure>



<p>Turning the legal language of the Cyber Resilience Act into a concrete engineering plan can feel overwhelming, especially while the harmonised standards are still being written. But here’s the reality: waiting for <strong>CEN, CENELEC, and ETSI</strong> to publish the final rules isn&#8217;t a strategy. It’s a gamble. Getting ahead of the curve now is how you turn this regulatory challenge into a genuine competitive edge.</p>



<p>The whole point of using harmonised standards, once they&#8217;re available, is to gain the <strong>‘presumption of conformity’</strong>. This is your simplest and most direct path to putting a CE mark on your product and selling it in the EU. If you decide to go your own way, the burden of proof is entirely on you to demonstrate that your alternative methods satisfy the CRA&#8217;s essential security requirements—a path filled with complexity and legal risk.</p>



<p>This section lays out a practical, step-by-step plan to get your compliance journey started today. It’s all about turning regulatory theory into a structured, manageable process.</p>



<h3 class="wp-block-heading">Step 1: Map Your Processes Against Annex I</h3>



<p>Your first, and most important, job is to treat the CRA’s Annex I as your roadmap. The essential requirements laid out there are the bedrock of the entire regulation. No matter what the final standards say, your products <em>must</em> meet these obligations.</p>



<p>Start by conducting a gap analysis. Go through each requirement in Annex I and map your current development, security, and post-market surveillance processes against it. This simple exercise will immediately show you where you’re strong and where you’re falling short.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The goal here isn&#8217;t to be perfect overnight. It&#8217;s to create an honest baseline. Document where your processes already align and, more importantly, where the gaps are. This documented analysis becomes the foundation of your compliance plan.</p>
</blockquote>



<p>For example, Annex I demands that products are shipped with a secure default configuration. Your mapping exercise should prompt questions like:</p>



<ul class="wp-block-list">
<li>Do we have a documented process for defining what &#8220;secure by default&#8221; actually means for our products? (e.g., all unnecessary ports closed, default passwords banned).</li>



<li>Is this configuration tested and verified before every release?</li>



<li>How do we make sure this secure configuration isn&#8217;t compromised by future updates?</li>
</ul>
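<p>Those questions can be turned into an automated release gate. The sketch below is a hypothetical example of such a check; the configuration fields and policy rules are assumptions about what a team might verify before shipping, not requirements taken from the CRA text:</p>

```python
# Sketch of an automated "secure by default" release gate. The config
# fields and policy rules here are hypothetical examples of what a
# team might verify before every release.
WEAK_DEFAULTS = {"admin", "password", "12345", ""}


def check_secure_defaults(config: dict) -> list:
    """Return a list of policy violations; an empty list means pass."""
    violations = []

    # No weak or empty factory passwords.
    if config.get("default_password") in WEAK_DEFAULTS:
        violations.append("default password is weak or empty")

    # All unnecessary ports closed by default.
    required = set(config.get("required_ports", []))
    for port in config.get("open_ports", []):
        if port not in required:
            violations.append(f"unnecessary open port: {port}")

    # Security updates on by default.
    if not config.get("auto_updates_enabled", False):
        violations.append("automatic security updates disabled by default")

    return violations
```

<p>Wiring a check like this into CI means a release that weakens the default configuration fails the build instead of reaching the market.</p>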



<p>This kind of proactive mapping builds the internal evidence you&#8217;ll need to produce later.</p>



<h3 class="wp-block-heading">Step 2: Start Organising Your Technical Documentation Now</h3>



<p>Don&#8217;t wait for the harmonised standards to be finalised before you start building your technical documentation. The CRA already spells out the required structure, so you can build that framework today and use placeholders for the specific evidence you&#8217;ll add later.</p>



<p>Here&#8217;s a practical example. A smart lock manufacturer knows they need to demonstrate a solid vulnerability management process. Even without the final standard, they can start organising their evidence right now.</p>



<ul class="wp-block-list">
<li><strong>Document What Exists:</strong> They can document their current system for receiving vulnerability reports, their triage methods, and their patching process.</li>



<li><strong>Create Placeholders:</strong> In their technical file, they can create a section titled &#8220;Vulnerability Handling (Annex I, Part 2)&#8221; and drop in their current process documents.</li>



<li><strong>Spot the Gaps:</strong> This immediately shows them what’s missing, like a formal Coordinated Vulnerability Disclosure (CVD) policy. Now they have a clear action item to work on.</li>
</ul>
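<p>Scaffolding that placeholder structure can itself be automated. In this sketch the section names paraphrase the CRA&#8217;s documentation topics and are purely illustrative, not an official technical-file layout:</p>

```python
# Sketch: scaffold a technical-file skeleton with placeholder sections.
# The section names paraphrase the CRA's documentation topics and are
# illustrative, not an official structure.
from pathlib import Path

SECTIONS = [
    "01-product-description",
    "02-risk-assessment",
    "03-secure-by-default-configuration",
    "04-vulnerability-handling-annex-I-part-2",
    "05-sbom",
    "06-declaration-of-conformity",
]


def scaffold_technical_file(root: str) -> list:
    """Create one folder per section, each seeded with a TODO placeholder."""
    created = []
    for section in SECTIONS:
        folder = Path(root) / section
        folder.mkdir(parents=True, exist_ok=True)
        (folder / "README.md").write_text(
            f"# {section}\n\nTODO: attach current evidence here.\n"
        )
        created.append(str(folder))
    return created
```

<p>Dropping today&#8217;s process documents into these folders makes the gaps visible at a glance: any section still holding only its TODO placeholder is an open action item.</p>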



<p>By organising documentation early, you create a living file that evolves as the standards become clear. It&#8217;s far less painful than trying to generate years of evidence from scratch when the deadline is looming. You can learn more about how to <a href="https://goregulus.com/cra-basics/obtain-a-ce-certificate-for-the-cra/">obtain a CE certificate for the CRA</a> and the critical role documentation plays in our detailed guide.</p>



<h3 class="wp-block-heading">Manufacturer&#8217;s CRA Preparedness Checklist</h3>



<p>To help you get started, here&#8217;s a checklist of concrete actions your team can take right now. This isn&#8217;t about boiling the ocean; it&#8217;s about taking the first practical steps on the road to compliance.</p>



<figure class="wp-block-table"><table><tr>
<th align="left">Action Item</th>
<th align="left">Why It Matters</th>
<th align="left">First Step to Take</th>
</tr>
<tr>
<td align="left"><strong>Appoint a CRA Lead</strong></td>
<td align="left">Compliance needs a clear owner. This person will coordinate efforts across engineering, legal, and product teams.</td>
<td align="left">Identify a senior individual with the authority to drive cross-functional initiatives.</td>
</tr>
<tr>
<td align="left"><strong>Inventory Your Products</strong></td>
<td align="left">You can&#039;t comply if you don&#039;t know what&#039;s in scope. You need a full list of products with digital elements.</td>
<td align="left">Create a spreadsheet listing all hardware and software products sold in the EU.</td>
</tr>
<tr>
<td align="left"><strong>Analyse Annex I Gaps</strong></td>
<td align="left">This is the core of your initial assessment, showing you exactly where to focus your resources first.</td>
<td align="left">Schedule a workshop with your development leads to review Annex I, requirement by requirement.</td>
</tr>
<tr>
<td align="left"><strong>Draft an SBOM for one product</strong></td>
<td align="left">The SBOM is a key deliverable. Creating one now helps you understand the tooling and process effort required.</td>
<td align="left">Choose a representative product and use an open-source tool to generate a preliminary SBOM.</td>
</tr>
<tr>
<td align="left"><strong>Review Your Vulnerability Process</strong></td>
<td align="left">Vulnerability handling is a non-negotiable part of the CRA. Your process needs to be documented and effective.</td>
<td align="left">Document your current process for receiving, triaging, and patching vulnerabilities.</td>
</tr>
<tr>
<td align="left"><strong>Identify Your National Body</strong></td>
<td align="left">Engaging with standardisation gives you a voice and early insights.</td>
<td align="left">Search for your country&#039;s national standards body (e.g., UNE, DIN, BSI) and find their contact for EU technical committees.</td>
</tr>
</table></figure>



<p>This checklist turns the abstract requirements of the CRA into a manageable project plan. By starting with these small, tangible wins, you build the momentum needed for the larger compliance effort.</p>



<h3 class="wp-block-heading">Step 3: Engage With the Standardisation Process</h3>



<p>The <strong>CRA standardisation request</strong> has kicked off a massive collaborative effort across Europe, and your organisation can—and should—have a voice in it. The standards are being hammered out in technical committees at CEN, CENELEC, and ETSI, all with input from national standards bodies.</p>



<p>Find your country&#8217;s national standards body (like UNE in Spain, BSI in the UK, or DIN in Germany) and get involved. When your experts participate, they can offer feedback on drafts, helping to ensure the final rules are practical and don&#8217;t create an impossible burden for your industry.</p>



<p>This whole framework is part of a well-established EU strategy under Regulation (EU) No 1025/2012. Historically, about <strong>30% of standards</strong> are created specifically to support legislation, directly impacting businesses across Europe through <strong>43 national members</strong> in <strong>34 countries</strong>. The efficiency is undeniable: one European Standard replaces <strong>34</strong> separate national ones. This model is being actively funded, with the <strong>2025 EISMEA call</strong> earmarking <strong>€7,851,000</strong> for <strong>27 topics</strong>. It reflects the progress we saw with the Data Act, where <strong>Mandate M/614</strong> is set to produce <strong>seven deliverables</strong> by <strong>September 12, 2025</strong>—a clear blueprint for the CRA to follow. You can find more insights about <a href="https://www.cencenelec.eu/european-standardization/">European standardisation on cencenelec.eu</a>.</p>



<p>As you build out your action plan, think about incorporating ideas from <a href="https://www.learniverse.app/blog/compliance-training-best-practices">Actionable Compliance Training Best Practices</a> to make sure your entire team is ready. Getting involved in the process doesn&#8217;t just let you influence the outcome; it gives you a valuable early look at where the requirements are heading. This proactive stance transforms compliance from a reactive burden into a strategic advantage.</p>



<h2 class="wp-block-heading">Frequently Asked Questions</h2>



<p>As the Cyber Resilience Act moves closer to full enforcement, manufacturers are understandably full of questions. The role of the <strong>CRA standardisation request</strong> and the work being done by <strong>CEN, CENELEC, and ETSI</strong> are at the centre of many of these conversations.</p>



<p>Here are some clear, straightforward answers to the questions we hear most often.</p>



<h3 class="wp-block-heading">When Will The Final CRA Harmonised Standards Be Published?</h3>



<p>There isn&#8217;t a single, fixed publication date, but the standardisation machine is definitely in motion. Typically, it takes about <strong>18 to 24 months</strong> to develop harmonised standards from the moment a request is issued.</p>



<p>We expect to see drafts for the main horizontal standards popping up throughout <strong>2025</strong>. The goal is to have the final, published versions ready by late <strong>2026</strong>.</p>



<p>Given that the CRA becomes fully applicable in late <strong>2027</strong>, this timeline is tight, but it has to be. Keep a close eye on updates from CEN, CENELEC, and ETSI. Even the early drafts will give you huge clues about where the final requirements are heading.</p>



<h3 class="wp-block-heading">Is It Mandatory To Use Harmonised Standards?</h3>



<p>No, you aren&#8217;t legally forced to use the harmonised standards. However, doing so gives you a massive advantage called the <strong>&#8220;presumption of conformity.&#8221;</strong></p>



<p>Think of it as the official, pre-approved path to compliance. It’s the simplest and least risky way to prove your product meets the CRA’s rules.</p>



<p>If you decide to go your own way, the burden of proof is all on you. You&#8217;ll need to create a mountain of documentation to convince regulators that your custom solutions are just as good as what the standards require. For example, instead of following a harmonised standard for secure updates, you&#8217;d have to write a detailed technical justification explaining why your proprietary update mechanism provides an equivalent level of security, complete with risk assessments and independent test results. This path is far more work, more expensive, and carries a much higher legal risk if a market surveillance authority comes knocking.</p>



<h3 class="wp-block-heading">How Can My Company Influence The New CRA Standards?</h3>



<p>You absolutely can have a say in how these standards are written. The best way is to get involved with your national standardisation body (like UNE in Spain, DIN in Germany, or AFNOR in France).</p>



<p>These national bodies are the members of CEN and CENELEC. They send their experts to sit on the technical committees that actually write the standards.</p>



<p>By joining and participating, your company&#8217;s experts can give direct feedback on drafts. This is your chance to make sure the final rules are practical, technically sound, and don&#8217;t create impossible hurdles for your industry. For example, if a draft standard proposes a 24-hour patching deadline for critical vulnerabilities, a company representative on the committee could argue that a 72-hour window is more realistic for complex embedded systems, providing data to back up their case. It’s a direct line to shaping the regulations you&#8217;ll have to live by.</p>



<h3 class="wp-block-heading">What Should We Do Before The Standards Are Available?</h3>



<p>Sitting on your hands and waiting for the final standards to drop is not a winning strategy. Right now, your compliance work has to be based on the legal text of the CRA itself, especially the essential requirements in <strong>Annex I</strong>.</p>



<p>Start by documenting how you interpret those requirements and the specific security measures you&#8217;re putting in place. A great way to get your organisation ready is by looking at existing, solid frameworks. For example, a <a href="https://audityour.app/blog/isms-standards-iso-27001">practical guide to ISMS Standards ISO 27001</a> can give you a proven blueprint for building an effective security posture.</p>



<p>This proactive work builds a strong compliance foundation. When the new standards are finally released, you can quickly map what you&#8217;ve already done to the official requirements, saving yourself a world of time and stress.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>The Cyber Resilience Act introduces complex new obligations for manufacturers. Instead of navigating vague requirements with spreadsheets and expensive consultants, let <strong>Regulus</strong> provide clarity. Our platform automates applicability assessments, maps your specific obligations, and provides a step-by-step roadmap to get your products ready for the EU market. Gain confidence and ensure your products are compliant by visiting <a href="https://goregulus.com">https://goregulus.com</a>.</p>
<p>The post <a href="https://goregulus.com/cra-basics/cra-standardisation-request-cen-cenelec-etsi/">CRA standardisation request CEN CENELEC ETSI: A 2026 compliance guide</a> appeared first on <a href="https://goregulus.com">Regulus</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
