Every manufacturer must implement the essential requirements of Annex I, Part I of the EU CRA in their products. They must also document in a conformity assessment how they comply with these requirements. The wording of the essential requirements is generic and hard to understand. Germany’s Federal Office for Information Security (BSI) published a Technical Guideline (PDF) that translates the legalese of the EU CRA into concrete, actionable requirements. I will add many examples from my work with embedded Linux devices to illustrate the requirements.
Introduction
Let me recap some points from my previous article Embedded Devices Covered by EU Cyber Resilience Act (CRA) to understand why the essential requirements from Annex I are the centrepiece of every conformity assessment and why every manufacturer should start taking action now.
- All embedded devices are covered by the EU CRA.
- Important dates:
- On 11 December 2024 (effective date), the EU CRA entered into force.
- From 11 September 2026 (notification date), manufacturers must notify the relevant authorities about any actively exploited vulnerability of the device and about any severe incident involving the device.
- From 11 December 2027 (penalty date), the EU may charge manufacturers with heavy penalties for any violations of the EU CRA.
- Every device must undergo a conformity assessment.
- For the lowest-risk default devices, manufacturers do a self-assessment.
- For the higher-risk important devices, manufacturers perform an assessment according to a standard and a notified body verifies the assessment.
- For the highest-risk critical devices, a notified body performs the assessment according to a standard.
In the conformity assessment, manufacturers must prove how their devices satisfy the essential requirements of Annex I. Annex I is split into two parts:
- Part I – Cybersecurity requirements relating to the properties of products with digital elements.
- Part II – Vulnerability handling requirements.
I’ll cover Part I in this post and Part II in a later post.
When I read the essential requirements for my newsletter back in November 2023 and again for this post, I struggled to understand their vague wording. Hence, I started searching the web for posts with explanations and examples. Eventually, I stumbled upon the page BSI TR-03183: Cyber Resilience Requirements for Manufacturers and Products from Germany’s Federal Office for Information Security (BSI) – with three excellent technical guidelines about Cyber Resilience Requirements for Manufacturers and Products for download.
- Part 1: General Requirements, which covers Annex I of the EU CRA and is the basis for my post,
- Part 2: Software Bill of Materials (SBoM), and
- Part 3: Vulnerability Reports and Notifications.
While coming up with the examples and explanations for each requirement, I often shook my head in disbelief. How should my typical customers – small and medium manufacturers of IoT devices and machines – be able to implement all the essential requirements? Many of them have ignored security issues nearly completely.
These companies barely have enough developers to build the main applications of their devices (their core business!) and struggle to build the embedded Linux system provided by their board vendor. The essential requirements of the EU CRA call for deep system knowledge including secure OTA updates, secure boot, generation and management of cryptographic keys, integrity checks of software images, user authentication and attack surfaces.
Many companies simply lack the people and the expertise to implement the cybersecurity requirements. And they will have huge problems finding the right people, because such people are rare and every company will be looking for them. The best way out would be a collaborative effort of the embedded industry to provide ready-made solutions for the essential requirements like microservices, build recipes, scripts and default configurations.
Every direct or indirect supplier to a device manufacturer should take over as much of the implementation of the relevant essential requirements as possible, or should at least provide some enabling infrastructure. Suppliers could save manufacturers a lot of time and money and enable them to focus on their core business. Suppliers that provide the most value to manufacturers will be able to charge higher prices and enjoy higher profits.
Essential Requirements One by One
I’ll go through the essential requirements of Annex I, Part I of the EU CRA one by one. The title of each requirement – e.g., Security Updates (§2c, REQ_ER4) – is taken from the BSI’s Technical Guideline TR-03183: Part 1 – General Requirements. I added the numbering from Annex I (e.g., “§2c”) and from the Technical Guideline (e.g., “REQ_ER4”) in parentheses after the title.
For each requirement, I repeat the original text from the EU CRA before giving examples from my own work. The BSI refines the sometimes vague requirements from the EU CRA. The refined requirements are the basis for my examples and explanations. I don’t repeat the BSI’s refined requirements here, because this would increase the length of this article even more.
Security by Design (§1, REQ_ER1)
Products with digital elements shall be designed, developed and produced in such a way that they ensure an appropriate level of cybersecurity based on the risks.
Manufacturers must apply a software development process where security is not an afterthought but built in right from the beginning. They perform a risk assessment early in the project (e.g., in the design or planning phase) and make sure that the risks are addressed during development. Of course, the risk assessment is not a one-off at the beginning but is continuously updated during the project by the development teams.
The BSI requires that “manufacturers must follow best practices for the secure (software-) development lifecycle e.g. OWASP SAMM, BSI TR-03185, ISO 27034”. In plain words, manufacturers must integrate these best practices into Continuous Delivery, Scrum, XP, SAFe or whatever software development process they are following. They must show how they satisfy the essential requirements §2a – §2m of Annex I, Part I of the EU CRA.
No Known Vulnerabilities (§2a, REQ_ER2)
Products with digital elements shall be made available on the market without known exploitable vulnerabilities.
For every partial or full release, manufacturers must ensure that the released software doesn’t contain any known exploitable vulnerabilities. If they know of such a vulnerability, they must not release the software. If they learn about such a vulnerability in a released device, they must provide a security update with a fix for the vulnerability in a timely way (see requirement Security Updates).
Satisfying this requirement is only possible by performing a vulnerability scan of the complete system. Manufacturers generate an SBoM and match each package of the Linux system running on the embedded device against the known CVEs (Common Vulnerabilities and Exposures). They can retrieve the CVEs from the National Vulnerability Database (NVD) or the CVE raw database (see Marta Rybczynska’s talk Security Improvements in Yocto 5.1 for how to map the package names in the databases to those produced by a Yocto build).
For each vulnerability in the per-package CVE list, manufacturers must assess whether it is exploitable on their devices. If it is, they must fix it. They can approach the authors of the package (e.g., SQLite, OpenSSH or Qt), get their assessment and help them fix the vulnerability. They could also fix the vulnerability themselves and upstream the fix. If a vulnerability is not exploitable, manufacturers must justify in writing why it is not exploitable in the specific context of their device.
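The triage described above can be sketched as a small gate in the release pipeline. The data model and names are my own illustration, not part of any standard tooling; real setups would feed the findings from an SBoM/CVE scanner such as Yocto’s cve-check.

```python
from dataclasses import dataclass

@dataclass
class CveFinding:
    package: str
    cve_id: str
    exploitable: bool       # result of the per-device assessment
    justification: str = "" # mandatory when marked as not exploitable

def release_blockers(findings):
    """Return CVEs that block a release: exploitable ones, and
    'not exploitable' claims that lack a written justification."""
    blockers = []
    for f in findings:
        if f.exploitable:
            blockers.append(f)
        elif not f.justification.strip():
            blockers.append(f)  # a claim without rationale is not acceptable
    return blockers

findings = [
    CveFinding("openssl", "CVE-2024-0001", exploitable=True),
    CveFinding("sqlite", "CVE-2024-0002", exploitable=False,
               justification="Affected feature disabled at build time."),
    CveFinding("qtbase", "CVE-2024-0003", exploitable=False),  # no rationale
]
print([f.cve_id for f in release_blockers(findings)])
# → ['CVE-2024-0001', 'CVE-2024-0003']
```

The gate fails the build until every finding is either fixed or justified in writing, which mirrors the obligation in §2a.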
Secure Default Configuration (§2b, REQ_ER3)
Products with digital elements shall be made available on the market with a secure by default configuration, unless otherwise agreed between manufacturer and business user in relation to a tailor-made product with digital elements, including the possibility to reset the product to its original state.
This requirement is nothing else but a factory reset to the initial or any other release, which has no known vulnerabilities. Its implementation must remove all local user data and reset security data like passwords, keys and access tokens to their original values.
For example, the driver terminal of a harvester stores data about drivers, farmers and fields. It also stores passwords so that drivers, dealers and technicians can perform different tasks like harvesting, provisioning spare or new parts, failure diagnosis and repair. The reset purges all this data from the terminal.
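A factory reset along these lines can be sketched as follows. The key-value store and the default values are hypothetical stand-ins for the device’s persistent settings; the point is that user data are purged and security data return to their original values.

```python
DEFAULTS = {
    "pin.technician": "factory-default",  # placeholder; replaced at commissioning
    "wifi.password": "",
    "api.token": "",
}

def factory_reset(storage: dict) -> dict:
    """Reset the persistent store: security data go back to their original
    values, everything else (driver profiles, field data, ...) is purged.
    `storage` stands in for the device's persistent key-value store."""
    storage.clear()            # purge all local user data
    storage.update(DEFAULTS)   # restore security data to default values
    return storage

state = {"pin.technician": "4711", "driver.profiles": ["Alice", "Bob"],
         "wifi.password": "secret", "api.token": "abc"}
factory_reset(state)
print(sorted(state))
# → ['api.token', 'pin.technician', 'wifi.password']
```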
Security Updates (§2c, REQ_ER4)
Products with digital elements shall ensure that vulnerabilities can be addressed through security updates, including, where applicable, through automatic security updates that are installed within an appropriate timeframe enabled as a default setting, with a clear and easy-to-use opt-out mechanism, through the notification of available updates to users, and the option to temporarily postpone them.
The device must be able to update all the security-relevant parts of the system like bootloader, Linux kernel and root filesystem automatically. The EU CRA doesn’t mandate whether updates are performed over the air (OTA), from a USB drive or in any other way.
The automatic procedure could look like this. The device checks regularly – say, once a day – whether a security update is available. If so, it automatically installs the update in the inactive partition (when following an A/B update strategy). The device notifies the user that an update is waiting for activation. Here are two possible ways to continue:
- The device automatically switches to the inactive system during the wee hours of the night or any other time when nobody works.
- The user must explicitly confirm the activation of the update. She can postpone the activation until tonight, until the next morning or by one day. She can postpone the activation three times, before the device forces a switch to the inactive system.
There are numerous other ways. Manufacturers must find a way that suits their customers. Machines running 24/7 typically have scheduled downtimes for maintenance and for software updates. Fixing an actively exploited vulnerability may well be the reason for an unscheduled downtime.
The important point is that users have no way to avoid security updates forever. The EU CRA is even stricter: security updates must happen in a timely way. One day or one week is certainly timely; two weeks may be, depending on the severity of the vulnerability; one month or longer most likely is not.
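The postponement policy from the second bullet above can be sketched like this. The limit of three postponements and the A/B partition names come from the example; everything else is illustrative.

```python
class UpdateScheduler:
    """Sketch of the postponement policy: the user may postpone an
    installed-but-inactive update three times, then the device forces
    the switch to the updated (inactive) partition."""
    MAX_POSTPONEMENTS = 3

    def __init__(self):
        self.postponements = 0
        self.active_partition = "A"  # the update was installed into "B"

    def postpone(self) -> bool:
        if self.postponements < self.MAX_POSTPONEMENTS:
            self.postponements += 1
            return True              # postponement granted
        self.activate()
        return False                 # limit reached: forced activation

    def activate(self):
        self.active_partition = "B"

sched = UpdateScheduler()
results = [sched.postpone() for _ in range(4)]
print(results, sched.active_partition)
# → [True, True, True, False] B
```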
The update service must be able to detect whether an attacker tampered with the update package. The build calculates a checksum (a cryptographic hash) of the update package and signs the checksum with the manufacturer’s private key. The update service recalculates the checksum, verifies the signature with the corresponding public key and compares the two checksums. If they match, all is fine. If not, the device rejects the security update.
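A minimal sketch of the integrity check: it compares only a SHA-256 checksum against the digest from the update manifest. A bare hash detects corruption but not forgery; a real device would additionally verify an asymmetric signature (e.g., Ed25519 or RSA-PSS) over that digest with the manufacturer’s public key, which is omitted here for brevity.

```python
import hashlib
import hmac

def verify_update(package: bytes, expected_digest: str) -> bool:
    """Recompute the package checksum and compare it to the digest
    shipped in the (signed) update manifest."""
    actual = hashlib.sha256(package).hexdigest()
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(actual, expected_digest)

pkg = b"rootfs-image-v2.1"
good = hashlib.sha256(pkg).hexdigest()
print(verify_update(pkg, good))                # → True
print(verify_update(pkg + b"tampered", good))  # → False
```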
Access Control (§2d, REQ_ER5)
Products with digital elements shall ensure protection from unauthorised access by appropriate control mechanisms, including but not limited to authentication, identity or access management systems, and report on possible unauthorised access.
Many driver terminals of agricultural and construction machines are excellent examples of how not to do access control. Every person who has a key for the machine can use the terminal without any further authentication. The terminal has different user types with different access privileges: normal user, power user, dealer, technician and developer. All people of a certain user type share the same 4-digit PIN. Normal users don’t need a PIN.
The PINs are changed once a year. Manufacturers freely share, say, the PINs of technicians with power and even normal users so that these users can help with fixing machine problems. After a couple of months, the PINs are known by everyone. The root password for SSH access is the same for all terminals. Moreover, it is easy to guess or can be found in easy-to-access documentation.
The only thing the example manufacturer gets right (a real-life example, by the way) is the introduction of different user types. A normal user can perform all the actions required for harvesting. A power user gets a bit more control to carry out simple repair and maintenance tasks. A technician has access to all diagnostic functionality. A dealer can commission the machine for the new owner. Developers can do nearly everything.
The EU CRA forbids, however, that all users of a type share the same password or the same secrets for authentication. Each user must have their own dedicated secret (password, code card, fingerprint, face, etc.). The secrets must be created according to best practices in cryptography. Passwords must not contain any regularities, common strings, MAC addresses, WLAN SSIDs, or any names used by the manufacturer.
Using the same password for logging in to all devices via SSH is a no-go. Each device must have its own password or its own public key. The group of people with SSH access to devices should be kept to an absolute minimum. The owners of the devices should be able to change the secrets if the device was compromised.
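A crude sketch of the password hygiene rules from above: reject secrets derived from device identifiers, manufacturer names or common strings. The blocklist and thresholds are illustrative assumptions; a real product should use a proper strength estimator (e.g., zxcvbn) instead of hand-rolled rules.

```python
import re

# illustrative blocklist: common strings plus assumed manufacturer names
BLOCKLIST = {"admin", "root", "password", "acme"}
MAC_RE = re.compile(r"([0-9a-f]{2}[:-]){5}[0-9a-f]{2}", re.IGNORECASE)

def acceptable_password(pw: str, device_ssid: str) -> bool:
    """Reject passwords that are too short or contain blocklisted
    strings, the device's WLAN SSID, or a MAC address."""
    low = pw.lower()
    if len(pw) < 12:
        return False
    if any(word in low for word in BLOCKLIST):
        return False
    if device_ssid.lower() in low:
        return False
    if MAC_RE.search(pw):
        return False
    return True

print(acceptable_password("acme-2024!", "Harvester-42"))                   # → False
print(acceptable_password("correct horse battery staple", "Harvester-42"))  # → True
```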
The requirement to have unique secrets per device and per user has consequences for bringing devices or machines into service (commissioning). When manufacturers install the bootloader, kernel and rootfs images on the devices during factory installation, they can still use default secrets. Only during commissioning do manufacturers know the owners and users of the devices and can create the secrets and store them on the devices. Adding users, removing users and changing users’ secrets must be possible during the lifetime of the devices. In summary, a commissioning step will be a must for almost all devices.
There is still more to access control. The device must make brute-force attacks on the authentication impracticable. One strategy could be to require 2-factor authentication after three failed attempts. Another strategy could be to enforce a pause of 30 seconds after the first failed attempt and to double the pause after each further failure. After the eighth failed attempt, the device locks down and requires a user with higher privileges to break the lockdown. Or the device could automatically delete all sensitive data. There are many good strategies; those applied by smartphones can serve as a model. Moreover, devices shall log failed authentication attempts, identify possibly unauthorised access attempts and notify authorised users about these attempts.
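The escalation strategy with doubling pauses and a final lockdown could be sketched like this. The 30-second base pause and the threshold of eight attempts come from the example above; they are not mandated by the EU CRA.

```python
def lockout_policy(failed_attempts: int):
    """Return the action after N consecutive failed authentications:
    30 s pause after the first failure, doubling each time, and a full
    lockdown after the eighth failure."""
    if failed_attempts == 0:
        return ("ok", 0)
    if failed_attempts >= 8:
        return ("lockdown", None)  # higher-privileged user must unlock
    return ("pause", 30 * 2 ** (failed_attempts - 1))

print(lockout_policy(1))  # → ('pause', 30)
print(lockout_policy(4))  # → ('pause', 240)
print(lockout_policy(8))  # → ('lockdown', None)
```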
Devices must never send any secrets in clear text over unencrypted channels. Channels include network connections to the cloud or to other devices. If one ECU in a construction machine sent a password in clear text over the CAN bus to another ECU, this would violate the access control requirement. Storing passwords in clear text in binaries would also be a violation. So would sending passwords in clear text from one application to another via D-Bus or gRPC.
Confidentiality Protection (§2e, REQ_ER6)
Products with digital elements shall protect the confidentiality of stored, transmitted or otherwise processed data, personal or other, such as by encrypting relevant data at rest or in transit by state of the art mechanisms, and by using other technical means.
X-ray fluorescence (XRF) analysers detect which chemical elements an object is made of. Companies use XRF analysers to find valuable materials in rubbish or iron ore, silver and gold in mines. The analysers come preloaded with calibration data for exact results. They store the measurements locally and transfer them via WLAN or USB drive to other computers in the company network. Of course, companies want to keep the measurement data secret.
Manufacturers must encrypt data transferred to and from their devices and stored on their devices. Encryption is required no matter whether the data transfer happens over a local company network, over the public Internet or by USB drive. The encryption must follow best cryptographic practices and must withstand brute-force and replay attacks.
In the example of the XRF analyser, measurement data is clearly sensitive and must be protected by encryption. The same is true for harvest data including details about drivers, fields and farmers. Naturally, passwords, keys, TeamViewer or VNC access tokens, API access tokens and other secrets must be protected properly. Only data that are publicly available anyway (e.g., user manuals) need no protection.
Integrity Protection (§2f, REQ_ER7)
Products with digital elements shall protect the integrity of stored, transmitted or otherwise processed data, personal or other, commands, programs and configuration against any manipulation or modification not authorised by the user, and report on corruptions.
Whereas confidentiality protection is more about protecting user data, integrity protection is more about making sure that system data have not been modified illegally. The device checks that the bootloader, kernel and rootfs images have been unchanged since their last update. Manufacturers cryptographically sign these images. The device checks the signatures and stops the boot process if a check fails. Similarly, the manufacturer must sign the update archive, and the update service must check its integrity.
Users must be able to reset the device into a state that passes the integrity checks. When the device employs an A/B update strategy, there is always one inviolate system: the currently inactive one. The device can switch to it automatically, or the user can trigger the switch. When the device uses OSTree – git-like version control for complete file systems – it can roll back the system to any version known to be good. The rollback could undo the last changes to an application and its libraries that made the integrity check fail. OSTree works in a much more fine-grained way than A/B updates. As usual, there are many other options. Manufacturers should refrain from re-inventing the wheel.
Secure boot is the foundation for integrity protection. It creates a chain of trust from unchangeable ROM code (the root of trust) over the bootloader and the Linux kernel to the rootfs. Every item in the chain verifies the integrity of the next item and only starts the next item if the check succeeds. If one check fails, the boot process fails. The device may start some automatic recovery like switching to the inactive system or rolling back the rootfs to a version known to be good. The device could also stop, because one of the images was tampered with. This approach would require a technician to remedy the problem.
Data Minimisation (§2g, REQ_ER8)
Products with digital elements shall process only data, personal or other, that are adequate, relevant and limited to what is necessary in relation to the intended purpose of the product with digital elements (minimisation of data).
A device may only collect or transfer data that it needs to do its job. A smart speaker may listen to spoken commands like “Play best songs by Leonard Cohen” and execute them. But it must not listen in on conversations, let alone transfer the transcriptions to its manufacturer.
Availability of Essential and Basic Functions (§2h, REQ_ER9)
Products with digital elements shall protect the availability of essential and basic functions, also after an incident, including through resilience and mitigation measures against denial-of-service attacks.
A harvester is controlled by 20-30 ECUs (electronic control units) including ECUs for engine, drive, harvesting implements (e.g., cutting drums, cleaning turbines, conveyor belts), rear-view camera and telematics. These ECUs communicate with each other over 2-4 CAN buses. The telematics box has access to all CAN buses and is connected via 4G/5G to the cloud.
If attackers gain access to the telematics box, they can flood the CAN buses with messages. The driver can’t change the cutting height, the speed of the cleaning turbines and the engine speed, because these changes require CAN messages to get through to the respective ECUs. The best course of action for the driver is to turn off the ignition. The operator cannot drive the harvester to the next garage for repair (an essential function) and cannot continue harvesting (a basic function). Such a harvester clearly violates requirement §2h.
The manufacturer could remedy the situation as follows.
- ECUs providing essential and basic functions are on a different CAN bus than less important ECUs. The telematics box must not write to the CAN bus with the essential and basic ECUs. It may only read from this CAN bus.
- The telematics box has a mechanism to detect when some process is flooding a CAN bus. It can shut down the network interfaces of the flooded or even all CAN buses. Although the telematics box is out of order partially or completely, the harvester must still be capable of executing the essential and basic functions – and especially safety-critical functions.
- The manufacturer could move from the basic CAN protocol to CAN XL, which comes with a security framework. Higher-layer protocols like J1939 or CANopen may provide security measures. Where available, manufacturers should use these security mechanisms.
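Flood detection as in the second remedy can be sketched with a sliding window over frame timestamps. The thresholds are illustrative, not taken from any standard; a real telematics box would read timestamps from a SocketCAN interface.

```python
from collections import deque

class FloodDetector:
    """Flag a CAN bus as flooded when more than `max_msgs` frames
    arrive within a sliding window of `window_s` seconds."""
    def __init__(self, max_msgs=2000, window_s=1.0):
        self.max_msgs = max_msgs
        self.window_s = window_s
        self.timestamps = deque()

    def on_frame(self, t: float) -> bool:
        """Record a frame received at time t (seconds); return True
        if the bus is currently flooded."""
        self.timestamps.append(t)
        # drop frames that fell out of the sliding window
        while self.timestamps and t - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_msgs

det = FloodDetector(max_msgs=3, window_s=1.0)
flags = [det.on_frame(t) for t in (0.0, 0.1, 0.2, 0.3, 2.0)]
print(flags)
# → [False, False, False, True, False]
```

On a `True` result, the telematics box would shut down the affected network interface, as described above.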
Here is a second example to explore another angle of this requirement. A company builds a hand-held device with a camera to detect whether discolourations of the skin are malignant. The user takes photos of the problematic skin spots. The device sends the photos to a cloud server that runs some clever image recognition algorithms on the photos. The server sends the results back to the device.
If there were no Internet connection to the server or if the server were taken down by a DDoS attack, the device would not be able to diagnose the skin spots. The device would not be able to perform its essential and basic functions: a clear violation of this requirement. The manufacturer would have to find a way to run the image recognition algorithms on the device itself. This could mean that the manufacturer must use a different SoC. Manufacturers had better figure this out early in the project as part of the system architecture, not when production has already started.
Minimising Negative Impact (§2i, REQ_ER10)
Products with digital elements shall minimise the negative impact by the products themselves or connected devices on the availability of services provided by other devices or networks.
The first two remedies for the rogue telematics box from the previous requirement §2h are good examples of how to minimise the negative impact on connected devices.
- The telematics box must not send messages to ECUs like engine, drive and implements attached to the critical CAN bus. It may only read from this CAN bus. Even if the telematics box is hacked, it cannot take control of critical ECUs, or at least the attacker must overcome extra precautions.
- The telematics box shuts down its network interfaces for the CAN buses and cloud connection, if it detects flooding of these network connections. This mechanism minimises the damage that a rogue telematics box can do to other ECUs and cloud servers.
The telematics box must use well-defined protocols for its communication with other ECUs and the cloud and must handle error and exceptional scenarios properly. Well-defined means that the communication partners must always know exactly how to respond to a message. Of course, manufacturers must implement the protocol diligently and test it thoroughly.
If, for example, a product forgets to handle some scenarios, it may show strange behaviour or even crash. Such unhandled scenarios are a classic attack point. The attacker injects the bad message sequence into the communication – and one or more ECUs go down.
Manufacturers must document their protocols so that a notified body can assess whether the product minimises negative impact on connected devices. Manufacturers must provide this documentation in a self-assessment. They don’t have to give this documentation to their customers. They can charge their customers for this documentation.
Limit Attack Surfaces (§2j, REQ_ER11)
Products with digital elements shall be designed, developed and produced to limit attack surfaces, including external interfaces.
Of course, manufacturers must close all communication ports not needed for the functioning of the device. Root login with an empty password and even root login with a password is OK for development but definitely not for production. Note that the requirement Access Control (§2d) covers root access from a different angle – but with the same result. These attack surfaces are quick to fix but there are more onerous ones.
Services like OTA updates and screen mirroring with TeamViewer or VNC establish Internet connections to remote computers. Both services typically run in their own processes. The device must only enable these services when they are needed, and must disable them when not. If the services are libraries or plugins loaded by an application, the application must close any Internet connections when the services are not in use.
This requirement doesn’t apply only to external network connections but also to internal network connections. A driver terminal for agricultural and construction machines provides a special diagnostic mode. The terminal receives and sends tons of diagnostic messages over the CAN buses. Only a privileged user may enter the diagnostic mode. The user must explicitly enable the diagnostic messages for a certain ECU and disable them when not needed any more. If the user forgets to exit the diagnostic mode, the device must force an exit, say, after a certain time of inactivity.
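The forced exit from diagnostic mode could be implemented with a simple inactivity timer. The 300-second timeout is an assumed example value; a real terminal would drive `tick()` from its event loop and stop the diagnostic CAN traffic on exit.

```python
class DiagnosticMode:
    """Exit diagnostic mode automatically after a period of inactivity."""
    TIMEOUT_S = 300  # assumed example value

    def __init__(self, now: float):
        self.last_activity = now
        self.active = True

    def on_activity(self, now: float):
        """Called whenever the privileged user interacts with the mode."""
        self.last_activity = now

    def tick(self, now: float):
        """Called periodically; forces an exit on timeout."""
        if self.active and now - self.last_activity > self.TIMEOUT_S:
            self.active = False  # forced exit: diagnostic messages stop

diag = DiagnosticMode(now=0.0)
diag.on_activity(now=100.0)
diag.tick(now=350.0)
print(diag.active)  # → True (only 250 s since last activity)
diag.tick(now=500.0)
print(diag.active)  # → False (timed out)
```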
All debugging functionality that can bypass security precautions must be disabled by default. Only authorised users are allowed to enable debugging functionality temporarily. The device must ensure that debugging doesn’t stay enabled longer than necessary. Debug servers, test servers, memory checkers, profilers and similar tools have no place on a production system.
The simplest way to limit the attack surfaces is to remove all packages from the embedded Linux system that are not needed for operation. The attack surface for an absent package is zero.
Mitigation of Incidents (§2k, REQ_ER12)
Products with digital elements shall be designed, developed and produced to reduce the impact of an incident using appropriate exploitation mitigation mechanisms and techniques.
Many of the previous requirements suggest mitigation mechanisms for manufacturers to implement. We need not discuss these requirements again. We have, however, neglected two important mitigation mechanisms: process privileges and permissions to use external interfaces.
Each application and service – in short, each process – must run with the least privileges possible. Manufacturers must say good-bye to their applications and services always running with root privileges. The main reason for running everything as root is convenience. For example, an HMI application can change the brightness of the display by writing a percentage to a special file. With root privileges, developers implement this in a couple of minutes. It takes much longer and more system knowledge to create users and groups with different privileges and adapt the privileges of the special file accordingly.
Many applications and services run on the driver terminal of agricultural and construction machines, but only a few should be allowed access to the CAN buses. The TeamViewer server for sharing the device screen, the radio application and the dozens of systemd services don’t have any business with the CAN buses. A solution compliant with the EU CRA would be to create a separate CAN service. If applications and services want to access the CAN buses, they must ask the CAN service for permission. The TeamViewer server, the radio application and most of the systemd services won’t get this permission. Similarly, applications require permission to communicate with the cloud.
In general, applications and services need permission to communicate with other products. A side effect of such a permission system is a reduction of the attack surface.
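The deny-by-default permission service described above could be sketched like this. The process and resource names are illustrative; on a real device the broker would sit in front of the SocketCAN interfaces and the cloud connection.

```python
class PermissionBroker:
    """Grant processes access to resources (CAN buses, cloud, ...)
    explicitly; everything not granted is denied by default."""
    def __init__(self):
        self.grants = set()

    def grant(self, process: str, resource: str):
        self.grants.add((process, resource))

    def allowed(self, process: str, resource: str) -> bool:
        return (process, resource) in self.grants

broker = PermissionBroker()
broker.grant("implement-control", "can0")  # needs the implement CAN bus
broker.grant("ota-updater", "cloud")       # needs the update server

print(broker.allowed("implement-control", "can0"))  # → True
print(broker.allowed("teamviewer", "can0"))         # → False, deny by default
```

The same deny-by-default pattern reduces the attack surface (§2j): a compromised radio application simply has no path to the CAN buses.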
Recording and Monitoring (§2l, REQ_ER13)
Products with digital elements shall provide security related information by recording and monitoring relevant internal activity, including the access to or modification of data, services or functions, with an opt-out mechanism for the user.
By default, the device must record and monitor any security event including
- changes of passwords, keys and access tokens,
- authentication attempts by users,
- privileged access to system resources and services,
- the status of services, and
- anomalous network accesses.
Monitoring security events allows the device to prevent incidents during operation. Recording them allows specialists to analyse security incidents later, to assess the damage and to avoid such incidents in the future. The retrospective analysis is the basis for calculating the penalties imposed by the EU CRA. The fines will be higher if the authorities have problems assessing the damage.
The records must contain enough information for a security analysis, including the time when the security event occurred, the type of the event (e.g., password change, user authentication, network access for an OTA update), and the functions, services or components responsible for and affected by the event. The records must not make the life of the bad guys easier by revealing information relevant to device security.
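One way to structure such a record is a JSON line per event. The field names are my own suggestion, not mandated by the BSI or the EU CRA; note what is deliberately absent from the record: no passwords, keys or other secrets that would help an attacker.

```python
import json
from datetime import datetime, timezone

def security_event(event_type: str, component: str, detail: str) -> str:
    """Build one structured log record with the fields named above:
    timestamp, event type, and the component responsible/affected."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "type": event_type,    # e.g. "password-change", "auth-failure"
        "component": component,
        "detail": detail,      # must not contain secrets
    }
    return json.dumps(record)

line = security_event("auth-failure", "login-service", "3rd failed PIN entry")
print(line)
```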
Recording and monitoring must cope with common disruptions like loss of power, low storage or network outages. It must not affect the essential and basic functions of the device. An authorised user can deactivate the recording and monitoring by the device and later activate it again.
Deletion of Data and Settings (§2m, REQ_ER14)
Products with digital elements shall provide the possibility for users to securely and easily remove on a permanent basis all data and settings and, where such data can be transferred to other products or systems, ensure that this is done in a secure manner.
The factory reset to a Secure Default Configuration already takes care of removing security-sensitive data and resetting security settings. This requirement demands the removal of all user and system data collected during operation and the reset of all user and system settings to their defaults – no matter whether the data and settings are security-relevant or not. The device must not contain any traces of any user operations in the end.
Additionally, users can delete all their data and settings on connected devices and cloud services. Instead of deleting all their data, they can also transfer them to another system in a secure way. Such a data transfer happens when a broken device is replaced by a working one, when the user works at multiple instances of the same device at the same time, or when the device is replaced by a newer model. When I upgrade, for example, to a newer model of my iPhone, my music, photos, videos, applications and settings are transferred from my old iPhone via iCloud to my new iPhone.
Dear Burkhard, as usual… great article. The CRA related articles are particularly interesting to me cause it seems like companies are challenged pretty hard. My question is not directly related to this article but pretty significant in practice anyway I guess: It’s pretty common that companies outsource the development and production of complete products or product components (usually China). In practice it’s pretty hard (to impossible) to enforce CRA compliance by those manufacturers. In my opinion it should make a lot of sense to companies to in-house development. What is your opinion about this topic? Kind regards
Hi Florian,
Great question.
When manufacturers outsource parts or all of their product development, they must comply with the EU CRA as soon as their product enters the EU. And it will, because they want to sell it in the EU. The manufacturers, not their suppliers, are liable for any violations.
Everyone in the supply chain should do a conformity assessment for their part of the product, as they know it best. The manufacturer aggregates all assessments and reviews them. If non-EU companies want to do business in the EU or with EU companies, they will have to figure out how to do a proper assessment. Automotive suppliers have been doing exactly this for years.
Some assessments will be more useful than others ;). I think that manufacturers who do the development in-house, or at least in Europe, will have an advantage. The EU CRA demands an intimate knowledge of the manufactured systems. If development is outsourced, this knowledge is not readily available, and the knowledge transfer up the supply chain will always be error-prone.
Even if manufacturers develop in-house, they will face another problem. There are not enough people available to build secure systems according to the EU CRA.
Kind regards,
Burkhard
Thank you for the detailed write-up; it contains a lot of information. I think the possible techniques for fulfilling the requirements are correct (of course time will tell), and I have some comments/suggestions.
1. The notification date is the start of the obligation to notify of exploited vulnerabilities and security incidents, not “exploitable and severe vulnerabilities”. “Exploited” and “exploitable” have very different meanings in this context. There is no obligation to report all unfixed vulnerabilities under the CRA.
2. It is worth mentioning that each device will have to have a defined scope: its intended purpose. The risk analysis must be performed in the context of this intended purpose.
3. Note also that a special lighter regime for small companies is expected, but to my knowledge, it hasn’t been announced yet, and I don’t know what it will mean in practice.
4. A comment on REQ_ER5 (“access control”): As in many cases, the techniques used need to be in line with the device’s use case. For example, wiping all data after the 8th failed login attempt is usually fine for phones. However, it could also be a DoS (Denial of Service) vector if an attacker can access such a system remotely or otherwise force it to stop working when an actual user needs it.
As a side note, the BSI recommendations are a work in progress, so they might change in the future. Today, they add requirements that are not in the text, in my opinion.
To sum up, I like the practical examples of problems and solutions that you describe in the text; they make the discussion way easier to understand than if it were plain text only.
Hi Marta,
Thank you very much for your thoughtful feedback. Let me answer your points one by one.
Regarding 1: My mistake! You are absolutely right: Manufacturers must notify the relevant authorities about “any actively exploited vulnerability contained in the [PDE]” (Article 14(1)) and about “any severe incident having an impact on the security of the [PDE]” (Article 14(3)) via a single reporting platform.
Regarding 2: Yes, the context of the specific device is important. The context can, for example, rule out many vulnerabilities that don’t apply to your device. Considering the context can make your life a lot easier.
Regarding 3: I truly hope that there will be “lighter regimes” for small and medium companies. There are simply not enough skilled and affordable people around to help all these companies. As reasonable as it is, EU legislation like the CRA, AI Act and GDPR is targeted at big companies but seriously affects SMBs. It fosters the concentration of power in a few huge companies and stifles innovation and competition. A lighter regime is definitely needed.
Regarding 4: Context again! What is feasible for a phone might not be the right way for a harvester terminal.
Thanks,
Burkhard