Future Trends in Embedded Security


Embedded security does not stand still. The threat landscape that engineers design against today is materially different from the one that existed when the Mirai botnet demonstrated in 2016 that millions of unpatched, default-password IoT devices could be weaponised into the largest DDoS infrastructure ever assembled. Since then, the scale of the connected device ecosystem has grown by an order of magnitude, attacks have become more targeted and sophisticated, regulations have moved from voluntary guidelines to enforceable law, and the hardware available to embedded security engineers has become substantially more capable. This final article in the course series looks ahead: the seven technology and regulatory trends that will reshape embedded security over the next decade, the practical implications of each for engineers building devices today, and how to build the skills and credentials needed to work at the frontier of this field as it grows.

The Twelve-Section Course in Review

Before looking forward, a brief summary of the ground covered in this course — both as a reference and as a reminder that embedded security is a discipline that spans the full product lifecycle from architecture through decommissioning.

Sections 1 and 2 established the foundation: what makes embedded systems different from general computing, the attack surface model that maps every hardware interface, firmware component and communication channel to a set of attacker capabilities, and the case studies that make the stakes concrete. Sections 3 and 4 moved into the vulnerability space: the six firmware vulnerability classes that account for most embedded device compromises, and the secure software development practices (memory safety, input validation, safe cryptographic patterns, integrity-verified builds) that prevent them at the code level.

Sections 5 and 6 covered the hardware and communication security layers: secure boot with ECDSA signature verification, anti-rollback through OTP counters, secure elements for key storage, TLS 1.3 implementation with mbedTLS, DTLS for CoAP, certificate pinning, and the cipher suite selection decisions that determine what an eavesdropper can recover from a captured session. Section 7 embedded security into the development process through the secure SDLC: threat modelling with STRIDE, security requirements that are testable, static analysis and fuzzing in CI, and the pre-deployment checklist that gates production releases.

Sections 8 and 9 addressed the operational phase: tamper-evident logging under storage constraints, behavioural monitoring and anomaly detection, the six-phase incident response lifecycle with scenario-specific playbooks, secure OTA with dual-bank flash and automatic rollback, remote device management protocols, and the patch management and long-term support processes that keep devices defensible across a decade of deployment. Section 10 assembled the practical toolkit: Cppcheck, Semgrep, ASan, libFuzzer, AFL++, ChipWhisperer, Wireshark, Scapy, binwalk, Ghidra, and the forensic analysis workflow that reconstructs what happened on a compromised device. Sections 11 and 12 brought it together: the secure-by-default principles, the hardening checklists and the compliance requirements that are now enforceable law in multiple jurisdictions.

The six principles that underpin every technical decision in the course and that will continue to underpin embedded security as the technology evolves:

  • Security by design: Security controls built into the architecture from the first design review are cheaper, more effective and more maintainable than controls retrofitted after the fact.
  • Defence in depth: No single control is sufficient. The combination of secure hardware, verified firmware, encrypted communication, access-controlled management, tamper-evident logging and tested incident response is what makes a device defensible in practice.
  • Least privilege: Every identity, every process and every device has exactly the access required for its function and nothing more. The blast radius of any compromise is bounded by what the compromised identity was authorised to do.
  • Secure defaults: The out-of-box configuration must be the secure configuration. Opt-in security that most users never enable provides no security at scale.
  • Trust no input: Every value arriving from outside the device’s trust boundary is potentially attacker-controlled. Validate before processing, reject invalid input explicitly, and log the rejection.
  • Plan for failure: Assume that a device will be compromised at some point in its operational lifetime. The incident response plan, the tamper detection, the key revocation infrastructure and the fleet recovery process are not optional — they are what determines whether a compromise is contained or catastrophic.

The Scale Problem: 75 Billion Devices

The trajectory of the connected device market is the most important context for understanding where embedded security is heading. Analyst estimates place the global IoT device count at approximately 15 billion in 2023, growing to 75 billion by 2030. That growth is not uniform: industrial IoT (manufacturing, energy, water), medical IoT (implanted and wearable devices) and automotive (connected and autonomous vehicles) are each growing at rates that are challenging the security practices of their respective industries.

The scale itself creates security challenges that do not exist at smaller deployment sizes:

Fleet heterogeneity: A fleet of 10 million devices across five years of production runs will contain dozens of hardware variants, hundreds of firmware versions and multiple generations of security controls. Managing security across a fleet that cannot be uniformly updated is a qualitatively different problem from managing a homogeneous deployment.

Operational lifetime mismatch: Consumer IoT devices are typically replaced every 3–5 years; industrial equipment is replaced every 15–25 years. A device designed to run for twenty years will encounter vulnerabilities in its firmware components that were not known when it was designed. The long-term support problem (Section 9) scales directly with deployment lifetime.

Attack economics: The economic incentive for attacking embedded device fleets scales with fleet size. A vulnerability that allows an attacker to compromise every device of a particular model gives them access to a botnet proportional to that model’s installed base. As fleet sizes grow into the hundreds of millions, the value of a single exploitable vulnerability grows accordingly, which means the resources sophisticated attackers are willing to invest in finding them also grow.

Post-Quantum Cryptography: The Clock Is Running

Quantum computers capable of running Shor’s algorithm at sufficient scale would break all currently deployed public-key cryptography: RSA, ECDSA, ECDH and Diffie-Hellman. These algorithms are the foundation of TLS handshakes, firmware signing, device certificate authentication and key exchange in virtually every embedded security system deployed today. When a cryptographically relevant quantum computer becomes available, every device using these algorithms becomes vulnerable to having its encrypted traffic decrypted and its signed firmware forged.

The timeline is uncertain but the direction is clear. In August 2024, NIST (National Institute of Standards and Technology) published the first three finalised post-quantum cryptography standards as FIPS (Federal Information Processing Standard) publications: ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism, formerly CRYSTALS-Kyber) for key establishment in FIPS 203, ML-DSA (Module-Lattice-Based Digital Signature Algorithm, formerly CRYSTALS-Dilithium) for digital signatures in FIPS 204, and SLH-DSA (Stateless Hash-Based Digital Signature Algorithm, formerly SPHINCS+) in FIPS 205. A fourth algorithm, FN-DSA (based on FALCON), is slated for standardisation as FIPS 206 but has not yet been finalised.

The “harvest now, decrypt later” attack is the reason this matters for devices shipping today: an adversary who captures encrypted firmware update traffic or device-to-cloud communication today can store it and decrypt it when quantum computers are available, potentially years in the future. For devices that handle sensitive long-lived data, the quantum threat is not a future problem — it applies to data being transmitted right now.

For embedded device manufacturers, post-quantum migration means:

  • Firmware signing keys should transition to ML-DSA or SLH-DSA for new device generations. Both are stateless. ML-DSA offers far smaller signatures (3,309 bytes for ML-DSA-65, versus roughly 8–50 KB across the SLH-DSA parameter sets) and faster signing and verification; SLH-DSA’s security rests only on the underlying hash function, making it the more conservative choice for long-lived signing keys.
  • TLS connections should migrate to hybrid key exchange: ECDH combined with ML-KEM, so that the session key is secure against both classical and quantum attackers. This is already supported in TLS 1.3 via the hybrid key exchange extension.
  • Device certificate infrastructure (the root CA, intermediate CAs, device certificates) should be planned for migration to post-quantum algorithms, which requires coordination across the entire PKI chain.
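
The size deltas behind these migration decisions drive real flash and bandwidth budgets. A minimal Python sketch, using parameter sizes taken from FIPS 203/204/205 (the 256-byte manifest header is an assumed figure for illustration only):

```python
# Parameter sizes (bytes) from the FIPS 203/204/205 standards versus
# their classical equivalents, and the effect on a firmware manifest.

CLASSICAL = {
    "ecdsa_p256_public_key": 64,
    "ecdsa_p256_signature": 64,
    "ecdh_p256_share": 65,           # uncompressed point
}

POST_QUANTUM = {
    "ml_dsa_65_public_key": 1952,    # FIPS 204
    "ml_dsa_65_signature": 3309,
    "ml_kem_768_encaps_key": 1184,   # FIPS 203
    "ml_kem_768_ciphertext": 1088,
    "slh_dsa_128s_signature": 7856,  # FIPS 205, smallest parameter set
}

def manifest_size(header: int, sig_sizes: list[int]) -> int:
    """Firmware manifest = fixed header + one signature per algorithm."""
    return header + sum(sig_sizes)

# Classical manifest: assumed 256-byte header + ECDSA signature
classical = manifest_size(256, [CLASSICAL["ecdsa_p256_signature"]])
# Hybrid manifest during migration: both signatures present
hybrid = manifest_size(256, [CLASSICAL["ecdsa_p256_signature"],
                             POST_QUANTUM["ml_dsa_65_signature"]])

print(f"classical manifest: {classical} bytes")   # 320
print(f"hybrid manifest:    {hybrid} bytes")      # 3629
```

A hybrid dual-signed manifest is roughly an order of magnitude larger than an ECDSA-only one, which is why OTA packet sizing and manifest flash reservations need revisiting before, not after, the migration.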

Migrating Embedded Firmware to Post-Quantum Algorithms

The practical challenge for embedded targets is that post-quantum algorithms have larger key sizes, larger signature sizes and, in some cases, higher computational requirements than their classical equivalents. A Cortex-M4 running at 168 MHz can verify an ECDSA P-256 signature in approximately 30 milliseconds with a software implementation. ML-DSA-65 verification on the same processor is comparable in speed or faster (lattice operations map well onto integer arithmetic hardware), but the signature is 3,309 bytes versus 64 bytes for ECDSA P-256. Flash storage for firmware manifests, RAM for signature buffers, and OTA packet sizes all need to accommodate the larger parameter sizes.

/* Post-quantum firmware signature verification using ML-DSA-65 (FIPS 204).
   This is a forward-looking example showing the API shape for when
   PQC libraries are stable for embedded targets.

   Currently (2025), the most embedded-ready implementations are:
   - liboqs (Open Quantum Safe): ARM Cortex-M port in progress
   - wolfSSL PQC: commercial embedded PQC support
   - pqm4: research implementations for Cortex-M4

   The verification logic below follows the same pattern as the mbedTLS
   ECDSA verification shown in Section 3 of this course, with the key
   difference being the larger signature and public key sizes. */

#include <stdint.h>
#include <stddef.h>
#include "pqcrystals_dilithium3_ref.h"   /* ML-DSA-65 reference implementation */

/* ML-DSA-65 parameter sizes (FIPS 204) */
#define MLDSA65_PUBLICKEY_BYTES  1952
#define MLDSA65_SIGNATURE_BYTES  3309

/* Root public key for firmware verification.
   Stored in the bootloader's read-only flash region.
   Generated offline by the production signing HSM.
   Note: 1,952 bytes vs. 64 bytes for ECDSA P-256 public key.
   Plan your bootloader flash layout to accommodate this. */
static const uint8_t g_root_mldsa_public_key[MLDSA65_PUBLICKEY_BYTES] = {
    /* ... public key bytes provisioned during manufacturing ... */
};

typedef enum {
    PQC_VERIFY_OK,
    PQC_VERIFY_INVALID_SIGNATURE,
    PQC_VERIFY_BUFFER_ERROR
} PqcVerifyResult;

/* Verify a firmware image signature using ML-DSA-65.
   message: the firmware image bytes
   message_len: firmware image length
   signature: the ML-DSA-65 signature from the firmware manifest
   Returns PQC_VERIFY_OK if the signature is valid. */
PqcVerifyResult verify_firmware_mldsa65(
    const uint8_t *message,     size_t message_len,
    const uint8_t *signature,   size_t signature_len)
{
    if (signature_len != MLDSA65_SIGNATURE_BYTES) {
        return PQC_VERIFY_BUFFER_ERROR;
    }

    /* pqcrystals_dilithium3_ref_verify returns 0 on success */
    int result = pqcrystals_dilithium3_ref_verify(
        signature,
        MLDSA65_SIGNATURE_BYTES,
        message,
        message_len,
        g_root_mldsa_public_key
    );

    if (result != 0) {
        log_security_event(SEC_EVENT_FW_UPDATE_REJECTED,
                           OUTCOME_FAILURE, NULL, NULL, 0);
        return PQC_VERIFY_INVALID_SIGNATURE;
    }

    return PQC_VERIFY_OK;
}

The migration strategy for existing devices in the field is a hybrid approach: keep the existing ECDSA signature verification in place for backward compatibility while adding ML-DSA verification as a parallel check. New firmware releases are signed with both algorithms; the bootloader accepts firmware that passes either check during the transition period, then transitions to requiring both once the fleet is updated. This allows migration without bricking devices that have already received the PQC-capable bootloader update.
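
The staged acceptance policy can be sketched in a few lines (the verification results and policy flag are stand-ins; in a real bootloader the active policy would live in rollback-protected storage):

```python
# Sketch of the staged dual-signature acceptance policy described above.
# ecdsa_ok / mldsa_ok stand in for the results of the real verification
# routines run over the firmware image.

POLICY_EITHER = "either"   # transition period: one valid signature suffices
POLICY_BOTH   = "both"     # end state: both signatures must verify

def accept_firmware(ecdsa_ok: bool, mldsa_ok: bool, policy: str) -> bool:
    if policy == POLICY_EITHER:
        return ecdsa_ok or mldsa_ok
    if policy == POLICY_BOTH:
        return ecdsa_ok and mldsa_ok
    return False   # unknown policy: fail closed

# During the transition, legacy ECDSA-only images still install:
assert accept_firmware(True, False, POLICY_EITHER)
# Once the fleet requires both, a single valid signature is rejected:
assert not accept_firmware(True, False, POLICY_BOTH)
assert accept_firmware(True, True, POLICY_BOTH)
```

Failing closed on an unrecognised policy value matters: a corrupted policy slot should brick the update path, not silently weaken verification.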

AI and Machine Learning in Embedded Security

Machine learning is becoming practically deployable on embedded targets. Microcontrollers with dedicated neural network accelerators (the Cortex-M55 with Arm Helium, the ESP32-S3 with vector instructions, the Kendryte K210 with a dedicated NPU) can run inference on models that would have required a cloud API call five years ago. This changes the possibilities for on-device security: anomaly detection models that previously ran only on the cloud SIEM backend can now run locally on the device itself, with zero latency and no dependency on network connectivity.

On-Device Anomaly Detection

The classical threshold-based anomaly detection described in Section 8 is effective but brittle: thresholds must be manually tuned, they do not adapt to seasonal or contextual variation, and they cannot detect multi-dimensional anomalies where no single metric exceeds its threshold but the combination of metrics is statistically unusual. A small LSTM (Long Short-Term Memory) or autoencoder model trained on the device’s normal operational telemetry can detect these complex anomalies automatically, with lower false positive rates than static threshold rules.

# TensorFlow Lite Micro: deploy an autoencoder anomaly detector
# on an embedded Linux target for network traffic anomaly detection.
# The autoencoder learns to reconstruct normal traffic patterns;
# high reconstruction error indicates an anomaly.

# Step 1: Train the autoencoder on normal device telemetry (run on a
# development machine, not on the device itself)

import numpy as np
import tensorflow as tf

# Feature vector: [cpu_pct, mem_pct, bytes_out_per_min,
#                  auth_failures_per_hour, connections_per_hour]
FEATURE_DIM = 5
ENCODING_DIM = 3   # Bottleneck dimension

def build_autoencoder():
    inputs = tf.keras.Input(shape=(FEATURE_DIM,))
    # Encoder: compress to bottleneck
    encoded = tf.keras.layers.Dense(4, activation='relu')(inputs)
    encoded = tf.keras.layers.Dense(ENCODING_DIM, activation='relu')(encoded)
    # Decoder: reconstruct from bottleneck
    decoded = tf.keras.layers.Dense(4, activation='relu')(encoded)
    decoded = tf.keras.layers.Dense(FEATURE_DIM, activation='sigmoid')(decoded)

    autoencoder = tf.keras.Model(inputs, decoded)
    autoencoder.compile(optimizer='adam', loss='mse')
    return autoencoder

# Train on 30 days of normal operational data (5-minute samples = 8,640 rows).
# normal_data: numpy array of shape (n_samples, FEATURE_DIM), normalised to [0,1].
# Synthetic stand-in below so the script runs; use recorded telemetry in practice.
normal_data = np.random.rand(8640, FEATURE_DIM).astype(np.float32)
autoencoder = build_autoencoder()
autoencoder.fit(normal_data, normal_data,
                epochs=50, batch_size=32, validation_split=0.1)

# Determine anomaly threshold: 99th percentile of reconstruction error on
# training data. Anything above this threshold at inference time is an anomaly.
reconstruction = autoencoder.predict(normal_data)
mse_per_sample  = np.mean((normal_data - reconstruction) ** 2, axis=1)
threshold       = np.percentile(mse_per_sample, 99)
print(f"Anomaly threshold (99th pct): {threshold:.6f}")

# Step 2: Convert to TensorFlow Lite for embedded deployment
converter = tf.lite.TFLiteConverter.from_keras_model(autoencoder)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # Post-training quantization
tflite_model = converter.convert()

with open("anomaly_detector.tflite", "wb") as f:
    f.write(tflite_model)

print(f"TFLite model size: {len(tflite_model) / 1024:.1f} KB")
# Typical output: ~8-15 KB for this architecture — suitable for embedded Linux

/* On-device inference using the TFLite anomaly detector.
   Run every 5 minutes in a background task on the embedded Linux device.
   Generates a SEC_EVENT_ANOMALY_DETECTED log event when reconstruction
   error exceeds the threshold. */

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "anomaly_detector_model.h"   /* Generated from .tflite via xxd -i */

#define FEATURE_DIM        5
#define ANOMALY_THRESHOLD  0.045f    /* From training: 99th percentile MSE */

/* Feature collection: gather current device metrics */
static void collect_features(float features[FEATURE_DIM]) {
    features[0] = get_cpu_utilisation_pct() / 100.0f;
    features[1] = get_heap_used_pct() / 100.0f;
    features[2] = get_bytes_out_per_min() / MAX_EXPECTED_BYTES_OUT;
    features[3] = get_auth_failures_last_hour() / 100.0f;
    features[4] = get_active_connections() / MAX_EXPECTED_CONNECTIONS;
}

/* Run anomaly detection inference */
void run_anomaly_detection(void) {
    float input_features[FEATURE_DIM];
    float reconstructed[FEATURE_DIM];

    collect_features(input_features);

    /* Run TFLite inference (details depend on target platform and TFLite port) */
    tflite_run_inference(input_features, reconstructed, FEATURE_DIM);

    /* Compute mean squared reconstruction error */
    float mse = 0.0f;
    for (int i = 0; i < FEATURE_DIM; i++) {
        float diff = input_features[i] - reconstructed[i];
        mse += diff * diff;
    }
    mse /= FEATURE_DIM;

    if (mse > ANOMALY_THRESHOLD) {
        /* Log anomaly event with the feature vector for investigation */
        uint8_t detail[FEATURE_DIM * sizeof(float)];
        memcpy(detail, input_features, sizeof(detail));
        log_security_event(SEC_EVENT_BEHAVIOURAL_ANOMALY,
                           OUTCOME_FAILURE,
                           NULL, detail, sizeof(detail));

        /* Optionally: trigger automated containment for high-MSE anomalies */
        if (mse > ANOMALY_THRESHOLD * 3.0f) {
            trigger_automated_containment(CONTAINMENT_REDUCE_CONNECTIVITY);
        }
    }
}

AI-Powered Attacks Against Embedded Devices

The same AI capabilities that improve defender tools also improve attacker tools. Three emerging AI-powered attack vectors that embedded security engineers should understand and design against:

LLM-assisted vulnerability discovery: Large language models trained on source code can identify patterns similar to known vulnerability classes in firmware source code and binary disassembly. Attackers with access to decompiled firmware (from an extracted flash image) can use LLM-based tools to identify candidate vulnerability locations significantly faster than manual review. The implication for defenders: the time between public vulnerability disclosure and working exploit development is shortening. Patch velocity targets that made sense in 2020 may not be adequate for the threat landscape of 2027.

AI-generated phishing for firmware engineers: Supply chain attacks targeting the engineers and infrastructure behind embedded firmware development have become more sophisticated with AI assistance. Spear-phishing campaigns that target firmware engineers with AI-crafted messages referencing their specific projects and colleagues are a vector for compromising the build pipeline that signs firmware images. The defence is multi-factor authentication on all build infrastructure, hardware security keys for code signing operations, and HSMs that ensure the signing key is never present on a networked machine.

Adversarial inputs against on-device ML models: If your device uses ML for anomaly detection, authentication (face recognition, voice recognition) or intrusion detection, adversarial ML attacks can craft inputs that the model classifies incorrectly. A carefully crafted anomalous traffic pattern that falls just below the reconstruction error threshold of an autoencoder evades detection while achieving the attacker’s goals. The defence is ensemble detection (threshold-based plus statistical plus ML, as described in Section 8) so that an adversarial bypass of one detection layer does not bypass all layers.
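
The ensemble idea can be sketched as a simple quorum over independent detectors (the detector thresholds and the 2-of-3 quorum below are illustrative choices, not values from this course):

```python
# Sketch: an alert requires agreement from independent detectors, so an
# adversarial input crafted to slip under one layer's threshold still
# has to evade the others. All thresholds here are illustrative.

def threshold_detector(bytes_out: float, limit: float = 1e6) -> bool:
    """Static per-metric limit, as in classical rule-based monitoring."""
    return bytes_out > limit

def zscore_detector(value: float, mean: float, std: float,
                    cutoff: float = 3.0) -> bool:
    """Statistical outlier test against the learned baseline."""
    return std > 0 and abs(value - mean) / std > cutoff

def autoencoder_detector(mse: float, threshold: float = 0.045) -> bool:
    """ML layer: reconstruction error from the autoencoder."""
    return mse > threshold

def ensemble_alert(bytes_out: float, mean: float, std: float,
                   mse: float, quorum: int = 2) -> bool:
    votes = [
        threshold_detector(bytes_out),
        zscore_detector(bytes_out, mean, std),
        autoencoder_detector(mse),
    ]
    return sum(votes) >= quorum

# Traffic crafted to sit just under the autoencoder threshold (mse=0.040)
# but far above the statistical baseline still trips the ensemble:
assert ensemble_alert(bytes_out=2e6, mean=1e5, std=2e4, mse=0.040)
```

The quorum size is a tuning knob: raising it lowers false positives but gives an adversary more layers they can afford to leave alerting.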

Edge Computing and Its Security Implications

Edge computing moves computation from the cloud toward the device, reducing latency, bandwidth consumption and cloud dependency. For security, edge computing creates both new capabilities and new risks.

The new capabilities: on-device ML inference (as described above), local anomaly detection that functions without network connectivity, local policy enforcement that does not depend on a cloud connection to authorise each operation, and the ability to process sensitive data locally without transmitting it to a cloud provider.

The new risks: edge nodes are physically deployed in less controlled environments than cloud data centres, they run more complex software stacks that have larger attack surfaces than simple sensor firmware, and they are often given elevated trust within the network architecture (a gateway device that aggregates data from hundreds of sensors may have broader network access than any individual sensor). Securing edge nodes requires applying the full set of controls from this course: secure boot, hardened OS, encrypted storage, mTLS for all communication, RBAC access control and tamper detection with appropriate physical security for the deployment environment.

The 5G network expansion that is enabling many edge computing architectures also introduces new attack surfaces: 5G network slicing creates isolated virtual networks that reduce inter-tenant interference but introduce new configuration attack surfaces, 5G’s reliance on software-defined networking increases the attack surface of the network infrastructure, and the higher bandwidth of 5G makes data exfiltration from compromised devices faster and harder to detect through volume-based anomaly detection alone.

Zero Trust Architecture for IoT

Zero Trust is an architectural principle: never trust implicitly based on network location, always verify explicitly based on identity and context. In traditional perimeter security, a device on the corporate network was trusted by default; in Zero Trust, a device on the corporate network is treated the same as a device on the internet until it has proven its identity and the legitimacy of its current request.

For IoT and embedded devices, Zero Trust translates into four specific design requirements that contrast with the traditional approach:

  • Trust signal: traditionally, a device is trusted because it is on the IoT VLAN; under Zero Trust, the device must present a valid certificate for every connection, and network location is not a trust signal.
  • Lateral movement: traditionally, all devices on the VLAN can communicate with each other; under Zero Trust, device-to-device communication is blocked by default and only explicitly authorised paths are permitted.
  • Credentials: traditionally, credentials are provisioned once at the factory and remain valid for the device lifetime; under Zero Trust, short-lived credentials are rotated frequently and revocation infrastructure enables instant invalidation.
  • Cloud access: traditionally, access to cloud resources is controlled by VPN membership; under Zero Trust, access to each cloud API endpoint is controlled by per-request authorisation based on device identity and current context.
Implementing Zero Trust for a large IoT fleet requires: a scalable PKI that can issue and revoke certificates for millions of devices, a broker ACL system that enforces device-to-device communication restrictions at the protocol level (the Mosquitto ACL patterns from Section 11), short-lived JWT tokens for API access rather than long-lived API keys, and continuous verification that treats any device showing anomalous behaviour as untrusted regardless of certificate validity.
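
The short-lived credential requirement can be illustrated with a stdlib-only token sketch (a real deployment would use JWTs issued by the identity service; the key, TTL and token format here are assumptions for illustration):

```python
# Illustrative short-lived device API token using only the standard
# library. The HMAC construction stands in for a properly signed JWT;
# what it demonstrates is the expiry and per-device binding that
# replace long-lived API keys in a Zero Trust design.
import hmac, hashlib, time, base64

SERVER_KEY = b"example-signing-key"    # illustrative only
TOKEN_TTL_S = 900                      # 15-minute tokens
MAC_LEN = 32                           # SHA-256 output length

def issue_token(device_id: str, now: float) -> str:
    expiry = int(now) + TOKEN_TTL_S
    payload = f"{device_id}|{expiry}".encode()
    mac = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + mac).decode()

def verify_token(token: str, device_id: str, now: float) -> bool:
    raw = base64.urlsafe_b64decode(token.encode())
    payload, mac = raw[:-MAC_LEN], raw[-MAC_LEN:]
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, mac):
        return False                           # forged or corrupted
    dev, _, expiry = payload.decode().rpartition("|")
    return dev == device_id and now < int(expiry)

t = issue_token("sensor-0042", now=1_700_000_000)
assert verify_token(t, "sensor-0042", now=1_700_000_000 + 60)
assert not verify_token(t, "sensor-0042", now=1_700_000_000 + 3600)  # expired
assert not verify_token(t, "sensor-0099", now=1_700_000_000 + 60)    # wrong device
```

The point of the 15-minute lifetime is that revocation becomes cheap: a compromised device simply stops receiving fresh tokens, with no certificate revocation round-trip on every request.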

Next-Generation Hardware Security Features

Silicon vendors are responding to the embedded security challenge with hardware features that make several of the software security controls in this course either more robust or more accessible:

Enhanced Secure Elements and TEEs

The next generation of secure elements (ATECC608B successors, SE050, STSAFE-A) is adding support for post-quantum key types, larger key stores, and faster ECC operations that remove the performance penalty of certificate-based authentication on constrained devices. Arm TrustZone-M (available on Cortex-M23, Cortex-M33 and later cores) provides a hardware-enforced secure world / non-secure world separation that allows security-critical code (key management, attestation, secure boot) to run in an isolated execution environment that the application cannot access, even if the application is compromised.

AI Inference Accelerators

Purpose-built neural network accelerators (Arm Ethos-U55, Kendryte K210 NPU, ESP32-S3 vector extensions) make on-device ML inference practical at power levels compatible with battery operation for small model architectures. This enables the anomaly detection models described above to run continuously without dominating the device’s power budget, opening the door to always-on behavioural monitoring even in battery-powered sensor devices.

PUF Technology Maturing

PUF (Physical Unclonable Function) technology generates device-unique identifiers and keys from the physical characteristics of the silicon itself (manufacturing variation in SRAM startup states, ring oscillator frequencies). A PUF-derived key exists nowhere except in the physical device: it cannot be extracted, copied or cloned. PUF implementations are moving from research prototypes to production silicon, with SRAM PUFs now available in STM32 H5, i.MX RT and Microchip PIC32 devices. The limitation noted in Section 5 remains: PUFs have error rates (the same device may produce slightly different measurements across temperature and voltage variations) that require error correction codes to produce a stable key, adding implementation complexity.

Attestation at Scale

Hardware attestation is the ability for a device to cryptographically prove its software state to a remote verifier: “I am running firmware version 2.4.1 with SHA-256 hash 3a7f…, and this claim is signed by my TPM/TEE using a key that was provisioned at manufacturing and is bound to this specific hardware.” DICE (Device Identifier Composition Engine), the TCG standard for hardware-based attestation in constrained devices, is being implemented in Arm TrustZone-M, enabling remote attestation for Cortex-M series devices for the first time. This gives fleet management systems the ability to continuously verify that every device in the fleet is running the expected firmware version and has not been tampered with.
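
The core DICE measurement idea can be sketched in a few lines: a Compound Device Identifier (CDI) is derived from a hardware Unique Device Secret (UDS) and a hash of the next firmware layer, so any code change changes the derived identity. This is a simplified single-layer sketch with illustrative values; real DICE derives per-layer CDIs and asymmetric attestation keys from them:

```python
# Simplified DICE-style derivation: CDI = KDF(UDS, H(firmware)).
# A verifier that knows the expected firmware hash can check claims
# signed under keys derived from the CDI; tampered code yields a
# different CDI and therefore unverifiable claims.
import hmac, hashlib

UDS = bytes(32)   # stand-in for the fused, hardware-protected secret

def derive_cdi(uds: bytes, firmware_image: bytes) -> bytes:
    measurement = hashlib.sha256(firmware_image).digest()
    return hmac.new(uds, measurement, hashlib.sha256).digest()

good = derive_cdi(UDS, b"firmware v2.4.1")
tampered = derive_cdi(UDS, b"firmware v2.4.1 + implant")
assert good != tampered                              # identity tracks code
assert good == derive_cdi(UDS, b"firmware v2.4.1")   # deterministic
```

Because the UDS never leaves the silicon, the only way to produce a valid identity for a given firmware hash is to actually be that hardware running that firmware.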

The Regulatory Tide: From Guidelines to Law

The regulatory environment for embedded device security has shifted fundamentally since 2020. What were previously voluntary best-practice guidelines are becoming enforceable legal requirements with market access consequences for non-compliance:

EU Cyber Resilience Act (enforcing from 2027): Requires CE marking to include cybersecurity compliance for products with digital elements. Non-compliant products cannot be sold in the EU. Manufacturers must maintain an SBOM, a vulnerability disclosure process, a security update mechanism for the product’s expected lifetime, and notify ENISA within 24 hours of discovering actively exploited vulnerabilities. The CRA applies to hardware products: embedded device manufacturers are directly in scope.

UK PSTI Act (Product Security and Telecommunications Infrastructure Act, in force from April 2024): Bans universal default passwords, requires a published vulnerability disclosure policy and a minimum defined support period for consumer IoT devices sold in the UK. Enforced by the Office for Product Safety and Standards with civil and criminal penalties.

US Executive Order 14028 and NIST guidance: While not directly mandating product security requirements for commercial devices, the EO and resulting NIST guidance (SP 800-213 for IoT, NIST IR 8259 for IoT baseline security) are shaping the procurement requirements for US federal government customers and are influencing state-level legislation (California SB-327 and similar bills).

Liability shift: The most consequential long-term regulatory trend is the movement toward manufacturer liability for security failures in shipped products. The EU CRA and product liability reforms include provisions that make manufacturers liable for damages caused by products that do not meet the security requirements, including products that were secure when shipped but received no patches for vulnerabilities discovered post-market. This creates a direct financial incentive to invest in the long-term support capabilities described in Section 9.

Right to Repair: The EU Right to Repair Directive and similar legislation in multiple US states require that manufacturers make spare parts, repair manuals and diagnostic software available to independent repairers. This creates a tension with some hardware security controls: a device designed to resist unauthorised access makes independent repair harder by design. The emerging regulatory expectation is that security controls that prevent repair must be counterbalanced by accessible manufacturer repair services, not used as a justification for planned obsolescence.

Supply Chain Transparency and SBOM as Infrastructure

Supply chain attacks on embedded software (SolarWinds, the XZ Utils backdoor) have elevated software supply chain security from a niche concern to a mainstream requirement. For embedded device manufacturers, the supply chain security requirements are converging on three practices:

SBOM as a continuous asset: The SBOM is no longer a static document produced once at release. Tools like Grype, OSV-Scanner and OWASP Dependency-Track continuously scan the SBOM against live CVE feeds and alert when a new vulnerability is discovered in a component that is deployed in the fleet. The SBOM becomes infrastructure: it is the machine-readable inventory that enables automated vulnerability monitoring at scale.
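
The "SBOM as infrastructure" loop reduces to matching an inventory against a vulnerability feed. A deliberately simplified sketch (real scanners such as Grype and OSV-Scanner do version-range matching on CPE/PURL identifiers; the component and CVE entries below are illustrative placeholders, not real data):

```python
# Sketch of continuous SBOM monitoring: match the fleet's component
# inventory against a vulnerability feed and report affected components.
# Exact-version matching is a simplification of real range matching.

sbom = [
    {"name": "mbedtls", "version": "2.28.0"},
    {"name": "zlib",    "version": "1.2.11"},
    {"name": "busybox", "version": "1.36.1"},
]

# Illustrative feed entries with placeholder CVE IDs
cve_feed = [
    {"cve": "CVE-XXXX-0001", "name": "zlib",    "affected": {"1.2.11", "1.2.12"}},
    {"cve": "CVE-XXXX-0002", "name": "openssl", "affected": {"3.0.0"}},
]

def scan(sbom, feed):
    """Return (cve, component, version) for every inventory hit."""
    hits = []
    for entry in feed:
        for comp in sbom:
            if (comp["name"] == entry["name"]
                    and comp["version"] in entry["affected"]):
                hits.append((entry["cve"], comp["name"], comp["version"]))
    return hits

assert scan(sbom, cve_feed) == [("CVE-XXXX-0001", "zlib", "1.2.11")]
```

Run against the live feed on every update, this turns "is any deployed firmware affected by today's CVEs?" from a manual audit into a scheduled job.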

Firmware provenance and signing: The firmware image itself needs a verifiable chain of custody from the developer’s commit to the device’s flash. This means: signed commits (GPG-signed git commits verifying developer identity), reproducible builds (the same source code produces the same binary output, enabling independent verification), and timestamped build attestation (the build system records and signs the exact environment, dependencies and source state used to produce each release). The SLSA (Supply-chain Levels for Software Artifacts) framework, originated by Google, provides a structured approach to achieving and documenting these controls.

Vendor security assessment: Third-party components (hardware modules, pre-compiled libraries, RTOS SDKs, cloud platform SDKs) carry the security posture of their vendor into your product. The due diligence process for selecting and qualifying third-party components must include a security assessment: does the vendor have a published CVE process? Do they provide SBOMs? Do they have a track record of timely security patches? What is their end-of-life policy? Answering these questions before a component is designed into the product is far less costly than discovering a vendor’s poor security practices after millions of devices have shipped.

Right to Repair and Its Security Tensions

The Right to Repair movement presents embedded security engineers with a genuine design tension that does not have a simple resolution. Security features that make a device harder to tamper with (RDP Level 2 on STM32 that permanently disables debug access, eFuse-burned JTAG disable on ESP32, epoxy-potted electronics) also make legitimate repair harder. An independent repair technician who needs to diagnose a hardware fault cannot use a debug interface that has been permanently disabled.

The emerging design approach that addresses both concerns is tiered access with cryptographic authorisation: the debug interface remains physically present but is locked at the firmware level, and can be unlocked by presenting a manufacturer-issued debug certificate. The certificate is time-limited (valid for 48 hours), scoped to specific operations (memory read but not write, or restricted to certain address ranges), and requires the device to be in an authenticated management session before the unlock command is accepted. This allows legitimate repair access through a controlled, auditable process while preventing unauthorised physical attack through the same interface.
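
The authorisation logic can be sketched as follows. This is a simplified model: a production scheme would use an asymmetric signature (e.g. ECDSA verified against a manufacturer public key baked into the firmware) rather than the HMAC used here to keep the sketch stdlib-only, and the key, device IDs and scope names are all hypothetical.

```python
import hashlib
import hmac
import json
import time

# Stand-in for the manufacturer's signing key (see caveat above).
MANUFACTURER_KEY = b"demo-key-not-for-production"

def issue_debug_cert(device_id: str, scope: list,
                     ttl_seconds: int = 48 * 3600) -> dict:
    """Manufacturer side: mint a time-limited, scoped debug certificate."""
    body = {
        "device_id": device_id,   # certificate is bound to one device
        "scope": scope,           # e.g. ["mem_read"] but not "mem_write"
        "expires_at": int(time.time()) + ttl_seconds,
    }
    mac = hmac.new(MANUFACTURER_KEY,
                   json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}

def authorise_debug(cert: dict, device_id: str, operation: str) -> bool:
    """Firmware side: check before unlocking the debug interface."""
    expected = hmac.new(MANUFACTURER_KEY,
                        json.dumps(cert["body"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["mac"]):
        return False              # forged or tampered certificate
    body = cert["body"]
    return (body["device_id"] == device_id
            and operation in body["scope"]
            and time.time() < body["expires_at"])

cert = issue_debug_cert("SN-0042", scope=["mem_read"])
print(authorise_debug(cert, "SN-0042", "mem_read"))   # True
print(authorise_debug(cert, "SN-0042", "mem_write"))  # False: out of scope
print(authorise_debug(cert, "SN-0099", "mem_read"))   # False: wrong device
```

Note that the device-binding check is what makes a leaked certificate useless against the rest of the fleet, and the expiry check is what makes it useless after the repair window closes.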

Career Paths in Embedded Security

Embedded security is a growing specialisation at the intersection of firmware engineering, hardware engineering and cybersecurity. Demand is increasing across every industry that deploys connected devices, and the supply of engineers with deep expertise in both embedded systems and security is limited — which means the field offers strong career prospects for engineers who invest in building this combined skill set.

The main roles, each with its core responsibilities, typical entry path and hiring industries:

Embedded Security Engineer: Designs and implements security controls in firmware: secure boot, TLS, key management, secure coding practices. Entry path: firmware engineer who learns security, or security engineer who learns embedded systems. Industries: consumer IoT, automotive, medical, industrial.

Hardware Security Engineer: Side-channel analysis, fault injection testing, secure element integration, PCB-level security design. Entry path: electrical or hardware engineering background with a security specialisation. Industries: automotive, defence, payment terminals, HSM vendors.

IoT Penetration Tester: Device security assessments: firmware extraction, protocol analysis, hardware attacks, vulnerability reporting. Entry path: cybersecurity background with embedded systems skills added; CEH or OSCP plus hardware lab experience. Industries: security consulting firms, device manufacturers, product certification bodies.

Security Architect (Embedded): Designs the security architecture for product lines: threat model, key management, update infrastructure, compliance mapping. Entry path: senior firmware security engineer with cross-functional experience. Industries: all; often in product security or platform security teams.

Compliance and Product Security Specialist: Maps regulatory requirements to engineering controls, manages certification processes, prepares compliance documentation. Entry path: security engineering background with regulatory knowledge; often requires specific industry experience (medical, automotive). Industries: medical devices, automotive, industrial, consumer IoT.

Security Researcher: Vulnerability discovery in deployed devices, responsible disclosure, publishing research, developing new attack and defence techniques. Entry path: deep technical skills across firmware, hardware and networking; often self-taught through CTFs and bug bounties. Industries: security research firms, academic institutions, internal red teams.

The Skill Stack Employers Are Looking For

The embedded security skill stack is wider than either pure firmware engineering or pure cybersecurity. Employers consistently look for candidates who can operate across both disciplines. The six core skill areas and the specific competencies within each:

Embedded programming: Production-quality C (memory safety, error handling, defensive coding), familiarity with at least one RTOS (FreeRTOS, Zephyr, ThreadX), understanding of bare-metal firmware architecture, experience with Cortex-M or similar microcontroller families, ability to read ARM/RISC-V assembly for security analysis. Python for tooling and automation. CMake or similar build system.

Applied cryptography: Working knowledge of AES-GCM, SHA-256, ECDSA, TLS 1.3 and the correct way to use each. Ability to spot cryptographic misuse in code review. Understanding of key lifecycle management. Awareness of post-quantum algorithms and migration paths. Practical experience with mbedTLS, WolfSSL or libsodium for embedded targets.
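
One example of the kind of detail "the correct way to use each" refers to: AES-GCM is only safe if a nonce is never reused under the same key, so embedded designs often derive nonces from a persisted monotonic counter rather than an RNG of uncertain quality. A minimal sketch (the 4-byte-ID / 8-byte-counter split is one common layout, not a standard):

```python
import struct

class GcmNonceCounter:
    """Derive unique 96-bit AES-GCM nonces as device_id || counter.

    Reusing a nonce under the same AES-GCM key leaks the authentication
    subkey, so uniqueness here is a hard requirement, not a nicety.
    """
    def __init__(self, device_id: int, start: int = 0):
        self.device_id = device_id
        self.counter = start  # in firmware, restored from flash at boot

    def next_nonce(self) -> bytes:
        # 4-byte device ID + 8-byte counter = the 12-byte GCM nonce size.
        nonce = struct.pack(">IQ", self.device_id, self.counter)
        self.counter += 1     # must be persisted before the nonce is used
        return nonce

gen = GcmNonceCounter(device_id=0x42)
n0, n1 = gen.next_nonce(), gen.next_nonce()
print(len(n0), n0.hex())
```

Spotting code that violates this invariant (a random nonce from a weak RNG, or a counter that resets on reboot) is exactly the kind of misuse a code reviewer with this skill catches.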

Hardware and protocol knowledge: Understanding of common bus protocols (SPI, I2C, UART, CAN) as attack surfaces. Basic oscilloscope and logic analyser skills. Familiarity with JTAG/SWD debug interface security. Understanding of flash memory architecture, secure elements and trust zones. Ability to trace a PCB and identify security-relevant components.

Security testing and analysis: Static analysis tools (Cppcheck, Semgrep, Clang SA). Dynamic analysis (ASan, fuzzing with libFuzzer/AFL++). Network analysis (Wireshark, Scapy). Firmware analysis (binwalk, Ghidra). Ability to write a clear, actionable vulnerability report. Basic penetration testing methodology for embedded targets.

Threat modelling and architecture: STRIDE threat modelling applied to embedded systems. Attack surface mapping. Risk assessment and prioritisation. Security requirements definition. Understanding of the secure SDLC and where security activities fit into an embedded development process.

Communication: The ability to explain a security vulnerability and its business impact to non-technical stakeholders is consistently cited by hiring managers as one of the hardest skills to find. A firmware engineer who can write a clear one-page risk summary of a discovered vulnerability — what it is, what an attacker could do with it, how it should be fixed, and what the consequences of not fixing it are — is significantly more valuable than one who can only communicate in technical terms to other engineers.

Certifications and Continued Learning

Certifications signal demonstrated competence to employers and provide structured learning paths for engineers building new skills. The certifications most relevant to embedded security careers:

OSCP (Offensive Security Certified Professional): The most widely recognised hands-on penetration testing certification. Requires passing a 24-hour practical exam demonstrating exploitation of real systems. Not embedded-specific, but the exploitation techniques are directly applicable to embedded device penetration testing and the credential carries significant weight with hiring managers.

GREM (GIAC Reverse Engineering Malware): Covers binary analysis and reverse engineering with a focus on malware, but the skills are directly applicable to firmware analysis. Recognised by defence and security consulting employers.

CEH (Certified Ethical Hacker): Broad coverage of penetration testing methodology. Less technically rigorous than OSCP but widely recognised in enterprise hiring. Good as a complement to deeper technical certifications.

IEC 62443 Cybersecurity Certificate Program: From ISA/IEC, specifically for industrial control system security. Directly relevant for engineers targeting industrial IoT roles. The CSSLP (Certified Secure Software Lifecycle Professional) from ISC2 covers secure SDLC across all domains including embedded.

Conferences: Black Hat and DEF CON (annual, Las Vegas) are the premier security research conferences; both have an IoT/embedded security track. Embedded World (Nuremberg) is the leading embedded systems conference with a growing security track. ARM DevSummit covers Arm platform security features including TrustZone-M and DICE. Hardware Security Summit is dedicated to hardware security research. Attending these events, even if only via recorded talks, keeps you current with the practical techniques being used and researched in the field.

CTF competitions: Capture The Flag competitions with hardware and embedded categories (including DEF CON CTF, Hack The Box, and IoT-targeting contests such as Pwn2Own) provide structured practice with real hardware exploitation challenges. CTF write-ups are also one of the best learning resources available: they document exactly how specific vulnerabilities were found and exploited on real devices.

Building Practical Experience

Embedded security skills are learned by doing. No certification or university course substitutes for the experience of extracting firmware from a real device, finding a vulnerability in it, and understanding what it would take to exploit and patch it. Five ways to build this experience practically:

Personal hardware projects: Build a device with intentional security controls and then attack it yourself. An ESP32-based sensor node with MQTT, OTA updates, a local management API and a debug UART console covers most of the attack surfaces in this course. The discipline of then auditing your own implementation with the tools from Section 10 reveals how the controls you thought you implemented compare with what you actually shipped.

Firmware analysis on off-the-shelf hardware: Consumer IoT devices (routers, IP cameras, smart plugs, MQTT sensors) are widely available and legally analysable for security research purposes under most jurisdictions’ computer misuse law (check the specific laws in your jurisdiction before proceeding). Extracting firmware with binwalk, analysing it with Ghidra, testing the network interfaces with Wireshark and Scapy, and attempting to access the debug UART console with a USB-serial adapter is a complete hands-on curriculum in itself.

Bug bounty programmes: Several device manufacturers (including router vendors and IoT platform vendors) run bug bounty programmes that pay for responsibly disclosed vulnerabilities. Working on bug bounty provides real-world research experience, financial compensation, and a track record of findings that demonstrates practical skills to employers.

Open source contribution: Contributing to embedded security tools (binwalk, liboqs, Zephyr RTOS security subsystem, OpenOCD, FACT) builds both skills and professional visibility. Security bug reports and fixes in widely used open source embedded libraries are strong portfolio items.

Home lab: A minimal embedded security lab requires less than $200 of hardware: a USB-serial adapter ($5), a logic analyser ($15), a JTAG probe ($20), a ChipWhisperer Nano ($50), an ESP32 development board ($10) and an STM32 Nucleo board ($15). This equipment covers the hardware attack techniques in Section 10 of this course and is sufficient to practise all of the hardware security testing workflows.

Recommended Resources

The resources that provide the most value for continuing education in embedded security, beyond the tools and techniques covered in this course:

OWASP IoT Security Project (owasp.org/www-project-internet-of-things): The OWASP IoT Attack Surface Areas and the IoT Security Testing Guide are practical references that map attack surfaces to testing techniques. Updated periodically to reflect current threat landscape.

IoT Security Foundation (iotsecurityfoundation.org): Industry association that publishes the IoTSF Security Compliance Framework and best practice guidelines aligned with ETSI EN 303 645. Good resource for understanding the regulatory landscape and what compliance looks like in practice.

Azeria Labs (azeria-labs.com): Comprehensive ARM assembly and exploitation tutorials with a focus on embedded and IoT targets. The ARM exploitation series covers the techniques used in real embedded device vulnerability research.

Cryptography Engineering (Ferguson, Schneier and Kohno): For deeper coverage of the cryptographic foundations that embedded security depends on, this practitioner-oriented text, together with the wider academic literature on applied cryptography, provides the theoretical grounding that complements the practical implementation guidance in this course.

Embedded.fm Podcast: Industry podcast covering embedded systems engineering with periodic episodes on security topics. Good for staying current with industry trends and hearing from practitioners in the field.

NVD (nvd.nist.gov) and CVE feeds: Subscribe to the CVE feeds for the components and platforms you work with. The NVD provides CVSS scores and fix information; setting up alerts for new CVEs affecting your SBOM components is the minimum viable vulnerability monitoring practice.

Your Next Steps

Having completed this twelve-section course, you have the technical foundation to implement security across the full embedded device lifecycle. Translating that knowledge into practical capability requires applying it to real devices and real code. Six concrete next steps:

Audit an existing project: Take a firmware project you have worked on or have access to and run the Section 10 toolchain against it: Cppcheck, Semgrep with the custom rules from that section, binwalk on the output binary, and a Wireshark capture of the device’s network traffic. The findings will be instructive and the process of investigating them will cement the techniques more effectively than any further reading.

Build the hardware lab: Assemble the minimal hardware kit described in the practical experience section above. Connect a USB-serial adapter to the debug UART of a consumer IoT device and see what the console output reveals. Attach a logic analyser to the SPI bus between a microcontroller and its external flash and capture the boot sequence. These direct experiences make the abstract attack concepts in this course concrete.

Write a threat model for your current or next project: Apply the STRIDE methodology from Section 7 to a real device you are working on or planning. Identify the trust boundaries, enumerate the threats in each STRIDE category for each boundary, and prioritise the mitigations. Share the result with your team and use it to drive security requirements into the design.

Implement the hardening checklist: Take the complete hardening checklist from Section 11 and work through it against a development device. For each item that is not yet in place, implement it and verify it with the verification method listed. The process will identify gaps in the current security posture and produce a concrete backlog of security work.

Join the community: The embedded security community is active on several platforms: the r/ReverseEngineering and r/netsec subreddits, the Reverse Engineering Discord server, the Hardware Hacking Village at DEF CON, and the Embedded Security Podcast. The practical knowledge-sharing in these communities covers techniques and tools that are not yet in any textbook.

Teach what you know: The most effective consolidation of learning is teaching. Writing a blog post, giving a talk at a local meetup, or mentoring a colleague on a security topic you learned in this course forces the kind of precise, gap-free understanding that passive reading does not. It also builds professional visibility in a field where the community is small enough that reputation travels quickly.

Conclusion

The trajectory of the embedded device market points toward a future where security is not optional: not because vendors have developed a collective conscience, but because regulation is making insecurity financially costly, customers are learning to demand it, and the scale of the consequences when billions of poorly secured devices are compromised makes the alternative increasingly untenable. The engineers who understand how to build security into embedded devices, from architecture through to end-of-life, and who stay current with the technologies reshaping the discipline (post-quantum cryptography, on-device ML, hardware attestation, Zero Trust architecture and the evolving regulatory framework) will be in demand and well positioned to make a meaningful contribution to the security of the infrastructure the world increasingly depends on.

Security is not a problem that gets solved and stays solved. New vulnerabilities will be found in deployed firmware. New attack techniques will be developed that circumvent controls that seemed robust when they were designed. New regulations will raise the baseline requirements. The engineers who treat this as a reason for ongoing investment rather than a cause for fatigue are the ones who build devices that remain defensible for their full operational lifetime, and the ones who build careers that grow with the field. The foundation this course has provided gives you the starting point. What you build on it is up to you.
