Common Embedded System Vulnerabilities and Threats

The most common embedded system vulnerabilities are not exotic zero-days. They are repeating patterns: buffer overflows from unsafe C functions, credentials compiled directly into firmware, debug ports left open on shipped hardware, third-party libraries nobody maintains any more. This article works through every major vulnerability category in the embedded threat landscape, from firmware-level coding flaws to supply chain compromises and physical side-channel attacks, with real-world cases and concrete code examples showing both what goes wrong and what the fix looks like.

Firmware Weaknesses

Firmware is the first place most attackers look and the place most development teams spend the least time on security. The reasons are structural: firmware is written under schedule pressure in C, a language that puts memory management entirely in the developer’s hands, targeting hardware with no memory protection unit (MPU) configured or no operating system privilege model. A single bug anywhere in the firmware can give an attacker full control of the device with no further steps required.

The six firmware vulnerability classes that appear most frequently in security research and CVE (Common Vulnerabilities and Exposures) disclosures for embedded devices are:

  • Buffer overflows: Writing past the end of a fixed-size buffer into adjacent memory.
  • Hardcoded credentials: Passwords, API keys or encryption keys embedded in the binary.
  • Poor input validation: Accepting data without checking type, length or range.
  • Insecure update mechanisms: Accepting firmware images without verifying authenticity.
  • Memory leaks: Allocating heap memory without freeing it, eventually exhausting RAM.
  • Race conditions: Two tasks accessing shared state in an order the developer did not anticipate.

Each of these is covered in detail below, with code examples showing the vulnerable pattern and its secure replacement.

Buffer Overflows in Embedded C

A buffer overflow occurs when a write operation puts more data into a fixed-size buffer than the buffer can hold. The excess data overwrites whatever sits next to the buffer in memory: often a return address on the stack, a function pointer or a critical variable. On a device with no MPU and no stack canaries, the attacker can control exactly what that overwritten value is, redirecting execution to shellcode or to a known-good function that gives them privileged access.

The most common source is the unsafe C standard library functions that perform no bounds checking:

  • strcpy(dst, src): copies until the null terminator with no length limit. Use strncpy(dst, src, dst_size - 1) and set the final byte to '\0' explicitly, since strncpy does not guarantee termination.
  • strcat(dst, src): appends without checking remaining destination space. Use strncat(dst, src, dst_size - strlen(dst) - 1).
  • sprintf(buf, fmt, ...): writes formatted output with no length limit. Use snprintf(buf, buf_size, fmt, ...).
  • gets(buf): reads a line from stdin with no length limit at all; it was removed from the language entirely in C11. Use fgets(buf, buf_size, stdin).
  • scanf("%s", buf): reads a whitespace-delimited string with no length limit. Use an explicit field width, e.g. scanf("%255s", buf) for a 256-byte buffer.

Here is the vulnerable pattern and its corrected version in the context of a real embedded task: parsing a command received over UART:

/* VULNERABLE: UART command parser with no length check
   If the sender transmits more than 63 bytes before '\n',
   the stack is overwritten. On a device with no stack canaries,
   this directly enables arbitrary code execution. */

void parse_uart_command(void) {
    char cmd_buf[64];
    int  idx = 0;
    char ch;

    while (uart_read_byte(&ch) == UART_OK) {
        if (ch == '\n') break;
        cmd_buf[idx++] = ch;   /* No bounds check: overflow when idx >= 64 */
    }
    cmd_buf[idx] = '\0';
    execute_command(cmd_buf);
}
/* SECURE: Enforce the buffer limit at every write, drop oversized input
   sizeof(cmd_buf) - 1 leaves one byte for the null terminator.
   Oversized commands are discarded entirely rather than truncated silently,
   which prevents partial-command injection attacks. */

#define CMD_BUF_SIZE 64

void parse_uart_command(void) {
    char cmd_buf[CMD_BUF_SIZE];
    int  idx = 0;
    char ch;
    bool overflow = false;

    while (uart_read_byte(&ch) == UART_OK) {
        if (ch == '\n') break;

        if (idx >= (CMD_BUF_SIZE - 1)) {
            overflow = true;  /* Mark as invalid but keep draining input */
            continue;
        }
        cmd_buf[idx++] = ch;
    }

    cmd_buf[idx] = '\0';

    if (overflow) {
        log_security_event(SEC_EVENT_UART_OVERFLOW);
        return;  /* Discard the entire oversized command */
    }

    execute_command(cmd_buf);
}

Beyond the standard library, integer overflow is the second most common path to buffer overflow on embedded systems. When a length value is computed as an unsigned integer and the computation wraps around, the resulting allocation or copy size can be far smaller than intended, allowing a subsequent write to overflow the undersized buffer:

/* VULNERABLE: Integer overflow leading to undersized allocation
   If 'count' is 0x40000000 (2^30) and element_size is 4,
   count * element_size wraps around 2^32 to 0, so 'total' becomes 1,
   causing malloc to return a tiny allocation that is immediately overflowed. */

uint8_t *alloc_items(uint32_t count, uint32_t element_size) {
    uint32_t total = count * element_size + 1;   /* Integer overflow risk */
    return (uint8_t *)malloc(total);
}

/* SECURE: Check for overflow before computing the allocation size */
uint8_t *alloc_items_safe(uint32_t count, uint32_t element_size) {
    /* Detect multiplication overflow before it happens */
    if (element_size != 0 && count > (UINT32_MAX / element_size)) {
        return NULL;  /* Refuse the allocation */
    }
    uint32_t total = count * element_size;

    /* Detect addition overflow */
    if (total > UINT32_MAX - 1) {
        return NULL;
    }

    return (uint8_t *)malloc(total + 1);
}

Hardcoded Credentials and Secrets

Hardcoded credentials are among the most frequently discovered embedded system vulnerabilities because they are trivially found using tools every security researcher already has installed. The strings command, binwalk and Ghidra all reveal plaintext and near-plaintext secrets in firmware binaries within minutes of opening the file.

The categories of secret that appear hardcoded in firmware include: WiFi passwords used for device provisioning, MQTT broker credentials, REST API keys and tokens, cloud endpoint URLs containing authentication parameters, symmetric encryption keys, X.509 private keys embedded in the image, and default admin passwords that every device of that model shares.

# What an attacker or researcher runs against a firmware binary.
# binwalk extracts embedded file systems, compressed archives and known signatures.
# strings then finds printable character sequences in the extracted content.

# Step 1: Extract firmware contents
binwalk -e router_firmware_v2.3.1.bin

# Step 2: Search for credential-shaped strings in the extracted filesystem
grep -rE "(password|passwd|secret|api_key|token|private_key)" \
  _router_firmware_v2.3.1.bin.extracted/ \
  --include="*.conf" --include="*.cfg" --include="*.json" -i

# Step 3: Find hardcoded strings directly in binary blobs
strings _router_firmware_v2.3.1.bin.extracted/squashfs-root/usr/sbin/httpd \
  | grep -E "^[A-Za-z0-9+/]{20,}={0,2}$"   # Base64-shaped strings

The correct solution is to provision device-unique credentials at factory time into a protected storage region, completely separate from the firmware image. The firmware reads credentials at runtime from that protected region rather than having them compiled in.

For ESP32-based devices, the NVS (Non-Volatile Storage) partition with flash encryption enabled is the appropriate location. For STM32 devices, a dedicated protected flash sector with RDP Level 1 or higher is the baseline. For devices with a secure element (ATECC608A, SE050), credentials never leave the secure element: the element performs the cryptographic operation internally and returns only the result.

/* WRONG: Credentials hardcoded as constants in firmware source.
   These will appear in the compiled binary and be visible to anyone
   who extracts and strings the firmware image. */

#define MQTT_USERNAME  "device_user_prod"
#define MQTT_PASSWORD  "Xk9#mP2$vL7qN4"   /* Hardcoded, same on every device */
#define CLOUD_API_KEY  "sk-prod-a3f8b2c1d4e5f6078910"

/* CORRECT: Read credentials from protected NVS storage at runtime.
   The NVS partition is encrypted by ESP32 flash encryption.
   Credentials were provisioned at factory time, not compiled in. */

#include "nvs_flash.h"
#include "nvs.h"

typedef struct {
    char username[64];
    char password[64];
    char api_key[128];
} DeviceCredentials;

esp_err_t load_credentials(DeviceCredentials *creds) {
    nvs_handle_t handle;
    size_t len;
    esp_err_t err;

    err = nvs_open("credentials", NVS_READONLY, &handle);
    if (err != ESP_OK) return err;

    len = sizeof(creds->username);
    err = nvs_get_str(handle, "mqtt_user", creds->username, &len);
    if (err != ESP_OK) goto cleanup;

    len = sizeof(creds->password);
    err = nvs_get_str(handle, "mqtt_pass", creds->password, &len);
    if (err != ESP_OK) goto cleanup;

    len = sizeof(creds->api_key);
    err = nvs_get_str(handle, "api_key", creds->api_key, &len);

cleanup:
    nvs_close(handle);
    return err;
}

Poor Input Validation

Every byte of data that enters a firmware process from an external source is potentially attacker-controlled: bytes from a UART receive buffer, bytes in an MQTT message payload, bytes in an HTTP POST body, bytes in a BLE (Bluetooth Low Energy) characteristic write, bytes read from an SD card or external EEPROM. Treating any of those as trusted without validation is the root cause of injection attacks, type confusion bugs and protocol state machine corruption.

Input validation has three dimensions that must all be checked:

Type Validation

Is the data the expected type? A field that should be a 16-bit unsigned integer should not be processed as a signed 32-bit integer. A string that should contain only printable ASCII should not be passed to a shell command if it contains semicolons or backticks. A JSON value that should be a number should not be processed if it arrives as a string or array.

Range and Length Validation

Is the value within the allowed bounds? A temperature setpoint field should reject values below -40 and above 120 degrees Celsius even if they are valid integers. A device name field with a 32-byte buffer should reject strings longer than 31 bytes before copying. A packet length field that claims the packet is 65,000 bytes when the MTU (Maximum Transmission Unit) is 1,500 should be rejected immediately.

Format and Encoding Validation

Does the data conform to the expected format? A URL field should only contain characters valid in a URL. A firmware version string should match a semantic version pattern. A MAC address should match the six-octet colon-separated hex format and nothing else.
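As a concrete illustration of format validation, here is a minimal sketch of a MAC address checker for the six-octet colon-separated format mentioned above. The function name is illustrative, not taken from any particular codebase:

```c
#include <stdbool.h>
#include <ctype.h>
#include <string.h>

/* Accepts only the six-octet colon-separated hex format,
   e.g. "a4:cf:12:0e:9b:01". Anything else is rejected. */
bool is_valid_mac(const char *s) {
    if (s == NULL || strlen(s) != 17) {
        return false;
    }
    for (int i = 0; i < 17; i++) {
        if ((i % 3) == 2) {
            if (s[i] != ':') return false;               /* Separator positions */
        } else {
            if (!isxdigit((unsigned char)s[i])) return false;
        }
    }
    return true;
}
```

The fixed-length check up front means the per-character loop never has to reason about variable-length input.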

/* Input validation for a MQTT message carrying a temperature setpoint command.
   The payload arrives as a null-terminated string from the MQTT receive callback.
   All three validation dimensions are applied before the value is used. */

#include <string.h>   /* strnlen */
#include <stdlib.h>   /* strtof */
#include <errno.h>    /* errno, ERANGE */

#define SETPOINT_MIN_CELSIUS   -40
#define SETPOINT_MAX_CELSIUS    120
#define SETPOINT_PAYLOAD_MAXLEN  8   /* Longest valid value, e.g. "-40.000" (7 chars + NUL) */

typedef enum {
    SETPOINT_OK = 0,
    SETPOINT_ERR_NULL,
    SETPOINT_ERR_TOO_LONG,
    SETPOINT_ERR_NOT_NUMERIC,
    SETPOINT_ERR_OUT_OF_RANGE
} SetpointParseResult;

SetpointParseResult parse_setpoint(const char *payload, float *out_celsius) {
    if (payload == NULL || out_celsius == NULL) {
        return SETPOINT_ERR_NULL;
    }

    /* Length check: reject before any parsing attempt */
    if (strnlen(payload, SETPOINT_PAYLOAD_MAXLEN + 1) > SETPOINT_PAYLOAD_MAXLEN) {
        log_security_event(SEC_EVENT_OVERSIZED_INPUT);
        return SETPOINT_ERR_TOO_LONG;
    }

    /* Type check: must be a valid floating-point number */
    char *end_ptr;
    errno = 0;
    float value = strtof(payload, &end_ptr);

    if (end_ptr == payload || *end_ptr != '\0' || errno == ERANGE) {
        return SETPOINT_ERR_NOT_NUMERIC;
    }

    /* Range check: must be within physical operating limits */
    if (value < SETPOINT_MIN_CELSIUS || value > SETPOINT_MAX_CELSIUS) {
        log_security_event(SEC_EVENT_OUT_OF_RANGE_INPUT);
        return SETPOINT_ERR_OUT_OF_RANGE;
    }

    *out_celsius = value;
    return SETPOINT_OK;
}

Command injection deserves specific attention on embedded devices that call system commands or construct shell strings from user input. Even on embedded Linux systems where shell access is available only to privileged processes, a command injection vulnerability in a low-privilege daemon can chain into privilege escalation. The rule is simple: never construct shell command strings from external input. If you must call an external program, use execv() with a fixed path and explicitly passed arguments, never system() with a string containing user data.
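A hedged sketch of that rule, assuming an embedded Linux diagnostic daemon: the host argument is validated against a strict character whitelist and then passed as a single argv entry to execv(), never interpolated into a shell string. Both function names here are illustrative:

```c
#include <stdbool.h>
#include <ctype.h>
#include <string.h>
#include <unistd.h>

/* Allow only characters that can appear in a hostname or IPv4 address.
   Shell metacharacters (';', '|', '`', '$', spaces) never pass. */
bool is_safe_host_arg(const char *s) {
    if (s == NULL) return false;
    size_t len = strnlen(s, 256);
    if (len == 0 || len >= 256) return false;
    for (size_t i = 0; i < len; i++) {
        char c = s[i];
        if (!isalnum((unsigned char)c) && c != '.' && c != '-') {
            return false;
        }
    }
    return true;
}

/* Invoke ping with a fixed path and explicit argv: the user input is
   one argument to the program, never part of a shell command string. */
void run_ping(const char *host) {
    if (!is_safe_host_arg(host)) return;
    char *argv[] = { "/bin/ping", "-c", "4", (char *)host, NULL };
    if (fork() == 0) {
        execv(argv[0], argv);
        _exit(127);   /* execv only returns on failure */
    }
}
```

Because the input never reaches a shell, a payload like "8.8.8.8; rm -rf /" is rejected by the whitelist and, even if it were passed through, would be a single (invalid) hostname argument rather than two commands.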

Insecure Firmware Update Mechanisms

A firmware update path that does not authenticate the incoming image is not a security feature: it is a mass-compromise mechanism available to any attacker who can reach the update endpoint. The threat model is straightforward. If an attacker can deliver a firmware image and your device will flash and execute it without verification, the attacker owns every device in your fleet simultaneously, with no physical access and no exploitation of a specific bug.

The minimum required security properties for any firmware update mechanism are:

  • Authenticity: The image must be signed by the manufacturer’s private key. The device verifies the signature against the corresponding public key stored in OTP (One-Time Programmable) memory at factory time.
  • Integrity: The image must not have been modified in transit. The digital signature covers this if the signature scheme includes the full image hash (RSA-PSS or ECDSA over SHA-256).
  • Anti-rollback: A signed but older firmware version must be rejected. A monotonic counter in OTP memory records the minimum acceptable version number. Downgrade attacks are used to re-expose previously patched vulnerabilities.
  • Atomic installation: The update either completes fully and boots cleanly, or the device rolls back to the previous version. A power failure mid-flash should not leave the device in an unbootable state.

/* Firmware update verification using ECDSA P-256 signature check (mbedTLS).
   This runs before writing a single byte to the update flash partition.
   'image_data' is the raw firmware binary. 'image_len' is its length in bytes.
   'sig_buf' is the detached ECDSA signature. 'sig_len' is the signature length.
   The manufacturer's public key is stored as a const byte array compiled into
   the bootloader - the only key material that is safe to hardcode in firmware. */

#include "mbedtls/ecdsa.h"
#include "mbedtls/sha256.h"
#include "mbedtls/ecp.h"

/* Manufacturer public key (P-256, uncompressed, 65 bytes).
   This is a PUBLIC key - safe to hardcode. The matching PRIVATE key
   never leaves the secure signing infrastructure. */
static const uint8_t MANUFACTURER_PUB_KEY[65] = {
    0x04,  /* Uncompressed point indicator */
    /* X coordinate (32 bytes): */
    0xA1, 0xB2, 0xC3, 0xD4, 0xE5, 0xF6, 0x07, 0x18,
    0x29, 0x3A, 0x4B, 0x5C, 0x6D, 0x7E, 0x8F, 0x90,
    0x01, 0x12, 0x23, 0x34, 0x45, 0x56, 0x67, 0x78,
    0x89, 0x9A, 0xAB, 0xBC, 0xCD, 0xDE, 0xEF, 0xF0,
    /* Y coordinate (32 bytes): */
    0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88,
    0x99, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF, 0x00,
    0x10, 0x20, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80,
    0x90, 0xA0, 0xB0, 0xC0, 0xD0, 0xE0, 0xF0, 0x01
};

bool verify_firmware_signature(const uint8_t *image_data, size_t image_len,
                               const uint8_t *sig_buf,   size_t sig_len) {
    uint8_t image_hash[32];
    mbedtls_ecdsa_context ecdsa_ctx;
    mbedtls_ecp_keypair   keypair;
    bool result = false;
    int  ret;

    /* Step 1: Hash the full firmware image */
    mbedtls_sha256(image_data, image_len, image_hash, 0);

    /* Step 2: Load the manufacturer public key */
    mbedtls_ecdsa_init(&ecdsa_ctx);
    mbedtls_ecp_keypair_init(&keypair);

    ret = mbedtls_ecp_group_load(&keypair.grp, MBEDTLS_ECP_DP_SECP256R1);
    if (ret != 0) goto cleanup;

    ret = mbedtls_ecp_point_read_binary(
              &keypair.grp,
              &keypair.Q,
              MANUFACTURER_PUB_KEY,
              sizeof(MANUFACTURER_PUB_KEY));
    if (ret != 0) goto cleanup;

    /* Copy the public key into the ECDSA context used for verification */
    ret = mbedtls_ecdsa_from_keypair(&ecdsa_ctx, &keypair);
    if (ret != 0) goto cleanup;

    /* Step 3: Verify signature over the image hash */
    ret = mbedtls_ecdsa_read_signature(&ecdsa_ctx,
                                       image_hash, sizeof(image_hash),
                                       sig_buf, sig_len);
    result = (ret == 0);

cleanup:
    mbedtls_ecdsa_free(&ecdsa_ctx);
    mbedtls_ecp_keypair_free(&keypair);
    return result;
}
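The anti-rollback property listed above can be sketched in the same spirit. The OTP counter is abstracted into a plain struct here, because reading and advancing a real fuse-based counter is device-specific; all names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for a monotonic OTP counter. On real hardware
   'stored_min_version' would live in one-time-programmable fuse bits
   and could only ever increase. */
typedef struct {
    uint32_t stored_min_version;
} RollbackState;

/* Reject any image older than the recorded floor, even if its
   signature verifies: downgrade attacks re-expose patched bugs. */
bool version_acceptable(const RollbackState *st, uint32_t image_version) {
    return image_version >= st->stored_min_version;
}

/* After a successful update, raise the floor so older images are
   permanently rejected from then on. */
void commit_version(RollbackState *st, uint32_t image_version) {
    if (image_version > st->stored_min_version) {
        st->stored_min_version = image_version;
    }
}
```

The check runs after signature verification, not instead of it: both properties are required for a safe update path.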

Memory Leaks and Race Conditions

Memory leaks and race conditions are reliability issues that become security issues in embedded contexts where there is no operating system to reclaim leaked memory and no automatic restart when a process crashes.

Memory Leaks

On an embedded device with 128 KB of RAM running continuously for months, a leak of 100 bytes per hour will exhaust available heap in roughly 53 days. The device crashes or enters an undefined state. In a best case it reboots and recovers. In a worst case the crash corrupts NVS data or leaves the device in a configuration that bypasses security checks on the next boot.

The mitigation discipline is simple in principle and hard in practice: every malloc() must be paired with a free(), every allocation must be checked for NULL, and error paths must be audited to ensure they free any allocations made before the error occurred. Static analysis tools such as Cppcheck and Clang Static Analyzer detect many of these automatically.
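The pairing discipline can be made mechanical with the goto-cleanup idiom, sketched here with hypothetical buffer names: every allocation is checked, and every error path releases exactly the allocations made before it.

```c
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t *rx_buf;
    uint8_t *tx_buf;
} Channel;

/* Single exit point per failure depth: no path can leak an allocation. */
Channel *channel_create(size_t buf_size) {
    Channel *ch = malloc(sizeof(*ch));
    if (ch == NULL) goto fail;
    ch->rx_buf = malloc(buf_size);
    if (ch->rx_buf == NULL) goto fail_ch;
    ch->tx_buf = malloc(buf_size);
    if (ch->tx_buf == NULL) goto fail_rx;
    return ch;

fail_rx:
    free(ch->rx_buf);
fail_ch:
    free(ch);
fail:
    return NULL;
}

/* The matching destructor frees everything the constructor acquired.
   free(NULL) is a defined no-op, so the order of frees is not fragile. */
void channel_destroy(Channel *ch) {
    if (ch == NULL) return;
    free(ch->rx_buf);
    free(ch->tx_buf);
    free(ch);
}
```

On heap-constrained targets many teams go further and ban runtime malloc() entirely in favour of static pools, which eliminates this bug class at the cost of flexibility.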

Race Conditions

Race conditions in RTOS-based firmware occur when two tasks or an ISR (Interrupt Service Routine) and a task access shared state without proper synchronization. The security-relevant variant is a TOCTOU (Time-of-Check to Time-of-Use) race: the firmware checks a condition, then acts on it, but the condition changes between the check and the action.

/* VULNERABLE TOCTOU race condition in an authentication check.
   If an interrupt or higher-priority RTOS task can modify 'auth_state'
   between the check on line A and the privileged action on line B,
   an attacker who can trigger that modification can gain access
   without valid credentials. */

volatile bool auth_state = false;  /* Modified by authentication ISR */

void handle_admin_request(AdminCommand *cmd) {
    if (auth_state == true) {           /* Check (line A) */
        /* auth_state could be set to false HERE by another context */
        execute_admin_command(cmd);     /* Use  (line B) - TOCTOU window */
    }
}

/* SECURE: Copy the auth state atomically into a local variable.
   The local variable cannot be modified by an ISR or other task
   after it is read. The check and use both operate on the local copy. */

void handle_admin_request_safe(AdminCommand *cmd) {
    /* Disable interrupts briefly to get an atomic snapshot */
    taskENTER_CRITICAL();
    bool is_authenticated = auth_state;
    taskEXIT_CRITICAL();

    if (is_authenticated) {
        execute_admin_command(cmd);
    }
}

Insecure Interfaces

The firmware-level vulnerabilities above describe what goes wrong inside the binary. Insecure interfaces describe what goes wrong at the boundary where the device meets the outside world, whether that boundary is a web management page, a REST API, a BLE GATT service or a mobile companion app.

Web Interface Vulnerabilities

Embedded web servers (lighttpd, uhttpd, mongoose, custom implementations) expose the device to every web application vulnerability class. The six most commonly found in embedded device web interfaces are:

  • No HTTPS: Session tokens and credentials transmitted in plaintext, interceptable by any device on the same network.
  • Default or shared passwords: Every device of the model has the same admin password, which is published in the manual and available on the manufacturer’s support site.
  • Command injection: Diagnostic tools (ping, traceroute, DNS lookup) that pass user input to shell commands without sanitization. CVE-2021-20090 (Arcadyan routers) is a recent example where a path traversal in the web interface exposed configuration files containing credentials.
  • XSS (Cross-Site Scripting): Stored XSS in device names, SSID labels or log entries that execute in the admin browser session when the page is viewed.
  • CSRF (Cross-Site Request Forgery): State-changing requests accepted without a CSRF token, allowing a malicious web page to reconfigure the device when visited by an authenticated admin.
  • Directory traversal: URL path parameters that resolve to filesystem paths without sanitization, allowing ../../etc/shadow style reads on embedded Linux devices.
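As a sketch of the directory traversal defence, a request path can be rejected whenever any segment is ".." before it is joined to the web root. This is illustrative only; a production server should also canonicalise the final resolved path:

```c
#include <stdbool.h>
#include <string.h>

/* Reject any request path containing a ".." segment. The path must be
   absolute (leading '/'); each segment start is inspected in turn. */
bool path_is_safe(const char *req_path) {
    if (req_path == NULL || req_path[0] != '/') return false;
    const char *p = req_path;
    while (*p) {
        if (p[0] == '.' && p[1] == '.' &&
            (p[2] == '/' || p[2] == '\0')) {
            return false;           /* ".." segment found */
        }
        p = strchr(p, '/');
        if (p == NULL) break;
        p++;                        /* Advance to the start of the next segment */
    }
    return true;
}
```

Note that a name merely containing two dots (such as "..hidden") is still accepted; only a whole segment of exactly ".." is a traversal.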

API Security Problems

IoT devices that expose REST or MQTT APIs over a local network or the internet introduce five recurring API weaknesses:

  • No authentication on control endpoints: Lock, unlock, set, get and reboot endpoints accessible without credentials.
  • Excessive permissions per token: A single device token that authorises every action the API exposes, including firmware update and factory reset.
  • No rate limiting: Authentication endpoints that accept unlimited login attempts per second, enabling trivial brute-force against 4-digit PIN codes.
  • Verbose error messages: Error responses that identify the database schema, internal file paths, software version strings or specific reason for authentication failure (distinguishing “user not found” from “wrong password”).
  • No input validation: API parameters passed directly to internal functions without type or length checking, creating injection paths.
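The rate-limiting point above can be sketched as a fixed-window counter. The time source is injected as a parameter so the logic stays testable; on a device it would come from an RTOS tick count or RTC. All names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Fixed-window limiter: at most 'max_attempts' per 'window_seconds'. */
typedef struct {
    uint32_t window_start;
    uint32_t count;
    uint32_t window_seconds;
    uint32_t max_attempts;
} RateLimiter;

bool rate_limit_allow(RateLimiter *rl, uint32_t now) {
    if (now - rl->window_start >= rl->window_seconds) {
        rl->window_start = now;     /* A new window begins */
        rl->count = 0;
    }
    if (rl->count >= rl->max_attempts) {
        return false;               /* Throttled: reject this attempt */
    }
    rl->count++;
    return true;
}
```

Against a 4-digit PIN, capping login attempts to a handful per minute turns a seconds-long brute force into a multi-day one, and pairing the limiter with exponential lockout makes it impractical.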

Mobile App Risks

The companion app is part of the embedded product’s attack surface even though it runs on a phone, not the device. Weaknesses found by reversing companion APKs and IPAs routinely include: API keys and server URLs hardcoded in the app binary, cleartext credentials stored in shared preferences or NSUserDefaults, BLE pairing using “Just Works” mode with no MITM protection, no certificate pinning on HTTPS connections to the cloud backend, and insecure direct object references in API calls where a sequential device ID allows any authenticated user to access any other user’s device.

Supply Chain Risks

Supply chain attacks compromise a device before it reaches the end user, either by tampering with hardware components, injecting malicious code into software dependencies or compromising the update delivery mechanism. They are particularly difficult to detect because the attack is embedded in components or processes that are implicitly trusted.

Counterfeit and Tampered Components

Counterfeit electronic components enter the supply chain through grey-market distributors, particularly for legacy or allocation-constrained parts. Security-relevant risks from counterfeit components include:

  • Absence of security features present in the genuine part: a counterfeit microcontroller may not implement the hardware crypto accelerator or flash read protection that the genuine part provides, causing security controls that rely on those features to silently fail.
  • Reduced reliability characteristics that cause field failures in patterns inconsistent with the genuine component, masking the counterfeit origin.
  • In the most sophisticated cases, hardware trojans: additional logic embedded in the silicon that activates under specific conditions to exfiltrate data, create a backdoor or cause failure. The 2018 Bloomberg Businessweek “Big Hack” story claimed this had been done to server motherboards, though the specific claims remain disputed. The general technique is real and well-documented in academic security research.

Mitigation: source components from authorised distribution channels only, conduct incoming inspection against published datasheets and use golden-sample comparison for high-security applications.

Compromised Third-Party Libraries

Industry surveys report that a majority of organisations (62% in one frequently cited figure) have experienced a supply chain security incident. Most firmware projects use third-party libraries: mbedTLS, FreeRTOS, lwIP, Newlib, wolfSSL and various vendor SDKs. Each library is a dependency with its own CVE history, maintenance lifecycle and potential for compromise.

The attack vectors are:

  • Known-vulnerable versions: Using a version of mbedTLS or lwIP that has published CVEs with available exploits. This is the most common supply chain risk and the most easily addressed.
  • Abandoned projects: Libraries that no longer receive security updates. Once the maintainer stops issuing patches, any vulnerability discovered after that point will never be fixed.
  • Dependency confusion: Attackers publish packages with the same name as private internal libraries on public registries, causing build systems that search public registries first to pull the malicious version.
  • Targeted compromise: An attacker compromises the maintainer’s account, build infrastructure or CI pipeline and injects malicious code into a legitimate package. The SolarWinds attack (2020) followed this pattern at the software level, distributing a backdoored build of the Orion network management platform to approximately 18,000 customers through the official update mechanism. ASUS Live Update (2019) is another instance: attackers with access to ASUS’s build infrastructure distributed malware-laced updates signed with a legitimate ASUS certificate to approximately 1 million devices before the campaign was discovered.

Mitigation practices: maintain an SBOM (Software Bill of Materials) for every firmware release, subscribe to CVE feeds for all components used, pin dependency versions and verify checksums in the build system, and use a private artifact registry with access control rather than pulling from public sources at build time.

Malicious or Compromised Firmware Delivered Through Legitimate Channels

If an attacker can compromise the build server, the code signing infrastructure or the OTA update delivery system, they can distribute malicious firmware to every device in the field under the guise of a legitimate update. Defences: sign firmware with a hardware security module (HSM) that stores the private key in tamper-resistant hardware, require multi-party authorisation for production signing operations, and implement anti-rollback so that a downgrade to a pre-compromise firmware version cannot be forced.

Physical Tampering and Side-Channel Attacks

Physical attacks against embedded devices target the hardware layer directly, bypassing any software-based security control. The assumption that physical access equals game over is largely correct for devices that have not been specifically hardened against it, but the degree of sophistication required varies considerably across attack types.

Memory Chip Extraction

The simplest physical attack: desolder the flash memory chip from the PCB, place it in a programmer and dump the contents. This requires a hot air station, a flash programmer and about 20 minutes of work. Without flash encryption, the entire firmware image, any stored credentials and any provisioned keys are recovered in plaintext. Flash encryption (ESP32 AES-256, STM32 with external encrypted flash, or dedicated SPI flash devices with hardware encryption) makes this attack yield only ciphertext.

Bus Probing

Attach a logic analyzer to SPI, I2C or UART traces on the PCB to passively capture all traffic. No soldering required in most cases: test point pads or through-hole via pads can be probed with spring-loaded clips. This captures all data transiting those buses including: firmware images read from external SPI flash at boot time, sensor data, EEPROM configuration reads (containing credentials), and any debug output the firmware emits over UART.

Power Analysis and Fault Injection

Side-channel attacks extract information from the physical characteristics of a device’s operation rather than from its logical outputs. Power analysis and fault injection are the two most practically relevant against embedded targets:

SPA/DPA (Simple and Differential Power Analysis): The power consumption of a processor varies depending on the data it is processing; AES operations, for example, consume different amounts of power depending on the key byte values. By recording thousands of power traces during encryption operations and applying statistical analysis, an attacker can recover the AES key without ever touching the firmware. The ChipWhisperer platform (open source hardware and software) makes this attack accessible for under $100 in equipment cost.

Voltage glitching: Briefly dropping the supply voltage by 100-200 mV for 10-100 nanoseconds during a specific instruction causes the processor to malfunction in a predictable way: skipping an instruction, loading a register with a different value or corrupting a comparison result. Attackers use this to skip the signature verification instruction in a secure boot implementation, allowing unsigned firmware to boot. The timing precision required is achievable with a microcontroller-based glitching circuit costing under $50.

Clock glitching: Inserting extra clock pulses or removing clock pulses during execution causes similar instruction-skipping effects. Some processors are more susceptible to clock glitching than voltage glitching.

Mitigations at the hardware level include: voltage monitors that halt the processor on supply deviation, redundant security checks (verify the condition twice in different code paths so a single glitch cannot bypass both), adding randomised delays between security checks to defeat precise timing attacks, and using microcontrollers with built-in glitch detection circuits (some STM32 and Nordic nRF series devices).
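The redundant-check mitigation can be sketched as follows: the security decision is evaluated twice and encoded as distinct non-trivial constants rather than a plain boolean, so a single skipped instruction or flipped bit is unlikely to produce a pass in both paths. This is a simplified illustration, not a certified countermeasure, and the constant values are arbitrary choices:

```c
#include <stdbool.h>
#include <stdint.h>

/* Multi-bit pass/fail constants: a glitch that zeroes or flips one bit
   of the result cannot turn AUTH_FAIL into AUTH_PASS. */
#define AUTH_PASS  0xA5C3u
#define AUTH_FAIL  0x5A3Cu

uint16_t check_signature_once(bool sig_ok) {
    return sig_ok ? AUTH_PASS : AUTH_FAIL;
}

bool glitch_hardened_decision(bool sig_ok) {
    /* 'volatile' discourages the compiler from folding the two
       evaluations into one, keeping the checks physically separate. */
    volatile uint16_t r1 = check_signature_once(sig_ok);
    volatile uint16_t r2 = check_signature_once(sig_ok);

    if (r1 != AUTH_PASS) return false;   /* First independent check  */
    if (r2 != AUTH_PASS) return false;   /* Second independent check */
    if ((r1 ^ r2) != 0)  return false;   /* Both must agree exactly  */
    return true;
}
```

Real hardened bootloaders additionally randomise the delay between the two checks so an attacker cannot time a single glitch to hit both.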

Tamper Detection

Where physical security matters and hardware hardening is insufficient, active tamper detection provides a last line of defence. Implementations range in sophistication:

  • Tamper-evident seals: Void-on-open labels over enclosure screws. Detect tampering after the fact, do not prevent it.
  • Tamper switches: Mechanical switches connected to a GPIO that trigger when the enclosure is opened. Can be defeated by careful opening, but raise the difficulty. The response on trigger should be to zeroize keys and enter a locked state.
  • Conductive mesh: A mesh of fine conductive traces covering the PCB surface. Drilling or milling through the PCB to access components breaks traces, triggering detection. Used in payment terminals and hardware security modules.
  • Secure enclaves: Dedicated secure processor sub-systems (ARM TrustZone, ATECC608A, SE050) that zeroize their key storage autonomously when tamper conditions are detected, regardless of what the main application processor is doing.

Real Attack Cases and Their Lessons

Four cases are examined here because each one illustrates a different vulnerability class at a different layer of the embedded stack, and together they cover the full range from consumer IoT through automotive to medical devices.

Mirai Botnet (2016): Default Credentials at Scale

Mirai infected over 600,000 IoT devices by scanning the internet for Telnet services and attempting a list of 62 default username/password combinations sourced from device manuals. IP cameras and DVRs from multiple manufacturers shared the same default credentials. The infected devices were then used to launch a DDoS (Distributed Denial of Service) attack against Dyn, a major DNS provider, taking down Twitter, Netflix, Reddit, GitHub and Spotify for hours on 21 October 2016.

The vulnerability required no exploitation skill. The entire attack infrastructure was built on credentials that every user could change but almost no user did, because the devices worked without requiring a change and the manufacturers made no effort to force one.

Lesson: Unique per-device credentials set at the factory and forced change on first use are not optional security niceties. They are mandatory. Regulations in the UK (PSTI Act 2024), EU (Cyber Resilience Act) and US (IoT Cybersecurity Improvement Act) now mandate this specifically because the industry did not adopt it voluntarily after Mirai.

Jeep Cherokee Remote Hack (2015): Missing Network Segmentation

Researchers Charlie Miller and Chris Valasek demonstrated complete remote control of a 2014 Jeep Cherokee at highway speed over a cellular connection. The attack exploited an internet-exposed Uconnect infotainment system, pivoted to the CAN (Controller Area Network) bus, and issued commands to the ABS and steering ECUs (Electronic Control Units). The critical flaw: the infotainment CAN bus and the chassis/safety CAN bus were bridged without access control, so any node on the infotainment bus could send authenticated-looking frames to safety-critical ECUs.

The recall that followed covered 1.4 million vehicles. The fix involved both a software patch to the Uconnect system and, for some models, a physical CAN bus isolator fitted by dealers.

Lesson: Network isolation between subsystems with different trust levels is a safety requirement. CAN bus lacks authentication by design and that cannot be fixed in the protocol layer. AUTOSAR SecOC provides message authentication codes at the application layer as a compensating control, but it requires that all ECUs on a bus be updated to support it, which is impractical for deployed vehicles.
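To make the SecOC idea concrete, here is a sketch of a truncated-MAC-plus-freshness scheme packed into an 8-byte CAN payload. The field layout is an assumption for illustration, not the AUTOSAR wire format, and the keyed FNV-1a hash is a placeholder only — SecOC profile 1 uses a real AES-CMAC truncated to 24 bits.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define DATA_LEN  4   /* application payload bytes */
#define MAC_LEN   3   /* truncated MAC, 24 bits */

/* PLACEHOLDER keyed hash (FNV-1a). Illustration only; NOT secure. */
static uint32_t placeholder_mac(const uint8_t *key, size_t keylen,
                                const uint8_t *msg, size_t msglen)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < keylen; i++) { h = (h ^ key[i]) * 16777619u; }
    for (size_t i = 0; i < msglen; i++) { h = (h ^ msg[i]) * 16777619u; }
    return h;
}

/* Payload layout: data || freshness(low byte) || MAC(24 bits).
 * The MAC covers the CAN ID, the FULL freshness counter and the data,
 * so a replayed or re-addressed frame fails verification. */
void secoc_pack(uint32_t can_id, uint32_t freshness,
                const uint8_t data[DATA_LEN],
                const uint8_t *key, size_t keylen, uint8_t out[8])
{
    uint8_t authed[4 + 4 + DATA_LEN];
    memcpy(authed, &can_id, 4);
    memcpy(authed + 4, &freshness, 4);
    memcpy(authed + 8, data, DATA_LEN);
    uint32_t mac = placeholder_mac(key, keylen, authed, sizeof authed);

    memcpy(out, data, DATA_LEN);
    out[4] = (uint8_t)freshness;       /* truncated freshness */
    out[5] = (uint8_t)(mac >> 16);
    out[6] = (uint8_t)(mac >> 8);
    out[7] = (uint8_t)mac;
}

/* Receiver tracks its own full counter, checks the truncated byte,
 * then recomputes and compares the MAC. */
bool secoc_verify(uint32_t can_id, uint32_t expected_freshness,
                  const uint8_t payload[8],
                  const uint8_t *key, size_t keylen)
{
    if ((uint8_t)expected_freshness != payload[4]) { return false; }
    uint8_t authed[4 + 4 + DATA_LEN];
    memcpy(authed, &can_id, 4);
    memcpy(authed + 4, &expected_freshness, 4);
    memcpy(authed + 8, payload, DATA_LEN);
    uint32_t mac = placeholder_mac(key, keylen, authed, sizeof authed);
    return payload[5] == (uint8_t)(mac >> 16)
        && payload[6] == (uint8_t)(mac >> 8)
        && payload[7] == (uint8_t)mac;
}
```

The design trade-off is visible in the layout: authenticating the frame costs half of the 8-byte CAN payload, which is exactly why retrofitting SecOC onto an existing bus means touching every ECU's message catalogue.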

Abbott Cardiac Devices (2017): Unpatched Firmware in Medical Hardware

The US FDA (Food and Drug Administration) issued a safety communication in 2017 warning that certain pacemakers and implantable defibrillators manufactured by St. Jude Medical (acquired by Abbott) contained radio frequency vulnerabilities allowing an attacker within radio range to issue unauthorised commands, potentially causing inappropriate shocks or battery drain. Approximately 465,000 implanted devices were affected.

Abbott issued a firmware update in August 2017, the first time a firmware update had been pushed to already-implanted cardiac devices in the US. The update required patients to visit a clinic for a 3-minute update procedure rather than being delivered OTA, because of concerns about update failure risks on implanted devices.

Lesson: Embedded system vulnerabilities in medical devices have patient safety implications that fundamentally change the risk calculus. Security testing must be part of the pre-market design validation process, not a post-release discovery cycle. The FDA’s 2023 mandatory cybersecurity guidance for medical devices was a direct regulatory response to incidents like this one.

Ring Camera Compromise (2019): Credential Stuffing Against Embedded Endpoints

Multiple Ring camera users reported in late 2019 that attackers had gained access to their cameras and were using the two-way audio to speak to, and in some cases harass, family members including children. The attack method was credential stuffing: using username/password pairs harvested from unrelated data breaches to attempt login to Ring accounts, exploiting users who reused passwords across services.

Ring did not have two-factor authentication (2FA) enabled by default. Once an attacker authenticated to the Ring account, all cameras associated with that account were accessible through the cloud API.

Lesson: Cloud-connected embedded devices inherit the authentication weaknesses of their cloud accounts. 2FA must be enabled by default, not offered as an optional extra. Notification on new device login and anomalous access patterns (login from new IP geolocation, multiple cameras accessed in rapid succession) are standard detection controls that Ring lacked at the time.

Common Patterns Across All Attacks

Examining these cases alongside the broader vulnerability landscape, five patterns recur in almost every significant embedded security incident:

Default Credentials

Factory-set credentials that are identical across all devices of a model, published in documentation and never required to change. Present in Mirai, Ring, and the majority of router vulnerabilities disclosed annually. The fix is both technically and organisationally straightforward: generate unique credentials per device at the factory, store them in protected memory and print them on a label. The barrier is cost and process change, not technical difficulty.
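A sketch of the factory provisioning step. The linear congruential generator stands in for a real CSPRNG — the actual provisioning host would draw from /dev/urandom or an HSM, and the device would store only a hash of the result — but the shape of the fix is this small.

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in CSPRNG (an LCG, NOT secure): do not ship this. */
static uint32_t rng_state = 0xC0FFEE42u;
static uint32_t rand32(void)
{
    rng_state = rng_state * 1664525u + 1013904223u;
    return rng_state;
}

#define PW_LEN 16

/* Unambiguous charset: no 0/O, 1/l/I, so the printed label is readable
 * by the end user typing it in. */
static const char charset[] =
    "23456789abcdefghjkmnpqrstuvwxyzABCDEFGHJKMNPQRSTUVWXYZ";

/* One call per unit on the factory line: the plaintext goes on the
 * label, only a salted hash goes into the device's protected memory. */
void generate_device_password(char out[PW_LEN + 1])
{
    for (size_t i = 0; i < PW_LEN; i++) {
        /* modulo bias is ignorable for a sketch; a real implementation
         * would use rejection sampling */
        out[i] = charset[rand32() % (sizeof charset - 1)];
    }
    out[PW_LEN] = '\0';
}
```

Sixteen characters from a 54-symbol alphabet gives roughly 92 bits of entropy per device, which puts Mirai-style credential lists permanently out of business for that product line.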

No Encryption in Transit

Management traffic, sensor data, firmware update downloads and API communication sent in plaintext. Capturable by any device on the same network, any router in the path or any wireless eavesdropper within range. The barrier to adopting TLS on constrained devices has dropped dramatically: mbedTLS with a minimal configuration compiles into under 60 KB of flash on Cortex-M4, and hardware TLS accelerators are available on mid-range microcontrollers.
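As a rough illustration of what "minimal configuration" means in practice, a client-only config fragment along these lines strips mbedTLS down to one cipher suite. The macro names follow the mbedTLS `config.h` convention, but the exact set required and the resulting flash footprint depend on the library version and the suite chosen, so treat this as a sketch, not a drop-in file.

```c
/* Minimal-footprint TLS client sketch: TLS 1.2 only, one suite
 * (ECDHE-ECDSA with AES-128-GCM), one curve, reduced record buffers. */

#define MBEDTLS_SSL_TLS_C
#define MBEDTLS_SSL_CLI_C
#define MBEDTLS_SSL_PROTO_TLS1_2

#define MBEDTLS_KEY_EXCHANGE_ECDHE_ECDSA_ENABLED
#define MBEDTLS_ECDH_C
#define MBEDTLS_ECDSA_C
#define MBEDTLS_ECP_C
#define MBEDTLS_ECP_DP_SECP256R1_ENABLED   /* single curve */

#define MBEDTLS_AES_C
#define MBEDTLS_GCM_C
#define MBEDTLS_SHA256_C
#define MBEDTLS_CIPHER_C
#define MBEDTLS_MD_C
#define MBEDTLS_PK_C
#define MBEDTLS_BIGNUM_C
#define MBEDTLS_X509_CRT_PARSE_C
#define MBEDTLS_ASN1_PARSE_C
#define MBEDTLS_OID_C

#define MBEDTLS_ENTROPY_C
#define MBEDTLS_CTR_DRBG_C

/* Shrink the 16 KB default record buffers; requires the peer to
 * honour the max_fragment_length extension. */
#define MBEDTLS_SSL_IN_CONTENT_LEN  2048
#define MBEDTLS_SSL_OUT_CONTENT_LEN 2048
```

Cutting the record buffers is often the bigger win on RAM-starved parts: the two 16 KB defaults alone exceed the total SRAM of many Cortex-M0+ devices.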

No Patch Mechanism

Devices that cannot receive firmware updates remain vulnerable to every bug they shipped with for their entire operational lifetime. This is the embedded security debt problem: bugs discovered five years after device release cannot be fixed, and the deployed fleet continues to be exploitable indefinitely. Every new embedded design must include an authenticated OTA update mechanism, even if the first firmware version it delivers is the same version that ships.
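A sketch of the acceptance logic such a mechanism needs. Signature verification is delegated to a placeholder function pointer — a real bootloader would verify ECDSA or Ed25519 over the image against a public key baked into ROM — and the header layout and magic value are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

struct fw_header {
    uint32_t magic;          /* identifies a firmware image */
    uint32_t version;        /* monotonically increasing build number */
    uint32_t length;         /* image body length in bytes */
    uint8_t  signature[64];  /* over version || length || body */
};

#define FW_MAGIC 0x46574D31u   /* illustrative magic value */

/* Placeholder for real public-key signature verification. */
typedef bool (*sig_verify_fn)(const struct fw_header *, const uint8_t *body);

/* The three gates every OTA path needs, in order: is it an image at
 * all, is it newer than what is installed (anti-rollback), and is it
 * authentic. Only after all three does the image reach flash. */
bool fw_accept(const struct fw_header *hdr, const uint8_t *body,
               uint32_t installed_version, sig_verify_fn verify)
{
    if (hdr->magic != FW_MAGIC)              { return false; }
    if (hdr->version <= installed_version)   { return false; }
    if (!verify(hdr, body))                  { return false; }
    return true;   /* safe to write to the inactive slot */
}

/* Stand-in verifiers for demonstration. */
static bool sig_ok(const struct fw_header *h, const uint8_t *b)
{ (void)h; (void)b; return true; }
static bool sig_bad(const struct fw_header *h, const uint8_t *b)
{ (void)h; (void)b; return false; }
```

The anti-rollback check is the one most often forgotten: without it, an attacker can legitimately "update" a patched device back to the signed image containing the bug they want to exploit.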

Missing Network Segmentation

Safety-critical and non-critical subsystems on the same bus or VLAN without access control between them. Demonstrated in the Jeep Cherokee case. Present in industrial control networks where SCADA workstations share a flat network with both the engineering workstation and the internet-facing historian server.

Insufficient Pre-Release Security Testing

Vulnerabilities that a one-day security review would have found shipping in production hardware. Buffer overflows in input parsers, command injection in web diagnostic tools, hardcoded credentials visible in a five-minute strings analysis. The cost of finding a vulnerability before release is orders of magnitude lower than the cost of a recall, an FDA enforcement action or a class action lawsuit after release. IoT attacks have tripled in the past three years, and regulatory pressure is increasing in direct proportion.

Conclusion

The embedded system vulnerability landscape is wide but not unpredictable. Buffer overflows from unsafe string functions, credentials compiled into binaries, debug ports left open on production hardware, unsigned firmware update paths, supply chain dependencies with untracked CVEs and physical interfaces accessible to anyone with a logic analyzer are the same categories appearing in disclosures year after year. Understanding each one precisely, knowing what the vulnerable code pattern looks like and what the secure replacement is, and building the verification steps into your development process before release, is how you stop shipping the same bugs the industry has been shipping for two decades. The real attack cases make clear what happens when you do not.
