Portable and Secure C: A Practical Guide to MISRA C & CERT C

The C language has powered satellites, pacemakers, and aircraft flight computers for five decades. That longevity is not accidental, and neither is the discipline required to keep it safe.
- Why portability matters, and why it’s harder than you think
- The great myth of “Embedded C vs C”
- Who sets the guidelines, and by what authority
- MISRA C and CERT C: why two standards when we want both?
- Industry values and testing for security and portability
- Testing strategies that complement static analysis
- Failures, liabilities, and the price of non-compliance
- The future of C: C23, new ISO directions, and the long game
- Rust as a reference point
Why portability matters, and why it’s harder than you think
Ask a firmware engineer what portability means and you’ll likely hear something about target architectures: ARM Cortex-M versus RISC-V versus MIPS. Swap the compiler flags, rewrite the linker script, done. That’s migration, not portability.
Real portability is subtler and far more dangerous. It lives in the gaps between what the C standard defines, what your compiler assumes, and what your hardware silently does. It hides in a line like:
```c
int x = 65536;
short s = (short)x; /* Implementation-defined when short is 16 bits:
                       commonly 0 by truncation, but the standard lets
                       each implementation choose the result. */
```
On your 32-bit x86 development machine with GCC, that cast might produce a predictable result. On a target with a different ABI, a different INT_MAX, or a compiler that treats overflow as a licence to optimise away your entire conditional, the result is anything but.
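The portable alternative is to check the range explicitly before narrowing, using the limits from `<stdint.h>`. A minimal sketch (the helper name is illustrative, not from any standard):

```c
#include <stdbool.h>
#include <stdint.h>

/* Narrow a 32-bit value to 16 bits only when it is representable.
 * Returns false (and leaves *out untouched) on out-of-range input,
 * instead of relying on implementation-defined truncation. */
static bool narrow_to_i16(int32_t in, int16_t *out)
{
    if ((in < INT16_MIN) || (in > INT16_MAX)) {
        return false;
    }
    *out = (int16_t)in; /* guaranteed value-preserving on any conforming target */
    return true;
}
```

The check compiles and behaves identically on 16-, 32-, and 64-bit targets, which is the entire point.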
“Portability is not a feature you bolt on at the end. It’s a constraint you design to from the start, or you pay for it with bugs that only manifest in the field.”
The cost of getting this wrong is measured in recalls, patch cycles, and in safety-critical domains, human lives. The 1996 Ariane 5 Flight 501 disaster, caused by a 64-bit floating-point value being converted to a 16-bit integer without range checking, destroyed a 500-million-dollar rocket 37 seconds after launch. The code had been correct for Ariane 4. It was portable only by accident.
Portability matters because C is the language of constrained targets, and constrained targets are everywhere: automotive ECUs, industrial PLCs, medical implants, avionics, SCADA systems. The code you write today will outlive the compiler you used to build it. The board it runs on today may be replaced by something with a different word size tomorrow. Designing for portability isn’t an academic exercise. It’s professional hygiene.
The great myth of “Embedded C vs C”
There is a persistent belief in firmware circles that “Embedded C” is a separate dialect: a hardened, stripped-down version of the language with special rules for registers, memory-mapped I/O, and interrupt handlers. Developers treat it as an excuse. “Of course I’m using implementation-defined behaviour — this is embedded code.”
This is fiction. There is no Embedded C standard separate from C89, C99, C11, or C23. There are only C implementations targeting constrained environments, and all of them are still governed by ISO/IEC 9899.
The misconception
The myth likely originated from the fact that bare-metal code frequently uses genuinely non-portable constructs:
- inline assembly (code enclosed with #asm … #endasm)
- volatile-qualified, memory-mapped peripheral registers
- interrupt attribute macros

which do feel “different” from desktop C. They are. But they’re non-portable not because they’re “embedded”; they’re non-portable because they’re implementation extensions. The C standard has always accounted for this through its notions of undefined and implementation-defined behaviour. Recognising the boundary between standard C and your vendor’s extensions is what separates professional firmware from code that breaks mysteriously when you switch toolchains.
The practical consequence of the myth is that it discourages engineers from applying rigorous coding standards. MISRA C was written specifically for embedded and safety-critical C. Every rule in MISRA C 2023 has been crafted with resource-constrained, compiler-diverse, long-lived firmware in mind.
The myth also conflates two distinct problems. Platform-specific code (the `__attribute__((interrupt))` handler, the DMA register layout, the linker section pragma) belongs in a hardware abstraction layer (HAL), clearly bounded, isolated, and walled off from everything else. Everything above the HAL, such as:
- The business logic
- The protocol stack
- The sensor driver
should be written in portable, standard C.
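What that boundary looks like in practice can be sketched with a minimal UART driver. The register layout and flag values below are hypothetical, not any real SoC’s map; the point is that only this one unit knows about hardware, and taking the register block as a parameter keeps the driver testable on a host:

```c
#include <stdint.h>

/* Illustrative register block for a hypothetical UART. */
typedef struct {
    volatile uint32_t DR;    /* data register */
    volatile uint32_t FLAGS; /* status flags  */
} uart_regs_t;

#define UART_TX_FULL 0x20u   /* hypothetical "TX FIFO full" bit */

/* On target this would be bound to a fixed address, e.g.
 *   #define UART0 ((uart_regs_t *)0x40001000u)
 * Passing the block in keeps the logic host-testable and portable. */
static void uart_putc(uart_regs_t *uart, uint8_t c)
{
    while ((uart->FLAGS & UART_TX_FULL) != 0u) {
        /* spin until the TX FIFO has room */
    }
    uart->DR = (uint32_t)c;
}
```

Everything above this function never touches a register, so porting to a new SoC means rewriting one file, not forty thousand lines.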
Calling everything “Embedded C” and shrugging at portability issues is how you end up with 40,000-line monolithic HAL files that might take months to port to a new SoC generation.
Who sets the guidelines, and by what authority
The landscape of C coding standards is fragmented in a way that reflects the industries that depend on C most heavily. Understanding who writes the rules and why their authority matters is essential before you can meaningfully apply them.
ISO / IEC JTC1 SC22 WG14
The C language itself is standardised by ISO Working Group 14. Their output (C89, C99, C11, C17, and now C23) is the normative foundation. Every compiler, every coding standard, every static analysis tool references this document. WG14 doesn’t write portability guidelines for embedded systems; they define what the language is.
MISRA
The Motor Industry Software Reliability Association began as a UK automotive consortium in the early 1990s. Their first C guidelines (1998) were explicitly aimed at making C safer for vehicular embedded systems. MISRA C is now on its fourth major edition (2023) and has been adopted far beyond automotive: aerospace (DO-178C contexts), medical devices, railway (EN 50128), and industrial control. MISRA’s standard is a purchased document, which shapes how it propagates through supply chains: it is usually mandated by OEMs on their Tier 1 and Tier 2 suppliers.
CERT / Carnegie Mellon SEI
The CERT Coordination Center at Carnegie Mellon’s Software Engineering Institute took a different approach. Their CERT C Coding Standard (now in its third edition, aligned to C11) emerged from secure software research and is publicly available. CERT’s perspective is fundamentally security-first: the rules exist to eliminate classes of vulnerability — buffer overflows, integer wrapping exploits, race conditions — rather than to satisfy automotive functional safety requirements.
IEC, AUTOSAR, DO-178C
Domain-specific standards layer on top of or reference MISRA and CERT. IEC 61508 (functional safety) mandates the use of appropriate coding standards; AUTOSAR’s C++ guidelines borrow heavily from MISRA philosophy; DO-178C for airborne software requires “coding standards” without specifying which — in practice, MISRA C is the industry answer.
| Body | Standard | Focus | Access | Primary Domain |
|---|---|---|---|---|
| ISO WG14 | ISO/IEC 9899 | Language definition | Purchased | Universal |
| MISRA | MISRA C 2023 | Safety + portability | Purchased | Automotive, safety-critical |
| CERT SEI | CERT C (C11) | Security + correctness | Free / online | Systems, secure software |
| IEC | IEC 61508 | Functional safety process | Purchased | Industrial, all domains |
MISRA C and CERT C: why two standards when we want both?
This is the question every team asks when they first encounter the landscape. Both standards exist to make C safer and more correct. Both prohibit dangerous constructs. Both reference the same underlying C standard. So why are there two, and which one should you use?
The answer reveals something important about the nature of “safe code” itself: the threat model changes depending on what you’re protecting against.
MISRA C 2023 (Safety-by-design)
- Rooted in functional safety - IEC 61508, ISO 26262
- Focuses on determinism, defined behaviour, and portability across toolchains
- Rule-based with mandatory, required, and advisory tiers
- Mandates use of `stdint.h` fixed-width types
- Prohibits dynamic memory allocation entirely (Rule 21.3)
- Prohibits recursion (Rule 17.2), so worst-case stack depth can be bounded
- Designed for compliance auditing and tool certification
- Deviation process: formal documented process for justified departures
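In practice those rules shape code like the following sketch (names and pool size are illustrative): fixed-width types from `<stdint.h>`, a statically allocated pool instead of malloc, and iteration instead of recursion.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define SAMPLE_POOL_SIZE 32u  /* capacity fixed at compile time */

/* Statically allocated storage: no malloc/free (Rule 21.3). */
static int16_t sample_pool[SAMPLE_POOL_SIZE];
static size_t  sample_count = 0u;

static bool sample_push(int16_t s)
{
    if (sample_count >= SAMPLE_POOL_SIZE) {
        return false;         /* explicit failure, not heap growth */
    }
    sample_pool[sample_count] = s;
    sample_count++;
    return true;
}

/* Iterative, so stack depth is known at compile time (Rule 17.2). */
static int32_t sample_sum(void)
{
    int32_t sum = 0;
    for (size_t i = 0u; i < sample_count; i++) {
        sum += (int32_t)sample_pool[i];
    }
    return sum;
}
```

The behaviour of this code is identical on a 16-bit MCU and a 64-bit workstation, which is precisely what the fixed-width mandate buys you.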
CERT C Coding Standard (Security-by-default)
- Rooted in secure software research: CVE (Common Vulnerabilities and Exposures) patterns and the CWE (Common Weakness Enumeration) taxonomy
- Focuses on eliminating exploitable vulnerability classes
- Recommendation-based with risk severity (L1-L3) ratings
- Prioritises input validation and boundary checking
- Addresses concurrency and signal safety (POSIX context)
- More permissive on dynamic memory, but with strict rules for its use
- Free to use, openly published, constantly updated
- Maps to CWE identifiers for traceability to vulnerability databases
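CERT’s emphasis on input validation is easiest to see in code that consumes an untrusted, length-prefixed message. The message layout below is hypothetical ([1-byte len][payload]); the pattern of validating a wire-supplied length against what was actually received, in the spirit of rules like ARR38-C, is the point:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy a length-prefixed payload out of an untrusted buffer.
 * The declared length comes from the wire, so it is validated
 * against the bytes actually received and against the output
 * capacity before any memcpy. */
static bool read_payload(const uint8_t *msg, size_t msg_len,
                         uint8_t *out, size_t out_cap, size_t *out_len)
{
    if ((msg == NULL) || (msg_len < 1u)) {
        return false;                 /* too short to hold the header */
    }
    size_t declared = (size_t)msg[0];
    if ((declared > (msg_len - 1u)) || (declared > out_cap)) {
        return false;                 /* lies about its length, or too big */
    }
    memcpy(out, &msg[1], declared);
    *out_len = declared;
    return true;
}
```

Note that the dangerous comparison is done entirely in `size_t` after the guard `msg_len < 1u`, so `msg_len - 1u` cannot wrap.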
The key distinction: MISRA asks
“will this code behave correctly on the target?”
while CERT asks
“can an adversary abuse this code?”
These are related but not the same question. A MISRA-compliant pacemaker firmware is deterministic and portable, but if it has a buffer overread in a BLE command parser, CERT would flag it as exploitable.
A CERT-hardened connected gateway might handle all attack surfaces correctly but contain integer types that invoke implementation-defined behaviour on a 16-bit MCU, which MISRA would prohibit.
The pragmatic answer
Use both, layered.
MISRA C as the portability and safety baseline, especially for any component that must pass a functional safety assessment.
CERT C as the security review lens, especially for any component that handles external inputs:
- network packets
- file parsing
- command interfaces.
Where the two overlap, you get redundant safeguards and extra coverage. Where they diverge, document your risk decision explicitly.
Teams building under IEC 62443 (industrial security) or UNECE WP.29 (automotive cybersecurity regulation) increasingly need to demonstrate compliance with both families simultaneously. The tools have caught up: modern static analysers like
- PC-lint Plus
- Polyspace
- Helix QAC
- CodeSonar
support simultaneous MISRA and CERT rule sets with unified dashboards.
Industry values and testing for security and portability
Knowing the standards is necessary but not sufficient. The real question is how engineering organisations internalise them and how they verify compliance in a way that is meaningful rather than performative.
The compliance theatre problem
A deviation log is a formal record of every place where your code intentionally violates a rule from a coding standard (like MISRA C or CERT C), along with a justification for why that violation is acceptable.
It’s not a workaround; it is part of the official compliance process. Deviations should be documented with specific justification and reviewed individually. If your deviation log is longer than your compliance summary, something has gone wrong.
A familiar pattern in the embedded industry: a project adopts MISRA, runs a static analyser at the end of development, gets thousands of violations, and suppresses them all with blanket deviation justifications before the audit. The code ships. The certificate is granted. Nothing about the risk profile has changed.
Testing strategies that complement static analysis
Static analysis finds structural violations; it cannot observe runtime behaviour. A few approaches that work well alongside it:
Mutation testing verifies that your test suite actually detects behavioural changes. A test that doesn’t kill a mutant isn’t testing anything. Tools like mull (LLVM-based) work with embedded cross-compilation pipelines.
Sanitisers on host-side unit tests - ASAN, UBSAN, and MSAN on your portable logic layer catch undefined behaviour at runtime, before the hardware ever sees the code.
UBSAN is especially useful for the integer overflow and type aliasing issues that MISRA rules are designed to prevent.
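What a sanitiser-clean portable layer looks like can be sketched with checked arithmetic. The naive `a + b` on signed 32-bit values is undefined behaviour on overflow, which `-fsanitize=undefined` flags at runtime; guarding first keeps host-side tests clean. The helper name and build line are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Checked signed addition.  Build host-side tests with, e.g.:
 *   cc -g -fsanitize=address,undefined tests.c
 * The guard runs before the addition, so UBSAN never sees an
 * overflowing expression. */
static bool add_i32_checked(int32_t a, int32_t b, int32_t *out)
{
    if (((b > 0) && (a > INT32_MAX - b)) ||
        ((b < 0) && (a < INT32_MIN - b))) {
        return false; /* would overflow: report instead of computing */
    }
    *out = a + b;
    return true;
}
```

The same guard satisfies the MISRA essential-type rules and CERT INT32-C, a concrete example of the two standards overlapping.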
Fuzz testing command interfaces - any code that parses external input (AT commands, CAN frames, Modbus PDUs, BLE characteristics) should be fuzz-tested.
LibFuzzer and AFL++ can be integrated against your protocol layer running on a host simulator.
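A libFuzzer harness for a protocol layer is only a few lines. The parser below is a stand-in so the sketch is self-contained (your real `protocol_parse` would be linked in instead, and the 0xA5 framing byte is invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the real parser under test. */
static int protocol_parse(const uint8_t *buf, size_t len)
{
    return ((len > 0u) && (buf[0] == 0xA5u)) ? 0 : -1; /* hypothetical framing */
}

/* libFuzzer entry point.  Build with clang:
 *   clang -g -fsanitize=fuzzer,address harness.c
 * AFL++ can drive the same function through its libFuzzer
 * compatibility mode.  Crashes and sanitiser reports are the
 * signal; the return value is ignored by the fuzzer. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    (void)protocol_parse(data, size);
    return 0;
}
```

Because the harness only needs the portable protocol layer, it runs on the host at millions of executions per hour, with no target hardware involved.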
**Architecture-first approach**
Traditional embedded V-model development defers security and portability testing to verification phases.
By that point, architectural decisions that create portability problems are already baked in. Make type choices, integer promotion decisions, and platform assumptions part of the design review, not the code review.
Failures, liabilities, and the price of non-compliance
The standards community presents MISRA and CERT as good professional practice. Industry regulators increasingly present them as legal instruments. Understanding both framings changes how seriously engineers and management take them.
Ariane 5, Flight 501 (1996)
The inertial reference system reused from Ariane 4 attempted to convert a 64-bit floating-point velocity value to a 16-bit signed integer. The value was outside the representable range. The conversion triggered a hardware exception. The backup SRI failed identically. The rocket self-destructed 37 seconds after launch.