A couple of reasons, both stemming from the fact that data memory is scarce on embedded systems. One is that linked lists have high memory overhead (every entry carries a pointer); the other is that making many small allocations can cause memory fragmentation, especially with the kinds of allocators commonly used in embedded systems.
Also, if CPU time is scarce, following pointers is slower than computing offsets, but that's usually not an issue worth thinking about.
Oh ok. Counterpoint: if memory is already fragmented, allocating a single node can succeed where growing a vector would fail. But I suppose the point of the question is being able to discuss the issues involved.
You don't have to worry about dozens of other applications fragmenting your memory. You keep your threads neat and your memory neat, and you don't have to worry about failed allocations.
Another reason to keep memory contiguous is that fragmentation can be much more punishing, especially when latency is important. On large devices it doesn't matter much if you're under-utilizing the CPU while waiting for RAM, because the OS will do other things and then let you do a lot of work all at once. An embedded processor has nothing else to do.
I used to interview embedded engineers, and I was one for a very short time. I think the issue is that many embedded engineers are not good at articulating when a linked list is preferable to a contiguous data structure. The times when a linked list is better are getting increasingly rare as "embedded" increasingly means "includes a Raspberry Pi."
On devices with 1 GB of memory, the contiguous structures are usually better, but embedded developers often still reach for linked lists. On devices with 1 kB of memory, trained software engineers will scoff at linked lists and instead waste hundreds of bytes on large buffers sized for peak usage.
I'd say a linked list is preferable when you're writing a `malloc` implementation. It keeps the code small and simple to understand, you don't have any dynamic memory allocation available (because you're writing your system's allocator), and the size overhead is lower than the alternatives. Linked lists also aren't that bad on processors without a cache, since walking the list won't incur cache misses (every memory access is equally slow).
There are alternatives, of course. Your particular target might perform poorly with linked lists, so you might make your malloc use a different internal data structure. Or you might have plenty of flash but not much RAM, so you might trade code size for runtime overhead. Or... There are lots of alternatives, but a linked list is the simple default.