I think it's too weak a paper for me to put out there, and I haven't even applied the minor corrections from the reviews yet.
If you want to take a peek at the case studies, I've blogged about my butchering of aln across C standard libraries and operating systems [1]; there is also widberg's outstanding write-up of their FUEL decompilation project [2], which uses ghidra-delinker-extension as part of the magic.
If you want to read a paper on the technique, there is the one for Ramblr [3], which I only became aware of after the reviews came back.
I haven't been fully keeping up, but has anyone else noticed the same with Cursor? It could be because I use Claude models (GPT-5 was too slow and gave iffy results). Funnily enough, Codex has been really impressive recently, and I've swapped over to it almost completely.