I have tried feeding obfuscated code from some of the competitions to a few of the foundation models.
People might think the answers would already be in the training data, but I didn't find that to be the case, at least in my small experiments.
The models did try to guess what the code does. They would say things like, "It seems to be trying to print some message to the console," but I wasn't able to get full solutions.
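For a sense of what the inputs look like, here is a toy snippet in that spirit (made up for illustration, not an actual competition entry), where the printed message is XOR-encoded so it never appears literally in the source:

```c
#include <stdio.h>

/* Toy example, not a real competition entry: the string is stored
   XOR-encoded with 0x2a and decoded one byte at a time inside the
   loop condition before being printed. */
int main(void) {
    char s[] = {0x42, 0x4f, 0x46, 0x46, 0x45, 0x2a}, *p = s;
    while (*p ^ 0x2a ? putchar(*p++ ^ 0x2a) : 0);
    return 0 & putchar('\n');
}
```

Even at this tiny scale, you can't read the message off the source; you have to actually trace the XOR, which is roughly the step the models kept stopping short of.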
It's definitely worth more research, not just as a curiosity: these kinds of problems are good proxies for other tasks, and they make excellent benchmarks for LLMs in particular.