I feel like there should be a much stronger effort to solve optimization problems with ML-enabled guesses. It's arguably the most important problem to solve for improving ML itself.
Humans, for example, can provide extremely strong guesses for traveling salesman problems just by eyeballing them, without doing any calculations. If we could use ML to take a problem and guess a reformulation with 95% of the search space cut out, we would be in a much stronger place. My gut says this should be theoretically possible, and it's probably the mechanism biological learning systems use under the hood, to such great effect that it's OK to use greedy, less efficient methods for the last mile of optimization without something like backprop.
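To make that concrete, here's a minimal sketch of the guess-then-prune idea on Euclidean TSP. The edge_score function is a hypothetical stand-in for a trained model; everything else is plain Python. The guess keeps only the k most promising edges per city, and a cheap greedy pass handles the last mile on the sparsified graph:

    import math, random

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def edge_score(cities, i, j):
        # Stand-in for a learned model: shorter edges score higher. A real
        # model would be trained to predict P(edge is in the optimal tour).
        return -dist(cities[i], cities[j])

    def prune(cities, k=5):
        # The "95% of the search space cut out" step: from O(n^2) candidate
        # edges down to O(n*k), keeping the top-k scoring neighbors per city.
        n = len(cities)
        return {i: sorted((j for j in range(n) if j != i),
                          key=lambda j: -edge_score(cities, i, j))[:k]
                for i in range(n)}

    def greedy_tour(cities, cand):
        # Last-mile optimization stays cheap and greedy once the candidate
        # sets are good: hop to the best unvisited candidate, falling back
        # to the full city set only if the pruning was too aggressive.
        n, tour, seen = len(cities), [0], {0}
        while len(tour) < n:
            cur = tour[-1]
            nxt = next((j for j in cand[cur] if j not in seen), None)
            if nxt is None:
                nxt = min((j for j in range(n) if j not in seen),
                          key=lambda j: dist(cities[cur], cities[j]))
            tour.append(nxt)
            seen.add(nxt)
        return tour

    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(50)]
    tour = greedy_tour(cities, prune(cities))
    length = sum(dist(cities[tour[i]], cities[tour[(i + 1) % 50]])
                 for i in range(50))
    print(f"tour length: {length:.3f}")

This is roughly how candidate-edge lists in real TSP solvers (e.g. LKH) already work, except that the scoring there is hand-designed rather than learned.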
Humans can mostly only make these kinds of guesses for traveling salesman problems embedded in 2D Euclidean space. And we already have pretty good heuristics for those cases to kickstart a solver, too. Give a human a general graph with arbitrary edge weights, and they'll be dumbfounded.
(I don't think you even have to go all the way to an arbitrary graph; I suspect a decent-sized graph with edge lengths from a 3D Euclidean embedding will already confuse humans. Definitely once you get to 4D.)
My point is not that we should mimic humans. My point is that there are probably learnable but inexplicable heuristics for guiding optimization in general, which a neural net could pick up just from the problem formulation alone.
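For what it's worth, even a trivially simple learner picks up this kind of signal from the formulation alone. A toy sketch, with hand-crafted edge features and numpy logistic regression standing in for the neural net, and exact labels brute-forced from tiny instances:

    import itertools, math, random
    import numpy as np

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def optimal_tour_edges(cities):
        # Brute force, only viable for tiny n, but it gives exact labels.
        n = len(cities)
        best = min(itertools.permutations(range(1, n)),
                   key=lambda p: sum(dist(cities[a], cities[b])
                                     for a, b in zip((0,) + p, p + (0,))))
        return {frozenset(e) for e in zip((0,) + best, best + (0,))}

    def features(cities, i, j):
        # Hand-crafted features; a neural net would learn its own.
        d = dist(cities[i], cities[j])
        rank = sum(dist(cities[i], cities[k]) < d
                   for k in range(len(cities)) if k != i)
        return [d, rank]

    random.seed(0)
    X, y = [], []
    for _ in range(150):  # 150 random 7-city instances
        cities = [(random.random(), random.random()) for _ in range(7)]
        opt = optimal_tour_edges(cities)
        for i in range(7):
            for j in range(i + 1, 7):
                X.append(features(cities, i, j))
                y.append(1.0 if frozenset((i, j)) in opt else 0.0)

    X, y = np.array(X), np.array(y)
    X = (X - X.mean(0)) / X.std(0)  # standardize features
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(500):  # batch gradient descent on logistic loss
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= 0.1 * X.T @ (p - y) / len(y)
        b -= 0.1 * (p - y).mean()

    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    print(f"train accuracy: {((p > 0.5) == (y > 0.5)).mean():.2%}")

The interesting question is whether something like this keeps working when the features have to come from an arbitrary formulation rather than a 2D embedding, which is exactly your objection.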