I suppose that, if I'm asked "What is something that makes homoiconic languages special?" and the answer can't be that it makes macros easy, then I would say it's that it's easy to write an interpreter for the language in the language itself, because the data structures that form the syntactic structure of the language are themselves first-class objects. That's why the interpreter in the appendix of the LISP 1.5 manual can be so short. That's also why the EVAL function takes as its parameter list structure (nested lists of atoms), not a string as in non-Lisp dynamically typed languages: since a Lisp program is itself list structure, that's the form it takes inside the interpreter. The first Lisp compiler [0] isn't very long, either. The equivalent eval function in a non-homoiconic language requires a lexer, a parser, special data structures to represent the syntax, and functions to manipulate those data structures.
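To make that concrete, here is a minimal sketch (in Python, since the program-as-nested-lists idea translates directly) of why eval is so short when the program is already list structure: evaluation is just a recursive walk over lists, with no lexer or parser in sight. The names here (`tiny_eval`, `global_env`) and the handful of special forms chosen are mine for illustration, not taken from the LISP 1.5 manual.

```python
import operator

def tiny_eval(expr, env):
    """Evaluate a tiny Lisp-like expression represented as nested Python lists."""
    if isinstance(expr, str):          # an atom: look it up in the environment
        return env[expr]
    if not isinstance(expr, list):     # a self-evaluating literal (e.g. a number)
        return expr
    op, *args = expr
    if op == "quote":                  # (quote x) -> x, unevaluated
        return args[0]
    if op == "if":                     # (if test then else)
        test, then, alt = args
        return tiny_eval(then if tiny_eval(test, env) else alt, env)
    if op == "lambda":                 # (lambda (params ...) body)
        params, body = args
        return lambda *vals: tiny_eval(body, {**env, **dict(zip(params, vals))})
    # function application: evaluate operator and operands, then apply
    fn = tiny_eval(op, env)
    return fn(*(tiny_eval(a, env) for a in args))

global_env = {"+": operator.add, "*": operator.mul, "<": operator.lt}

# ((lambda (x) (+ x 1)) 41)  ->  42
print(tiny_eval([["lambda", ["x"], ["+", "x", 1]], 41], global_env))
```

A real Lisp-in-Lisp is shorter still, since `car`, `cdr`, and `atom` already do the destructuring that the `isinstance` checks do here.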
Scheme was the first Lisp with lexical scoping of variables (ALGOL had lexical scope earlier, but Scheme was the first to combine it with first-class procedures), and after Scheme was published in 1975, most new languages have had lexical scope; most languages in wide use now (Java, C#, JavaScript, Rust) have it. I think the qualities you describe make it easier to experiment with (Lisp-like) languages and language implementations, which is why that combination appeared in a Lisp first. Also, I think the concept of a "closure" first gained currency in discussions around Lisp and the lambda calculus (maybe as part of the same line of work that led to lexical scope). Ditto continuations and continuation-passing style. Also, the interpreter you refer to in Appendix A of the LISP 1.5 manual had a subtle problem, or area for improvement, around functions appearing as arguments to other functions (the "funarg problem"), and the Lisp community's discovery of that and its solution, lexical closures, seem to have been incorporated into other languages, probably first the early functional languages like Miranda, then on to JavaScript, etc.
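A hedged sketch of what the funarg problem looks like, using Python (which is lexically scoped, so the closure behaves the way Scheme later made standard). Under the dynamic scoping of the LISP 1.5 interpreter, a function passed as an argument looks up free variables in the caller's bindings at call time; a lexical closure instead remembers the bindings in force where it was defined. The function names here are invented for the example.

```python
def make_adder(n):
    # 'add' closes over this particular n (lexical scoping)
    def add(x):
        return x + n
    return add

def apply_twice(f, n):
    # under dynamic scoping, this local n would shadow the closure's n
    return f(f(n))

add10 = make_adder(10)
print(apply_twice(add10, 1))   # lexical scope: (1 + 10) + 10 = 21
# Under dynamic scoping, the free n inside 'add' would resolve to
# apply_twice's n instead, giving (1 + 1) + 1 = 3.
```

The fix that Lisp arrived at, pairing a function with the environment of its definition, is exactly what "closure" means in the languages that inherited it.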