Even if you don't run HDFS and YARN, you aren't escaping Hadoop. And if some configuration goes wrong, you'll probably need to dig into the Hadoop conf files anyway.
The original comment was about the mass of libraries that Hadoop pulls in, and Spark doesn't let you leave that mess behind. If you try to dockerize Spark, you'll still end up with ~300 MB images full of JARs that came from who knows where.
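You can see this for yourself with a minimal image build. A rough sketch (the Spark version and base image here are just illustrative placeholders, not a recommended setup):

    # Minimal sketch: package a prebuilt Spark distribution into an image.
    # Version and base image are placeholders.
    FROM eclipse-temurin:11-jre
    ADD https://archive.apache.org/dist/spark/spark-3.5.1/spark-3.5.1-bin-hadoop3.tgz /tmp/spark.tgz
    RUN tar -xzf /tmp/spark.tgz -C /opt \
     && mv /opt/spark-3.5.1-bin-hadoop3 /opt/spark \
     && rm /tmp/spark.tgz
    ENV SPARK_HOME=/opt/spark

Build it and count what ships in the jars directory:

    docker build -t spark-sketch .
    docker run --rm spark-sketch sh -c 'ls /opt/spark/jars | wc -l; du -sh /opt/spark/jars'

On a recent Spark 3.x distribution that directory alone holds several hundred JARs, Hadoop client libraries among them, before you've added a single line of your own code.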