The method can't possibly be nulling out the arguments before using them. According to Rich Hickey, this is actually an artifact of the way the bytecode is generated. The purpose of this code is to null out the arguments to prevent 'holding the head' in the case of a recursive function call. Because of the way the bytecode is generated, the decompiler can't accurately tell when the nulling out takes place; it is definitely not happening at the beginning of the method.
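To make 'holding the head' concrete, here is a rough Java analogue; the Cell type and the count function are invented for illustration and stand in for a lazy sequence being consumed recursively. If each frame keeps its reference to the cell it was given, the outermost frame pins the head, and through it every cell already consumed, until the entire recursion unwinds; clearing the reference before the recursive call is exactly the effect the generated code is after.
// Hypothetical linked cell, standing in for a lazy sequence node.
final class Cell {
    final Object value;
    Cell rest;                 // imagine this field being filled in on demand
    Cell(Object value) { this.value = value; }
}

final class HoldingTheHead {
    static long count(Cell seq) {
        if (seq == null) return 0;
        Cell rest = seq.rest;
        seq = null;            // drop this frame's reference before recursing;
                               // without this, the cell (and everything it chains
                               // to) may stay reachable until the frame is popped
        return 1 + count(rest);
    }
}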
The part about the decompiler seems wrong to me, though. If the JVM can tell when things actually take place, why shouldn't the decompiler be able to do the same thing? It's all in the code, right? Here is roughly what gets emitted for the call:
push const__3
push x
push y
push more
push null
dup
store x
dup
store y
store more
call invoke
What this does is load the values of x, y, and more onto the runtime evaluation stack, null out the locals x, y, and more, and then proceed with a call to const__3.invoke() [I've simplified the logic]. A Java decompilation doesn't have an exact translation for this, because it would require names for each of those logical stack locations that were pushed. A better translation would be something akin to this:
t1 = x;
t2 = y;
t3 = more;
x = null;
y = null;
more = null;
const__3.invoke(t1, t2, t3);
except that in the actual bytecode, the stack slots standing in for t1, t2, and t3 (in the current scope) are guaranteed to "go away" as soon as the invocation is dispatched.
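For reference, here is roughly what that fragment looks like as a complete, compilable method; the class name, the method name, and the static const__3 field are invented, and const__3 is assumed to hold a clojure.lang.IFn. The one remaining difference from the real bytecode is the point just made: t1, t2, and t3 are ordinary local slots here and stay live until the method returns, whereas the operand-stack slots they stand in for are consumed by the call itself.
import clojure.lang.IFn;

// Invented class and field names; only the shape of the method body matters.
public class LocalsClearingSketch {
    static Object const__3;   // in generated code this would be set up as a class constant

    public static Object example(Object x, Object y, Object more) {
        Object t1 = x;
        Object t2 = y;
        Object t3 = more;
        // Clear the parameter slots so this frame no longer pins the arguments.
        x = null;
        y = null;
        more = null;
        // Dispatch; past this point the callee holds the values (plus t1..t3, see above).
        return ((IFn) const__3).invoke(t1, t2, t3);
    }
}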
This kind of thing matters most in debug scenarios, where the underlying JVM will likely keep variables alive on the stack as long as possible rather than restricting them to their live ranges, artificially extending the lifetime of the objects held in those variables. Though I suppose it is also possible that a JVM might not perform sufficiently aggressive liveness analysis even outside of debugging, and would over-extend variable lifetimes inappropriately.
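As a sketch of that scenario outside of generated code (all names here are invented): if a method keeps a large object in a local it has finished with, a JVM running under a debugger, or one that does not trim live ranges, will keep that object reachable until the frame is popped; clearing the local by hand has the same effect the Clojure compiler engineers in bytecode.
import java.util.Iterator;

final class DebugLifetimeSketch {
    static void drain(Iterator<byte[]> chunks) {
        byte[] head = chunks.next();
        handle(head);            // last real use of 'head'
        head = null;             // without this, a debug-mode JVM (or one that does
                                 // not trim live ranges) may keep the buffer reachable
                                 // for the entire loop below
        while (chunks.hasNext()) {
            handle(chunks.next());
        }
    }

    static void handle(byte[] chunk) {
        // placeholder for whatever consumes a chunk
    }
}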