It would just be a useful reminder of that fact. Remember: you're trying to sell voting to someone who doesn't normally vote. It's easier to sell it as being a one-off thing versus sell them on voting in all future elections.
> It's easier to sell it as being a one-off thing versus sell them on voting in all future elections.
So a promise to permanently and irrevocably change the country? If it were truly a one-off, that is what it would have to be, and that is not possible through normal legal mechanisms in the USA.
> Do you guys not feel shame that a person with that character and that track record runs your country?
The Donald Trump that your media reports on isn't the real Donald Trump, or at the very least the one his supporters see.
Example: Trump talks to a group of people who normally don't vote, and asks them to make an exception and vote this time, noting that this will be the last time he runs, and so they won't need to vote for him again. The media then takes "you won't need to vote for me again" out of context and uses it to claim that Trump will end elections in the US. People who only listen to the media see one thing, and his supporters (who are aware of the context) see another.
The man has said enough on the record to disqualify him (e.g. demanding that his political enemies be shot after tribunals). If people still vote for him, it means they value democratic principles and the rule of law so little that those statements didn't matter, because the excuse that he didn't mean it that way doesn't fly after January 6th; at the very least, you can't bet that he isn't daring enough to follow through.
A closely related technique for debugging optimization passes is that of "optimization fuel". Each rewrite decreases the fuel by one, and when the fuel is gone no more rewrites happen. You can then perform binary search on the optimization fuel to find a specific rewrite instance that breaks things.
Yes, LLVM has a flag for this (`-opt-bisect-limit=N`, roughly "run only the first N optimization steps") that gets used with binary search in exactly this way.
A global optimization fuel that worked at finer granularity would be even more precise but you'd have to have the compiler run single-threaded to make sure the numbering always matches. At least in the Go compiler, we compile different functions in different threads, so that fouls up any global numbering.
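The fuel-plus-bisection idea above can be sketched as follows. This is a toy model, not any real compiler's code: `optimize` stands in for a pass pipeline that performs numbered rewrites until the fuel runs out, with one rewrite deliberately "buggy", and `bisectFuel` binary-searches for the smallest fuel value whose run breaks.

```go
package main

import "fmt"

// optimize stands in for a compiler's rewrite pipeline: it applies up
// to ten numbered rewrites, stopping early when the fuel runs out.
// Rewrite #6 is deliberately "buggy" for the demonstration.
func optimize(fuel int) (applied int, broken bool) {
	for i := 0; i < 10 && fuel > 0; i++ {
		fuel--
		applied++
		if i == 6 {
			broken = true // this rewrite "miscompiles" the program
		}
	}
	return applied, broken
}

// bisectFuel finds the smallest fuel value whose run is broken; the
// last rewrite that run performed is the culprit.
func bisectFuel(maxFuel int) int {
	lo, hi := 0, maxFuel // invariant: lo's run is good, hi's run is broken
	for lo+1 < hi {
		mid := (lo + hi) / 2
		if _, broken := optimize(mid); broken {
			hi = mid
		} else {
			lo = mid
		}
	}
	return hi
}

func main() {
	fmt.Println("first breaking fuel:", bisectFuel(10)) // rewrite #6 needs fuel 7
}
```

Note the search only takes log(N) compiles, which is what makes this practical even with thousands of rewrites per build.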
The state-of-the-art in refcounting[1] greatly improves the barrier situation over a naïve implementation: no read barrier, and the write barrier only uses atomics when mutating an (apparently) unmodified field in an old generation object.
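A rough sketch of the kind of write barrier described above (the struct layout, field names, and global buffer are all made up for illustration; real implementations pack these bits into an object header and keep per-thread buffers):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// object is a toy heap object. A real runtime would pack the
// generation and "logged" bits into the object header.
type object struct {
	gen    int32  // 0 = young, 1 = old generation
	logged uint32 // set once the first mutation has been recorded
	field  *object
}

// modBuffer is the remembered set of mutated old objects; a real
// runtime would keep one buffer per mutator thread.
var modBuffer []*object

// writeBarrier follows the scheme described above: plain stores for
// young or already-logged objects, and an atomic CAS only on the
// first observed mutation of an old-generation object.
func writeBarrier(obj *object, newVal *object) {
	if obj.gen == 1 && atomic.LoadUint32(&obj.logged) == 0 {
		// Slow path: the CAS ensures only one mutator logs the object.
		if atomic.CompareAndSwapUint32(&obj.logged, 0, 1) {
			modBuffer = append(modBuffer, obj)
		}
	}
	obj.field = newVal
}

func main() {
	old := &object{gen: 1}
	writeBarrier(old, &object{})             // slow path: logged once
	writeBarrier(old, &object{})             // fast path: CAS skipped
	writeBarrier(&object{gen: 0}, &object{}) // young object: fast path
	fmt.Println("logged objects:", len(modBuffer)) // 1
}
```

The point of the design is that the expensive atomic read-modify-write happens at most once per old object per collection cycle; the common case is a flag check and a plain store.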
Certainly improvements in refcounting can bring its performance closer to that of tracing (and there are constant improvements in tracing, too), but one of the main reasons some languages choose refcounting is its simplicity of implementation, and these improvements bring the complexity of tracing and refcounting closer together. Currently, tracing leads in performance; if that changes, we'll see a shift in algorithm for languages that depend heavily on GC performance.
BTW, our GC team investigated the implementation in that paper, and it is still significantly behind tracing in too many relevant workloads to be considered for adoption in production.
Lisp-style macros are actually difficult to write because of the infinite evaluation of the language. I was able to write a quasiquote package for myself to help with that, though.