Sure it does. Failing to recognize someone at scale can very much impact their privacy. Holes in data are, in and of themselves, information.
For example, if an unrecognized face was logged on 47th street, then an unrecognized face was logged on 50th, then 52nd, then 55th (where a crime occurred), then back down to 47th, you could start to infer where to look for your suspect even though they had never opted in (see the rough sketch below).
Edit: Also, you are trusting that the system isn't storing markers for people it doesn't recognize. My concern would be that if I opt in later, suddenly all my past history becomes discoverable. That's a lot of trust to put in the hands of private firms (or the police), neither of which I'm comfortable with holding that information.
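To make the "holes are information" point concrete, here's a minimal sketch in Python. The log entries, field names, and times are entirely made up, not any real system's schema; the point is just that (time, place) pairs of *unmatched* faces already trace a route.

```python
from datetime import datetime

# Hypothetical log of sightings that failed to match the opted-in gallery.
# Even with no identity attached, the (time, place) pairs sketch a route.
unmatched_sightings = [
    {"ts": datetime(2023, 5, 1, 21, 5),  "location": "47th St"},
    {"ts": datetime(2023, 5, 1, 21, 12), "location": "50th St"},
    {"ts": datetime(2023, 5, 1, 21, 18), "location": "52nd St"},
    {"ts": datetime(2023, 5, 1, 21, 25), "location": "55th St"},  # where the crime occurred
    {"ts": datetime(2023, 5, 1, 21, 50), "location": "47th St"},
]

# Sort by timestamp and read off the inferred path of the "unknown" person.
route = [s["location"] for s in sorted(unmatched_sightings, key=lambda s: s["ts"])]
print(" -> ".join(route))
# 47th St -> 50th St -> 52nd St -> 55th St -> 47th St
```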
Excellently put. Also, just because a face didn't generate a facial ID hit doesn't mean they can't use ReID to track the descriptor vector.
In fact, if I were a not-quite-ethical user of facerec, I'd flag and bin every vector and image chip that failed to ID.
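Roughly what that binning enables, as a hedged sketch: this is not any real product's pipeline, the 512-dim embeddings are random stand-ins, and the similarity threshold is arbitrary. It just shows how a later opt-in enrollment could retroactively link every previously "unknown" sighting via cosine similarity on stored descriptor vectors.

```python
import numpy as np

def cosine_sim(query, gallery):
    """Cosine similarity between one embedding and a batch of stored embeddings."""
    query = query / np.linalg.norm(query)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return gallery @ query

# Hypothetical bin of descriptor vectors that failed to ID, kept with where/when
# they were captured (random stand-in data for illustration).
unmatched_vectors = np.random.randn(1000, 512).astype(np.float32)
unmatched_metadata = [{"ts": i, "camera": f"cam_{i % 7}"} for i in range(1000)]

def retroactive_hits(new_enrollment_vec, threshold=0.6):
    """When someone opts in later, sweep the bin of 'unknown' vectors and
    surface every past sighting that now matches their enrollment."""
    sims = cosine_sim(new_enrollment_vec, unmatched_vectors)
    return [unmatched_metadata[i] for i in np.where(sims >= threshold)[0]]

# A single opt-in suddenly makes prior history discoverable.
hits = retroactive_hits(np.random.randn(512).astype(np.float32))
```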
I work on facerec and ReID, but for deepfake detection and the like (blue-team stuff). Even so, deep ID technologies in general are very concerning to me.