
Well, the problem comes with how the government uses the data it has on you.

Say, for example, I have... 10,000 points of data about you. 10,000 HN posts + Facebook posts + Reddit posts, etc. You've said a lot during your Internet career.

I can take what you've written and form a profile of you from these words. They're just words; you've done nothing wrong. Freedom of speech being what it is, let's even say you haven't said anything particularly inflammatory - nothing threatening, nothing dangerous. You're just an average guy.

Now, I take these words and compare them with words that other folks have said. Using some fancy technology, I can group you with people who say things kind of like what you say. I can group everyone this way, into clouds.
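
For the curious, here's a minimal sketch of what that "fancy technology" might look like, assuming TF-IDF text vectors and k-means clustering - the posts and cluster count are invented for illustration, not anyone's actual system:

    # Hypothetical illustration: cluster authors by what they write about.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    posts = [
        "I love hiking and open source software",       # author 0
        "open source projects and weekend hikes",       # author 1
        "strong encryption is a civil liberties issue", # author 2
    ]
    vectors = TfidfVectorizer().fit_transform(posts)  # one vector per author
    clouds = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
    print(clouds)  # authors with the same label landed in the same "cloud"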

And here's where things get sticky - I can use these clouds of people to look at folks who are "similar" to known terrorists. Folks who, themselves, have done nothing wrong, but who "look" similar to people I know are bad. Let's say, for some reason, you're grouped with someone who has known ties to I dunno... the militant branch of the KKK or whatever. Now you're suddenly interesting to the authorities, even if you've done literally nothing wrong. Or have you?
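
To make the guilt-by-association step concrete, here's a toy sketch; the profile vectors and the 0.9 threshold are made up for illustration:

    import numpy as np

    # Hypothetical profile vectors (in practice, derived from your words).
    profiles = np.array([
        [0.9, 0.1, 0.0],  # person 0: known bad actor
        [0.8, 0.2, 0.1],  # person 1: innocent, but writes about similar topics
        [0.0, 0.1, 0.9],  # person 2: innocent and dissimilar
    ])

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    for i, p in enumerate(profiles[1:], start=1):
        if cosine(p, profiles[0]) > 0.9:
            print(f"person {i} flagged - merely resembles a bad actor")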

If you're grouped via my super special technology with a terrorist, maybe this puts you on a no-fly list. Maybe this gets your security clearance denied. Maybe you get "randomly" audited. World-ending? No. Completely unwarranted and totally annoying? Yes.

On a philosophical level, this is all kinds of against the freedoms we expect to have in America, and that sucks - but let's be more practical. People are screaming about the sky falling and the world ending because the NSA knows you like Japanese porn or whatever, but frankly, that's not a big deal. What's more likely is that you're going to be annoyed and inconvenienced, and there's not a lot of reason for it.

It kind of sucks, and I guess you have to decide for yourself if you're okay with what might happen to someone if they turn out to be a false positive. For me, I don't so much mind the data collection; I just want it to be fully exposed. I want Google, Microsoft, et al. to have to say publicly when they comply with a request for data, and in a perfect world, I'd want these companies to be required to notify the people whose data they hand over. Does it make it more difficult to catch bad guys? Yes. Does it provide a level of transparency that a representative democracy requires to function properly? Yes.

If they collect too much data (or too little!) I want to be able to vote someone out of office. Just saying, "attacks haven't happened so therefore what we're doing is working" isn't something I can buy. Is that too much to ask?



> Now you're suddenly interesting to the authorities, even if you've done literally nothing wrong. Or have you?

This argument could be taken to mean that the fear isn't that information is being collected, aggregated, and analyzed, but that the algorithms will be wrong and the results will be misinterpreted or misused.

As technology advances, both of these problems will diminish.

Furthermore, if I came up with some kind of math that could determine with a high degree of certainty that someone is a (terrorist/communist/pedophile/father raper) given their online activities, the authorities would be negligent not to follow up on that information and determine if it's valid or not.

Much like spam, verified false positives help train the filters further.
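
In code, that feedback loop might look something like this toy naive Bayes sketch - the messages and labels are invented, and it assumes scikit-learn:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts  = ["buy cheap pills now", "meeting notes attached", "cheap flights now"]
    labels = [1, 0, 1]  # 1 = flagged, 0 = fine
    msg    = "cheap conference flights for the team"

    vec = CountVectorizer().fit(texts + [msg])
    clf = MultinomialNB().fit(vec.transform(texts), labels)
    print(clf.predict(vec.transform([msg]))[0])  # 1: a false positive

    # A human verifies the flag was wrong; the corrected label is fed back.
    texts.append(msg); labels.append(0)
    clf = MultinomialNB().fit(vec.transform(texts), labels)
    print(clf.predict(vec.transform([msg]))[0])  # 0: the filter has learned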

That leaves only the abuse argument... and honestly, I don't see /potential/ abuse as an argument against any kind of technological advance. We have ways of dealing with abuse.


I'd argue that we don't have any good ways of dealing with abuse/misuse, and that's precisely the problem with such a system.

And let's not forget the "verified false positives" are counted in lives ruined/ended. Could we do it? Yeah, no one's denying that. But if we throw out ethics in the name of technological progress, we could do a lot of great things.

There's not really a deterministic line beyond which a person is "certainly" a threat. At the end of the day, a person has to decide what is and isn't a threat; all the computer can do is help that decision along. A person pulls the trigger, and as we all know, people can really suck sometimes.


"As technology advances, both of these problems will reduce more and more."

For the first problem (that algorithms' results will be wrong), you're assuming that the technology to avoid false positives will progress faster than the technology to collect more data. On what do you base that assumption?
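
There's also a base-rate problem lurking here. A back-of-the-envelope Bayes calculation, with made-up numbers, shows why even a very accurate classifier hunting a rare group drowns in false positives:

    # Made-up numbers: a 99%-accurate test hunting a one-in-100,000 group.
    base_rate   = 1 / 100_000  # P(bad actor)
    sensitivity = 0.99         # P(flagged | bad actor)
    specificity = 0.99         # P(not flagged | innocent)

    p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    p_bad_given_flag = sensitivity * base_rate / p_flagged
    print(f"{p_bad_given_flag:.2%}")  # ~0.10%: over 99.9% of flags are innocent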

The second problem (that the results will be misinterpreted or misused) isn't a technological problem at all, so how will the advancement of technology reduce that problem?


Another thing that bothers the CRAP out of me is that now I feel like I can't say how I feel with friends, because I know the government is monitoring me. It's not that I'm a terrorist or a criminal, but this feels /exactly/ like living in some unfree communist piece of shit country where you cannot speak how you feel without facing scrutiny from informants or Stasi-style police - unless it's in the comfort of your four walls with your close family and friends. This is tyranny, make no mistake about it.


You can, you just can't do it over Facebook without employing client-side encryption.

Something your friends probably don't give a crap about and won't do. I know mine wouldn't if I tried to get them to - it's just too hard.
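
For what it's worth, the client-side part can be as simple as this sketch using the Python `cryptography` package - getting the key to your friends out-of-band is the hard part this leaves out:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # must be shared out-of-band with friends
    box = Fernet(key)

    ciphertext = box.encrypt(b"what I actually think")
    # Paste the ciphertext into Facebook; only key holders can recover it:
    print(box.decrypt(ciphertext).decode())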



