Why is this scummy exactly? If a salesperson were trying to sell to you in a store, they would take your appearance and behavior into account to tailor the sale. There's nothing wrong with that. Why is it suddenly bad if a machine does it?
Because when you talk to a salesperson you know you're being looked at (and reciprocally you're looking at them), and human memory is limited so it's unlikely they will retain any "data" about you when the contact is finished.
Here, instead, there is no indication that you're being watched, analyzed and kept recorded for indefinite amounts of time.
Reminds me of a law here in Sweden and how car surveillance works on the bridge to Denmark. The law forbids the unnecessary registration of people, so in order to avoid breaking it the police have a live system in place where information about a car on the Danish side gets shown on a screen on the Swedish side, giving border and toll guards enough time to react. The whole thing is legal only because the system operates live and never stores any data, which would otherwise create an illegal register of personal information.
I assume that the data is being used for A/B testing on the display designs (we get 20% more attention from teenagers when the background is orange) - if that's the case, not very scummy.
If you are in public, you are being looked at; I do not understand your logic. When you go to a public place, there are already publicly accessible webcams that people use to track this kind of thing. I remember a thesis that used publicly accessible cams to try to track people and build up a database. I have always held the opinion that you lose privacy when you leave your house, since you are then in public, and public is the opposite of private, so to me it makes sense.
> I have always had the opinion you lose privacy when you leave your house
Privacy is not black and white.
There is a world of difference between someone seeing you for a moment as they pass you in the street and forgetting you a moment later, and automated systems that permanently record everything, analyse it, correlate it with other data sets, make it searchable, and ultimately make automated decisions or provide information that will be used by others to make judgements about the affected individuals, all without the knowledge or consent of those individuals and therefore without any sort of reciprocity.
The idea that you have no reasonable expectation of privacy in a public place dates from a time when you could also expect to pass through town in relative anonymity, go about your business without anyone but your neighbours and acquaintances being any the wiser, and would probably change those neighbours and acquaintances from time to time anyway so the only people who really knew much about you would be your chosen long-time friends and colleagues. I think it's safe to say that that boat sailed a while ago, and maybe what privacy means and how much of it we should expect or protect aren't the same in the 21st century.
Just because there is no expectation of privacy does not mean that a reasonable person would assume that their every action is being recorded in precise detail to be stored away forever by a third party.
A lot of things are technologically feasible, and in many cases can't realistically be prevented ahead of time, yet are still considered socially unacceptable or even made illegal. Just because we can do something, that doesn't mean we should. This principle has never been more relevant than in the use of technology.
What's technologically feasible is irrelevant to our moral expectations. It's technologically feasible to brain you with a club and steal your stuff, and has been for millennia.
Preventing the misuse of Blunt Instrument Technologies™ is literally what laws are for. Surveillance is just a club we don't have laws about yet, but should.
Well, your behavior and appearance isn't logged in some computer somewhere available for someone to look at whenever they want. Not to mention, face-to-face interaction means you know someone else is watching. This allows someone to do this without your knowledge.
Your reply is disingenuous. The problem is not that abuse is not possible in a human-driven system. Of course some gifted salespeople have incredible memories, hypnotic powers of persuasion, and so on. However, you must consider the following:
1) These people are rare in the general population, and demand for their time is likely to be incredibly high. Therefore, they cannot be deployed everywhere, unlike machines.
2) When confronted with a human being in a sales scenario, people have a chance to be on guard against potential manipulative behavior. When the sales scenario becomes ubiquitous and invisible, it is much harder for people to avoid being taken advantage of.
3) Ethics are not so absolute. Something that is only mildly bad at an individual level can have terrible results when thousands are doing it. (Littering, for instance, or illegal hunting/fishing.) This is known as a social trap, and it leads to negative outcomes for everyone involved.
>1) These people are rare in the general population, and demand for their time is likely to be incredibly high. Therefore, they cannot be deployed everywhere, unlike machines.
A temporary problem solved by natural selection, technological augmentation, and increasing incentives. Perfect performers in any profession are hard to come by. Ambitious people still strive to get there.
>2) When confronted with a human being in a sales scenario, people have a chance to be on guard against potential manipulative behavior. When the sales scenario becomes ubiquitous and invisible, it is much harder for people to avoid being taken advantage of.
Because people don't understand technology or sales. In your reality, people should be on guard all the time, because sales and marketing were already continuous even before hidden cameras. In actual reality, most people don't care that much about being sold to as long as the sale itself is not abusive.
> 3) Ethics are not so absolute. Something that is only mildly bad at an individual level can have terrible results when thousands are doing it. (Littering, for instance, or illegal hunting/fishing.) This is known as a social trap, and it leads to negative outcomes for everyone involved.
Sure, but that omits the necessary step of justifying this behavior as being either mildly bad on an individual level or terrible on a mass scale, much less both. It is neither.
Also, I would add item 0: advances in technology mean that surveillance devices will only become smaller, cheaper, and more connected over time. The future you fear so much is, in fact, inevitable.
You are applying binary "all or nothing" logic to the real world, which contains many more shades of grey.
It is true that technology (both social and digital) continues to progress, and that the genie can't be put back in the bottle once it escapes. However, you don't have to put it back in the bottle. Speed limits don't stop speeding, and laws against murder don't stop homicide. The legal and regulatory system exists not to fully prevent undesirable behavior, but rather to reduce it to a manageable level.
In short: I agree with one part of your premise. Technology will continue to evolve and will continue to challenge human society in this area. Unlike you, however, I don't believe that we have to roll over and accept the implications and consequences of unregulated privacy invasions, neuromarketing and whatnot.
I don't think that either, because I correctly recognize that in public, you do not have privacy, either de jure or de facto. Especially if you're not even wearing a burqa, which would today at least give you de jure privacy because it demonstrates intent.
I'm sure that in the future, we will also create cheaply available opaque faraday cages that you can roll around in if you wish. And that most people will not care to do so.
You do have privacy in public. Both the de jure "reasonable expectation of privacy" and the de facto privacies of anonymity, free association, and predictable rules of social engagement.
Well, I seem to have no trouble practicing all of those, so I know they are based on fact. Perhaps you don't actually understand what I'm talking about? Or maybe your experiences differ. Either way, telling me that the things I personally do are not being done is... not an argument.
>>the de jure "reasonable expectation of privacy"
> Does not protect your exposed face
Yeah, that's why it is "reasonable expectations" not "absolute enforcement."
In other words, as long as you are unaware of the surveillance, you are happy to pretend it doesn't exist? So where's the problem? Just don't click on links like the OP.
Eidetic memory and follows you everywhere and can transfer all those memories perfectly to any number of other people? Yeah. It's like super-stalking and it's obviously horrible.
Stalking per se is mostly only illegal because it becomes harassment and bothers the victim. This kind of monitoring is entirely unobtrusive. As the response to the original tweet illustrates, most people aren't even aware that it is happening.
The information is being used to conduct asymmetric psychological warfare. The notion that it's harmless as long as it is never outright abused (where we define abuse as use for anything other than its intended purpose) is false.
Being subjected to constant sensory input and trickery from dozens of teams of experts on consumer psychology is bad enough when they haven't also been stalking and recording your every move.
Caveat emptor becomes an absurd position when the power imbalance is so great. Massive data collection and mining needs to be reined in. The fact that it's not obvious that people seeking to trick you by any means necessary are recording you everywhere you go does not make it OK, at all. Surveillance capitalism is way, way over the line, has been for some time, and just keeps going farther. That they're good at keeping you from realizing you're under surveillance is no defense whatsoever.
Complaining about warfare that is asymmetrical solely due to the incompetence of one side does not elicit any sympathy from me.
Consumers do try to aggregate data for the equivalent of "massive data collection and mining". Most just don't care to pay for something that is not wholly controlled by a storefront. Generally, producers are more likely to understand the ROI.
I find it funny that the store doesn't trust their salespersons to make such a judgement on their own. Probably they hope to do analytics on what kind of people are visiting and when. Selling the data would only make sense if they are able to link it to an identity, I am not sure that they can legally do that.
Well, you never know what dystopia you are heading into...
Most humans today are prejudiced against nonorganic life due to not growing up interacting with anyone but other meatbags.
There's a huge double standard in place that makes it somehow wrong for computers to do what humans have been doing without objection for decades or millennia.
It's because people view the AI as an infallible machine that records everything, which is much more intimidating than the gut instincts of a salesperson.
Right, that's the manifestation of their prejudice. In reality, there is a spectrum, not a dichotomy, and some humans can have better memories than some computers.
To scale up the principle a little... Uber acts like a real estate agent, they charge the seller a percentage of the total for the service of connecting a buyer and a seller.
Imagine you were selling a house, and your agent came to you with an offer of $1M, of which they would take a 10% commission. You agree to this, but find out later that the buyer actually offered $1.1M. The fact that each party agreed to the transaction with the real estate agent isn't relevant here. What is relevant is that if you charge for services based on a percentage of the price, you can't then set different prices at the two ends; that strongly violates the expectations of the contract.
Looking at https://www.uber.com/info/how-much-do-drivers-with-uber-make..., it says "Drivers using the partner app are charged an Uber Fee as a percentage of each trip fare." This is analogous to the real estate agent example, and this is why this is fraud on Uber's part. If they told drivers that they were simply buying their services for an arbitrary price, then it would be fine, but they don't say that.
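To make the real-estate analogy concrete, here is a minimal sketch of the arithmetic, using the hypothetical numbers from the comment above ($1M quoted, $1.1M actually offered, 10% commission):

```python
# Hypothetical numbers from the real-estate analogy above.
buyer_offer = 1_100_000    # what the buyer actually offered
quoted_price = 1_000_000   # what the agent told the seller the offer was
commission_rate = 0.10     # the percentage fee the seller agreed to

# What the seller believes the agent earns on the deal:
agreed_commission = commission_rate * quoted_price

# What the seller actually receives:
seller_net = quoted_price - agreed_commission

# What the agent actually keeps once the hidden spread is included:
effective_take = buyer_offer - seller_net

print(agreed_commission)  # 100000.0
print(effective_take)     # 200000.0
```

The agent's effective take is double the agreed commission, which is why quoting different prices at the two ends breaks the percentage-fee contract.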
Can you explain? The analogy seems like a terrible one to me.
We are not talking about charging different amounts of money depending on the brand of device you are consuming the data on. NN is about not differentiating the cost or quality of bits based on their source. In the US, can you not opt to pay a slightly higher rate for renewable energy? (This happens in Australia).
I'm all for NN, but the analogy Sam used doesn't hold in my opinion.
As an additional thought from someone outside the US: NN doesn't exist in places like Australia, and its absence has actually led to better services overall, especially in the early days of the internet, because overseas data is significantly more costly to provide than local data. The difference is that we have more robust competition and can more easily switch providers, whereas it seems in the US (purely based on things I've read on the internet) that the near-duopoly cuts consumer choice, so if NN were not in place, people would have little ability to switch providers and would be stuck with it.
Is the lack of competition the real issue here? If people in the US had a choice of many providers and it was easy to switch, then people would likely switch to services that are Net Neutral.
Yes. Governments -- from federal to small towns -- created rules giving de facto local monopolies to certain ISPs and making market entry of competition very difficult. And yet, people want to solve this created problem with more rules about how ISPs are allowed to compete.
I've not seen many 3D movies, but those I did see were very flat, like cardboard cutouts. I saw some of The Hobbit in 3D and it looked weird. There is no volume to the characters...
These types of articles seem to always get the same mixture of responses. The biggest problem that I see is that everyone starts with completely different sets of assumptions and they are almost never up front about them.
The lack of cited sources in articles like these leads people to bolster or criticize particular studies that they have read or heard about, usually without referencing those. Many of these studies are either flawed or contain assumptions that some people don't agree with, so this ends up going nowhere also.
Are there any really good studies on this topic that we could discuss as a common point of reference? Ones that take into account all the facts, and don't start with assumptions like the following:
1. There should be equal numbers of men and women in tech (or there is some other ratio that is preferred or correct).
2. Women and men in tech should - on average - be paid the same.
Some people have these assumptions as part of their personal belief systems, but they entail a whole bunch of other assumptions that are not prima facie true.
One other huge weakness in these kinds of studies is that they measure the things that are easy to measure, such as education and experience. If companies are hiring and compensating employees rationally, they would use these only as heuristics, with some measure of how much an individual employee would contribute to the company as the determining factor.
Measuring job skill, as well as all the other skills that go into being a good employee is really hard, but until a study tries to actually do this, they are coming up with conclusions that aren't at all useful in the real world.
This is spot on. What makes the above probabilities look incorrect is that people are assuming that the algorithm understands the relationship between tiger and animal the same way that humans do. Clearly they are evaluating each independently.
This seems like a PR-based defensive move to me, rather than one rooted in principle.
This practice has been known for some time (the recent news is not new at all) and has been used to prevent drivers from being caught up in regulatory battles via fake fares. This move makes drivers' experience worse.
It seems inconsistent for Uber to maintain their position when it comes to undermining/circumventing taxi monopoly laws and also make this move.
Is there a broader context or principle that can explain this in a way consistent with Uber's values?
A cache seems like a particularly odd example to choose for this. If you are changing the way a cache functions, then presumably you either have fairly short expiration times (problem will fix itself) or you would have some form of cache invalidation as part of the deployment process.
Additionally, it would have been nice to see some mention of patterns that solve this issue more completely, like CQRS, where state is disposable.
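One common way to get the deploy-time invalidation the comment mentions is to prefix cache keys with a deploy identifier, so new code simply never sees stale entries. A minimal sketch, using a plain dict to stand in for a real cache (the `APP_VERSION` value is an assumed deploy identifier, e.g. a git SHA injected at build time):

```python
# Sketch: version-prefixed cache keys as a crude form of deploy-time
# cache invalidation. A real system would use Redis/memcached; a dict
# stands in here.
APP_VERSION = "v42"  # assumed: bumped on every deploy

cache = {}

def cache_key(key: str) -> str:
    # Prefixing with the deploy version means entries written by the
    # previous deploy are simply never looked up again.
    return f"{APP_VERSION}:{key}"

def get(key: str):
    return cache.get(cache_key(key))

def put(key: str, value) -> None:
    cache[cache_key(key)] = value

put("user:1", {"name": "Ada"})
print(get("user:1"))   # {'name': 'Ada'}
print(get("user:2"))   # None (never cached under this version)
```

Old entries are not deleted, only orphaned, so this still relies on TTLs or eviction to reclaim space; it just guarantees the new deploy never reads stale data.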
Does anyone know why GCP isn't one of the supported cloud providers for EE? This is surprising to me since they had docker-related offerings a long time before AWS and Azure.