While I’m unsure I’ll be using Swift for the rest of my life, I’ll continue to work on my little tool to detect unused code for as long as I can. It’s my most popular open-source contribution, and it brings me joy knowing others find it useful.
If you don't want to see ads, then fine, it's your computer.
But Brave is hijacking ads by force, then strong-arming websites into signing up to their own shitty cryptocurrency if they want to be paid. It's basically an extortion scheme, or a mafia-style "protection scheme".
I wonder who's more unethical: the guy who comes in, smashes your stuff and leaves, or the guy who comes in, smashes your stuff and leaves a check for you at his place.
I bet that's a common battle ground between consequentialists and idealists.
Fun aside, I don't think the comparison holds up very well. I don't think that anyone has a right to execute code on my devices just because I'm browsing a website. Under that view, "hijacking" ads is completely devoid of any ethical meaning.
Brave's platform attempt is still not very good. A good solution for paying content creators would need to be open, decentralized and accepted by the stakeholders involved.
Requesting the code doesn't mean I'm obliged to run all of it. If you serve data to my computer, I am free to do whatever the hell I want with it; if you don't like that, don't serve me the data.
I've read a few threads on this and while I don't want to be absolutist and say you are wrong, I believe it's a lot more nuanced than what you described. My understanding is that the Brave-injected ads are strictly opt-in for the user at this point, not the default, which makes it a lot less of a racket imho. But I dislike Brave simply because I don't need it, which seems like a good enough reason to grumble about it.
On some larger roundabouts in the UK (and no doubt other European countries), they have traffic lights on the roundabout, between exits (I'm not talking about the entry lights). If one exit is totally blocked, these lights can allow traffic to continue out of other exits.
I hear the argument "Americans don't understand roundabouts, therefore we won't build any" quite often in these kinds of articles. If you survey people about something they've never come across before, of course you're going to get negative results.
If Americans can drive and talk on their cell phones, they can handle a roundabout.
Somebody at NASA gave me the argument that the US didn't switch to metric because the cost would be too high, since the country is so big. Ireland and the UK have gone metric in our lifetimes.
I mean, there are lazy people throwing out platitudes to justify doing nothing wherever you go.
My initial reaction was acquihire. Though given their recent improvements to Pages [1], this acquisition could mean a push into the Easel market space.
EDIT: Just to point out why this is a big deal - What percentage of new sites are mostly static, presentational? Probably a slim minority. Whilst Github can't code your site for you (yet?), giving you the tools to develop your app layer and frontend - with some as-of-yet unseen integration tools? - covers some very large slices of the pie. Don't forget Github Pages is currently free too; perhaps there'll be a paid tier for dynamic sites.
Another thought is that there may be some ratio between quality and rewatchability that signifies the first-time (but not repeated) watchability of a film.
For example, 5 stars for quality and 0 for rewatchability doesn't tell me I should watch the film if I haven't already. But maybe 2 stars for rewatchability/subjective enjoyment is enough justification to watch it to appreciate the quality?
It'd be interesting to see the overall variance on quality vs. rewatchability. That could give some clue on the objectiveness of users' answers to quality. Perhaps quality is just one of those things that is more universally objective than it is subjective?
It's not actually linear correlation, since we effectively normalise the pairwise scores to [-1,0,1],[-1,0,1] (nine possible combos). We're exploring blending in a few other signals along the way, but we wanted to see how far we could get by discretising the pairwise comparisons in this way.
Once we've collapsed all pairs down to a Vector Victor, we treat matching Vector Victors as a thumbs up and non-matching as a thumbs down, take the square root of both then take the lower bound of the Wilson interval as our ranking function.
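In rough Python terms it looks something like this (a simplified sketch with made-up names, not our production code):

    import math

    def sign(x):
        # Collapse a raw score difference to -1, 0 or +1.
        return (x > 0) - (x < 0)

    def vector_victor(quality_diff, rewatch_diff):
        # Normalise a pairwise comparison to one of the nine combos in
        # {-1, 0, +1} x {-1, 0, +1}.
        return (sign(quality_diff), sign(rewatch_diff))

    def wilson_lower_bound(ups, downs, z=1.96):
        # Lower bound of the Wilson score interval at ~95% confidence.
        n = ups + downs
        if n == 0:
            return 0.0
        p = ups / n
        return (p + z * z / (2 * n)
                - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

    def agreement_score(pairs_a, pairs_b):
        # pairs_a / pairs_b: {pair_id: (quality_diff, rewatch_diff)} for two users.
        shared = set(pairs_a) & set(pairs_b)
        matches = sum(vector_victor(*pairs_a[p]) == vector_victor(*pairs_b[p]) for p in shared)
        mismatches = len(shared) - matches
        # Dampen the raw counts with a square root before the Wilson bound.
        return wilson_lower_bound(math.sqrt(matches), math.sqrt(mismatches))

The Wilson lower bound means a user we've only overlapped with a couple of times can't shoot to the top of the ranking.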
I assume each vector has its own weight? So "Better in both respects" is a stronger sign of similarity than just "Higher quality but same rewatchability."
So say..
"Same in both dimensions" = 0
"Same quality but more rewatchable." = +1
"Same quality but less rewatchable." = -1
"Higher quality but less rewatchable." = +2
"Higher quality but same rewatchability." = +3
"Better in both respects." = +4
etc..
Then you could feed those values into a correlation coefficient like Pearson's r:
x = [0, 1, 2, -1, -3, 4, -4]
y = [0, 1, 1, 2, -1, -2, 0]
It'd be an interesting experiment to see what results that gives vs. your current algorithm.
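For instance, a quick scratch sketch in Python (my own toy code, nothing to do with your actual implementation):

    import math

    def pearson_r(x, y):
        # Pearson correlation coefficient between two equal-length score lists.
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    x = [0, 1, 2, -1, -3, 4, -4]   # user A's weighted scores per shared pair
    y = [0, 1, 1, 2, -1, -2, 0]    # user B's weighted scores for the same pairs
    print(pearson_r(x, y))         # one number summarising how aligned the two users are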
That's something we haven't tested, but my gut tells me contrasting Vector Victors (like better in both dimensions) is 'worth' more than similar Vector Victors.
The really significant change would be that agreeing in one dimension (yes A is better quality than B, but we disagree on which is more rewatchable) still contributes to your correlation with someone. We're not doing that at the moment, because it felt like pairwise partial agreement would weaken the signal - I wanted _real_ agreement (in both dimensions) to stand out.
While there might be a way to capture that with a linear function, I've favoured solutions that reflect that our ratings are two-dimensional.
Also, if you avoid the normalisation step you could easily factor in the degree to which user A liked the quality vs. user B, instead of just a 'more' or 'less' question.
If you factor your vector weights by the scale of your quality rating (0 - 10?), then if user A rated the quality of film X vs. film Y +6 points higher compared to user B's +1, this would give you a more accurate correlation.
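Roughly what I mean, as a sketch (the 0 - 10 scale and the exact scaling are assumptions on my part):

    def scaled_weight(base_weight, quality_diff, scale=10):
        # base_weight: the discrete weighting from the table above (+4, +3, -1, ...).
        # quality_diff: the raw quality gap the user saw between the two films,
        # on an assumed 0 - 10 rating scale.
        # Scaling by the size of the gap means a user who preferred film X's
        # quality by 6 points pulls the correlation harder than one who
        # preferred it by only 1.
        return base_weight * (abs(quality_diff) / scale)

    print(scaled_weight(4, 6))   # user A: better in both, big quality gap  -> 2.4
    print(scaled_weight(3, 1))   # user B: higher quality only, small gap   -> 0.3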
Anyway, food for thought. A very fun problem to be working on!
I think the weightings I describe above would give you that.
Say we start at 0, and user A likes the next movie more in both dimensions (+4) while user B only likes it more in one dimension and the same in the other (+2); you're still going to get a positive correlation, just a slightly lower one than if both users had liked it more in both dimensions.
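A quick toy check of that intuition (made-up numbers, with an extra pair thrown in so the correlation has something to work with):

    import numpy as np

    # Three shared pairs, scored with the weights above.
    a         = [0, 4, -3]   # user A
    b_partial = [0, 2, -3]   # agrees on direction, but only partly on the second pair
    b_full    = [0, 4, -3]   # agrees with A in both dimensions everywhere

    print(np.corrcoef(a, b_partial)[0, 1])  # ~0.98: positive, slightly lower
    print(np.corrcoef(a, b_full)[0, 1])     # 1.0: full agreement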
https://github.com/peripheryapp/periphery