That doesn't make much sense. Why would the easily-burnt trust still be intact at this point? It's a really weird comment if it wasn't supposed to mean "already burnt, or unable to burn".
I read it to mean 'already burnt or can't be burnt.' I'm aware of the definitions of 'inflammable,' but it just made the most sense in the context to interpret it that way.
I can't edit it anymore, but I did mean "unburnable". I realize (now) that "inflammable" comes from "inflame"/"enflame", but "in-"/"im-" has lots of use as a negation. "Invisible", "indivisible", "inconceivable", etc.
Netflix - Essentially a paywall for great CONTENT. You subscribe to their service with regular payments.
Youtube - Ads for free content.
Both seem to be working just fine as far as revenue is concerned. Can you elaborate on why you find only sites like SE or GH "content"? What's your definition? Are you suggesting entertainment doesn't have a value of its own?
> Are you suggesting entertainment doesn't have a value of its own?
Yes, exactly.
Netflix is a paywall for entertainment. YouTube does have some educational content, some of which generates revenue from ads. But those who actually create educational content do it for self-expression or as a way of giving back to the community.
Just the other day there was a top-ranking comment about Apple's music streaming service. The TL;DR was that there are millions of great musicians who don't care about profit; they care about music.
I think art is about self-expression, not a means of making money. Somehow this is much more prominent in the tech community, with open source and a wealth of free information. Information that is actually valuable.
Of course we do have day jobs, but there we get paid for implementing something that people actually need.
I realize this is probably an unusual worldview, but perhaps we are entertained too much and educated too little?
I don't think the OP has a problem figuring out how to try a new DE. The question was whether it's worth it. The nuanced answers of users with considerable experience are more valuable (IMHO) than just firing up Xfce for a session or two.
To help you out,
"Couples therapists estimate that only 5% of relationships ever break out of the power struggle and into true intimacy and self _actulization_."
s/actu/actua/ I guess? In the page titled "Warning!".
A "confusion matrix" is the statistical analysis you're looking for. Entry (i,j) is a count of how often the ith character is mistakenly typed (inputted) as the jth character. This can easily be adapted to various input devices.
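A minimal sketch of the idea, with made-up keystroke data (the (intended, typed) pairs are hypothetical; you'd collect them from your input logs):

```python
# Build a character confusion matrix: m[i][j] counts how often
# intended character i was actually typed as character j.
from collections import defaultdict

def confusion_matrix(pairs):
    """pairs: iterable of (intended_char, typed_char) tuples."""
    m = defaultdict(lambda: defaultdict(int))
    for intended, typed in pairs:
        m[intended][typed] += 1
    return m

# Hypothetical sample: 'e' typed correctly twice, mistyped as 'r' once.
keystrokes = [("e", "e"), ("e", "r"), ("e", "e"), ("t", "t")]
m = confusion_matrix(keystrokes)
print(m["e"]["r"])  # off-diagonal entries are the typing errors
```

The diagonal entries m[i][i] are correct keystrokes; everything off the diagonal tells you which key pairs your device confuses most.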
Are the particles found first and retrofitted into the Standard Model?
Or does the model act as a map of sorts by telling us where to look? If the latter, does that mean we're not looking for particles outside of the Standard Model?
The Standard Model is a list of fundamental particles (electron, muon, tau, their corresponding neutrinos, and 6 flavors of quarks: up, down, strange, charm, top, and bottom) as well as their interactions (electroweak force & strong nuclear force). The inputs to the SM are around 20 numbers controlling the masses of these particles and the strengths of the interactions, as well as a few Higgs parameters. After that, everything is fixed, so that the SM makes definite predictions about where to look to find new baryons and mesons (things made out of quarks). It's not quite as simple as "add up the masses of the constituent quarks," because the strong interactions translate into mass (essentially via E=mc^2)---and exactly how big an effect the strong force has requires calculation.
So the SM is a map telling you where to look, but a very sneaky kind of map. In the sector where the strong force matters, it's as if someone encrypted a map, and for every new destination you want to find out about, you have to expend computational resources to decrypt it. In principle, you have all the information, but in practice it is hard to extract predictions from the theory. That is a very peculiar situation for scientists to be in: to have a definite, precise theory, and the opportunity to do experiments, but to struggle to compare the two!
Anyway, these particles are predicted by the SM, where "predicted by" means after expending a lot of computation to understand the strong dynamics one finds out that these particles (which are quarks held together by gluons) should be there.
The approach seems to be: we observe a lot of behaviour, then build up models to explain that behaviour. If particles need to be made up to balance an equation, then they get made up.
You project that model into some scenario to say "If this model is right, the following ... will happen" and observe. If you're right, work goes on to further evaluate "What about this scenario?", and finally a big machine gets built to directly work out, from all that's prior, whether a particle that needs X,Y,Z properties really does exist: if it does, it should be seen in the measurements that are supposed to create it.
So to answer your question: both?
E.g. you could take the model, put it in a computer, simulate what the LHC does, and look at the graph of what kinds of particles it makes. Then you build the LHC and actually smash some things together. Same graph?
Going to Tirupathi is a pilgrimage; calling it superstition misrepresents religion. Faith by itself isn't irrational. You're putting 'Going to Tirupathi' and 'Not cutting your hair on Tuesdays' in the same bucket. This will only hinder progress in dispelling actual superstitions.
Categorizing all religion as superstition seems to be some sort of atheist superiority complex. (I'm an atheist too, btw.)
That argument doesn't really help the discussion.
Except for very sensitive data, usability is generally a bigger factor. And anyway, it's not like you have a lot of options to choose from.