Notwithstanding any fallacies that may exist (can you point out a specific one?), the best available scientific evidence suggests that natural immunity is much stronger than vaccine-induced immunity against COVID:
"Israelis who were vaccinated were 6.72 times more likely to get infected after the shot than after natural infection". https://archive.is/RlwBc
Here's a very quick summary of what I linked above: In Israel it would be easy to look at the data and conclude that vaccines are providing ~67% efficacy against severe disease/death.
But once the data is broken down into buckets that address confounding variables (i.e. different vaccination rates among different age groups), things look very different: all of a sudden, efficacy numbers look better than 90% for a lot of people.
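To make the aggregation effect concrete, here's a tiny sketch with completely made-up numbers -- they're chosen only to show the direction of the effect, not to match the Israeli data:

    # Made-up numbers, purely to illustrate the aggregation effect described above.
    groups = {
        #                vaccinated          unvaccinated
        #             (people, severe)    (people, severe)
        "60+":       ((900_000, 720),     (100_000, 1_000)),
        "under 60":  ((400_000, 3),       (600_000, 60)),
    }

    def efficacy(vax, unvax):
        (nv, cv), (nu, cu) = vax, unvax
        return 1 - (cv / nv) / (cu / nu)

    for name, (vax, unvax) in groups.items():
        print(f"{name}: efficacy ~{efficacy(vax, unvax):.0%}")

    # Pool the two groups and the apparent efficacy drops sharply, because the
    # vaccinated population is dominated by the much higher-risk older group.
    vax_all = tuple(map(sum, zip(*[v for v, _ in groups.values()])))
    unvax_all = tuple(map(sum, zip(*[u for _, u in groups.values()])))
    print(f"pooled: efficacy ~{efficacy(vax_all, unvax_all):.0%}")

Within each age group the vaccine comes out around 92% effective here, but pooling the groups drags the apparent number down into the low 60s, simply because the vaccinated population skews heavily towards the higher-risk older group.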
This will similarly matter a great deal as people try to figure out how long vaccines provide protection. The groups that got vaccinated earliest in many places were older people and health care workers -- groups that start out at higher risk and, in the case of older people, are also more likely to mount a less effective immune response to vaccines.
As a result, it will be easy for analysts who don't account for this to underestimate how long vaccines remain effective.
The archive.is link you provided isn't working for me at the moment, but to address your statement in the context of the above framework:
The group of people most likely to have been infected with the virus is not the same as the group most likely to have antibodies as a result of immunization. In many places, far more younger people than older people have been infected with the disease. There are other socioeconomic and behavioural differences too.
Given that young people tend to have more effective immune responses to begin with, and given that they have been shown to have better outcomes after being infected with this virus, it's easy to see how one could incorrectly conclude that infection-acquired antibodies produce stronger immunity, even if the opposite may be true.
In short: Apparent differences may be better explained by the fact that it's a different group of people who have been infected vs those who have not been infected.
We have no idea how complete "known to have recovered" is, whereas the vaccinated population is well known.
It also doesn't say how "known to have recovered as well as vaccinated" is being calculated.
I'm also not sure what the relevance is, given how the risks of getting vaccinated compare to the risks of acquiring "natural" immunity by getting infected.
Edit: That's actually another statistical issue. You need to consider the "known to have not recovered." The population of "known to have recovered" is itself a biased selection of people with stronger immunity than the general population.
For a little bit of ancient history: I was one of the admins who worked to create OFTC. We all knew each other from Open Projects Network (which rebranded as Freenode).
I was barely in high school when I came up with the name OFTC and registered OFTC.net. Very early on, I agreed with the people I was creating OFTC with that I would act as caretaker rather than owner of OFTC.net while we figured out our governance.
Ultimately we came up with a governance model, and we also managed to convince Software in the Public Interest to take custody of the domain name and have it managed in accordance with the governance model we designed.
We started with a pretty great group of both capable and well-intentioned people, and one of the things we figured out was that if OFTC was going to be a sustainable project, it needed more sustainable governance than the project we were leaving.
One of the key people behind the very early push for OFTC to have a stable governance model later became a Member of Parliament here in Canada.
With homage to Moxie's Cryptographic Doom Principle, I propose the Cache Doom Principle: if a system's behaviour can be influenced by a cache, eventually someone will figure out a way to use that cache to leak data.
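A toy illustration of the principle -- the cache, the key names, and slow_fetch here are all invented for the sketch:

    import time

    # A shared cache turns "how fast was that lookup?" into a one-bit oracle
    # about what someone else has already looked up.
    cache = {}

    def slow_fetch(key):
        time.sleep(0.05)              # stand-in for a network or disk round trip
        return f"value-for-{key}"

    def lookup(key):
        if key not in cache:
            cache[key] = slow_fetch(key)
        return cache[key]

    def probe(key, threshold=0.01):
        """Guess whether someone else already touched `key`, purely from timing."""
        start = time.perf_counter()
        lookup(key)
        return (time.perf_counter() - start) < threshold   # fast => it was cached

    lookup("secret-host.example")         # victim activity
    print(probe("secret-host.example"))   # attacker infers True: it was cached
    print(probe("never-seen.example"))    # attacker infers False: it was cold

The same one-bit oracle exists whether the cache is a CPU cache, a DNS recursor, or a CDN edge.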
(3) Chrome is migrating to using its own store: "Historically, Chrome has integrated with the Root Store provided by the platform on which it is running. Chrome is in the process of transitioning certificate verification to use a common implementation on all platforms where it's under application control, namely Android, Chrome OS, Linux, Windows, and macOS. Apple policies prevent the Chrome Root Store and verifier from being used on Chrome for iOS."
sigh. DigitalOcean continues their march towards irrelevance.
First it was the App Platform, and now this. Gouging us with $0.10/GB bandwidth charges (1) makes us think less of you, and (2) adds a bunch of cognitive complexity & work to developers' lives.
If this is how it's going to be, we may as well just use AWS, or move on to one of your competitors that isn't trying to pretend that bandwidth is expensive. It isn't, and there isn't any reason we should have to design applications around absurdly, artificially inflated costs.
Even Oracle pretends to understand this. _ORACLE_ are the ones trying to make the case that they aren't only about having hostages/locked-in customers.
When Oracle is beating you on this metric, you've really jumped the shark.
I don’t think this is reasonable criticism. It’s more than fair for them to charge for egress traffic, and they’re doing it at a lower price point than AWS.
Of course it would be nicer if things were free, but to claim this is evidence of a march into irrelevance? For charging for egress traffic on a Docker registry? That’s a bit much, don’t you think? Especially considering how easy it is to set up a GitHub Action or some other CI tool that constantly hammers their registries without adding a lot of value. Docker (the company) clearly feels this pain a lot, and they just want to prevent this type of thing from happening.
If I were to guess, the intended use case is to help you with deployments inside the DO cloud, and to actually reduce your ingress traffic when pulling from other, remote Docker registries. It’s a win/win for these use cases, and to be honest, it’s not expensive.
Besides, DO’s pricing is still very favorable compared to other cloud vendors.
There are many CDNs that make money charging < $0.01/GB.
Indeed, DigitalOcean themselves built their place in the market by charging $0.01/GB for bandwidth. How do we reasonably get to $0.10, as is the case here?
If it were really that expensive for them, they could outsource it to a CDN for well under $0.01/GB at their scale, which would still leave them room for margin. But all of this pricing is in fact completely detached from the underlying physical realities -- they are charging these prices because they think they can get away with it, not because they need to in order to cover costs and make some margin.
Bandwidth prices shouldn't be going up; if anything, they should be going down. 100 gigabit interconnects are a thing now.
I believe that bandwidth charge applies after you've exceeded the monthly allowance, and only when someone pulls your image over the internet, not when you're distributing it to your own instances.
"In the future, each plan will have a bandwidth allowance and additional outbound data transfer (from the registry to the internet) will be $0.10/GiB."
The bandwidth topic is not related to the container registry, but it's worth mentioning since it is being discussed in this thread. Regarding outbound data transfer pricing: with a $5/month droplet you get a minimum of 1TB/month of free Internet-bound bandwidth. The $0.01/GB price kicks in only after that.
There's a really big gap between the $0.01/GB you are talking about being charged on droplets and the $0.10/GB that DigitalOcean is using on newer offerings like this and "App Platform".
The fact that somebody could put a caching proxy in front of the container registry -- on a droplet also hosted at DigitalOcean -- and cut their bandwidth costs 10x by doing so further illustrates the absurdity of DigitalOcean's new approach to bandwidth pricing.
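Back-of-the-envelope with the numbers in this thread (ignoring the free allowances, so take it as a rough illustration rather than an exact bill):

    # 1 TiB of image pulls per month, priced two ways.
    gib = 1024
    direct_from_registry = gib * 0.10   # registry -> internet at $0.10/GiB
    via_droplet_proxy    = gib * 0.01   # same bytes proxied through a droplet at ~$0.01/GB
    print(f"direct from registry: ${direct_from_registry:.2f}")   # ~$102.40
    print(f"via droplet proxy:    ${via_droplet_proxy:.2f}")      # ~$10.24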
I don’t think I agree. This is still a great choice if you need a container registry for your images that you run on Digital Ocean compute - as far as I can tell there is no bandwidth charge there. This means that DO has a total container hosting platform.
That depends on the TTL of your DNS records. But if it’s a brand-new registration for a dot-com, I’ve found DNS queries start working within 3 minutes of completing GoDaddy’s registration (and using GoDaddy’s DNS zone hosting), even through my ISP’s DNS servers (provided there are no cached NXDOMAIN results).
The .com zone file is updated every few minutes. Caching behaviours will vary significantly. Frequently a significant fraction of traffic can be using new nameservers within minutes, with a long tail of traffic with older information.
Each TLD does its own thing. For example, last time I checked, .ca only seemed to be serving a new zone file every few hours. How long new nameservers take to show up will depend on where you land in that refresh cycle.
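If you want to see what the registry is actually serving without waiting on anyone's cache, you can ask a gTLD server directly. Rough sketch with dnspython -- example.com and a.gtld-servers.net are just placeholders:

    import dns.message, dns.query, dns.resolver  # pip install dnspython

    DOMAIN = "example.com"   # replace with the name you just registered

    # Find an address for one of the .com servers, then ask it directly,
    # bypassing every recursive cache between you and the registry.
    gtld = dns.resolver.resolve("a.gtld-servers.net", "A")[0].address
    response = dns.query.udp(dns.message.make_query(DOMAIN, "NS"), gtld, timeout=5)

    # A freshly published delegation shows up here as soon as the registry
    # pushes a new zone, regardless of what your ISP's resolver has cached.
    for rrset in response.answer + response.authority:
        print(rrset)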
Third party DNS servers are helpful in one sense - you can share your state with other users.
Turning off EDNS with your own recursor won't really make much difference. Limiting the maximum cache duration will help, but it will also eliminate much of the benefit of having a local recursor.
The other issue with running your own recursor is that nasty networks will transparently proxy DNS, and you can end up using a cache you don't even know exists.
DNSCurve, DNSCrypt, and DNS-over-HTTPS solve one set of problems while introducing different ones.
Sharing a cache with other users introduces its own set of problems, e.g., cache poisoning. The problems that arise from shared DNS caches gave rise to "solutions" that in turn introduced further problems.
For transparent proxying (e.g., hotel internet), I use a local forwarder and a remote recursor listening on a non-standard port, and it has worked flawlessly.
I prefer to serve static address info via authoritative DNS or /etc/hosts. I have other methods of getting DNS data besides querying caches. I have no need for DNS caches. Most websites I visit do not change addresses frequently. I also like to know when they change, if they ever do.
I have not experienced any problems with DNSCurve.
- Never allow any part of the computing systems you use to cache anything.
- Insist that everything in your life exist in a state of being functionally pure & stateless.
- Eliminate access to all sources of timing data.
- Make sure that all tasks are completed in a pre-determined fixed amount of time regardless of resource contention.
There are so many different side channel attacks, and the computing primitives & API choices we have been making for years make it challenging to build secure systems.
Caches are very deeply embedded in the culture of how computing is done. Making tasks take longer than strictly necessary to avoid leaking information goes against our instincts to optimize system performance.
It's going to take a lot of work and cost a lot of money to get software to a point where we aren't playing whack-a-mole with side channels.
More pragmatically, the current implementation of this technique can be dealt with by being very conscious of how much data your DNS resolver(s) leak, and of how large the anonymity set of your resolvers' userbase is.
If you limit DNS cache times and use blinding computation techniques to limit the identity information your DNS resolver has or retains about you, then DNS cookies can be largely mitigated. If you have faith that 1.1.1.1 is operated in the manner that Cloudflare claims, the measures they have taken go a long way to making DNS cookies unusable.
I also pointed out some additional specific mitigations when I reported this issue to the Chromium team in October 2015:
What if we designed the resolver to fetch many responses with caching disabled and then cache all of them? In essence, force it to give you as many cookies as your desired anonymity set size, and then sample this local store of cookies when calculating the response for the end client.
This would make it harder to build a fingerprint, especially if responses were sampled from a number of independent sources.
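Something like this, roughly -- a real implementation would live inside the recursor itself, and the pool size here is an arbitrary placeholder:

    import random

    import dns.message, dns.query, dns.resolver  # pip install dnspython

    POOL_SIZE = 16   # stand-in for the desired anonymity set size

    def build_pool(name, rdtype="A"):
        """Fetch many fresh responses straight from an authoritative server
        (no cache in the path), collecting as many distinct 'cookies' as possible."""
        ns_name = dns.resolver.resolve(name, "NS")[0].target
        ns_addr = dns.resolver.resolve(ns_name, "A")[0].address
        return [dns.query.udp(dns.message.make_query(name, rdtype), ns_addr,
                              timeout=3).answer
                for _ in range(POOL_SIZE)]

    def answer_client(pool):
        """Hand the end client a randomly sampled response from the local pool."""
        return random.choice(pool)

    pool = build_pool("example.com")
    print(answer_client(pool))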
The next logical step in the arms race would likely involve fingerprinting systems using more bits than strictly necessary, and using error correcting codes - i.e. treat the sampling as "noise" to be overcome.
It seems both more straightforward and more effective to build recursion paths that you can trust aren't doing any intentional or unintentional caching.
This of course means the performance benefits of caching go away. This has been a theme in computing lately (e.g. CPU speculative execution leaks such as Meltdown).
A recursor could be built which only uses each query response once, with prefetching used to reduce the performance impact.
However, the mere fact prefetched responses exist would also leak data.
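A very rough sketch of what that could look like -- this simplifies it to a forwarder in front of an arbitrary upstream, and the upstream address and prefetch depth are made up:

    from collections import deque
    import threading

    import dns.message, dns.query  # pip install dnspython

    UPSTREAM = "9.9.9.9"      # placeholder; a real version would recurse itself
    PREFETCH_DEPTH = 4        # placeholder: how many spare responses to keep on hand

    class UseOnceResolver:
        """Hand every response out exactly once; refill in the background."""

        def __init__(self):
            self.pools = {}   # (name, rdtype) -> deque of unused responses
            self.lock = threading.Lock()

        def _fetch(self, name, rdtype):
            return dns.query.udp(dns.message.make_query(name, rdtype),
                                 UPSTREAM, timeout=3)

        def _prefetch(self, name, rdtype):
            resp = self._fetch(name, rdtype)
            with self.lock:
                self.pools.setdefault((name, rdtype), deque()).append(resp)

        def resolve(self, name, rdtype="A"):
            key = (name, rdtype)
            with self.lock:
                pool = self.pools.setdefault(key, deque())
                resp = pool.popleft() if pool else None
                missing = PREFETCH_DEPTH - len(pool)
            if resp is None:
                resp = self._fetch(name, rdtype)   # cold path: fetch inline
            # Refill in the background so the next query is fast. Per the caveat
            # above, the mere existence of these prefetched responses is a signal.
            for _ in range(missing):
                threading.Thread(target=self._prefetch, args=(name, rdtype),
                                 daemon=True).start()
            return resp

    resolver = UseOnceResolver()
    print(resolver.resolve("example.com").answer)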
> It seems both more straightforward and more effective to build recursion paths that you can trust aren't doing any intentional or unintentional caching.
I agree, but as you say, that will take quite some work and time to happen and will be costly. I was thinking of this as a possible temporary mitigation which would retain some benefits of caching. If it was made adaptive[1], it would also have the nice side-effect of being more resource intensive for those servers that attempt to use tracking.
[1] i.e. only fetch many responses if they appear to vary while doing a smaller number of "probing" requests. Continue fetching more responses for your local sample until they stop varying with some degree of confidence.
It would be difficult to differentiate between responses that vary due to load balancing and responses that vary due to active fingerprinting.
Even when a site only has a single physical location, load balancing might be done in part by having DNS randomly return one of many valid IP addresses. E.g. this is a behaviour supported by Amazon's Route53.
Larger sites frequently use a combination of anycast and DNS based routing to get packets to the closest POP. This introduces both (1) difficulty identifying when fingerprinting is occurring, and (2) still more opportunities for fingerprinting.
Most users will find it impossible to control which POP their packets get routed towards. For someone doing fingerprinting, it could be a very useful signal.
https://en.wikipedia.org/wiki/Base_rate_fallacy
https://en.wikipedia.org/wiki/Simpson%27s_paradox
Someone who has done some analysis using the above mental models: https://www.covid-datascience.com/post/israeli-data-how-can-...