This is actually better for most consumers. The SLC cache was increased nearly threefold and the controller is a superior one (it now uses the same one as the 980 PRO). TechPowerUp[0] has a much better post on this, and you can clearly see there that the new one is better in most cases than the old one.
The only ones disadvantaged by this change are people who constantly write more than ~42GB in one go, which I would think is mostly video editors. (The old version drops to 1500MB/s after overflowing its SLC cache (42GB); the new one drops to 800MB/s after overflowing its much larger cache (115GB).)
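To put rough numbers on that, here's a quick back-of-the-envelope sketch of how long a big sequential burst takes on each revision. The in-cache speed of ~3300MB/s is just my assumption for illustration; the cache sizes and post-cache speeds are the ones quoted above:

    # rough single-burst sequential write time, old vs new 970 Evo Plus (figures from above)
    def write_time_s(total_gb, cache_gb, in_cache_mbps, post_cache_mbps):
        # full speed while the SLC cache absorbs the data, reduced speed after it overflows
        # (background folding during the burst is ignored to keep it simple)
        fast = min(total_gb, cache_gb)
        slow = max(total_gb - cache_gb, 0)
        return fast * 1000 / in_cache_mbps + slow * 1000 / post_cache_mbps

    for burst_gb in (30, 100, 500):
        old = write_time_s(burst_gb, 42, 3300, 1500)
        new = write_time_s(burst_gb, 115, 3300, 800)
        print(f"{burst_gb:>3} GB burst: old ~{old:4.0f}s, new ~{new:4.0f}s")

Under these assumptions the new revision is actually faster for any single burst up to roughly 160GB and only loses on really enormous continuous writes, which matches the "mostly video editors" caveat.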
P.S.: Not defending this, just clarifying, because most posters here seem to believe it's a straight-up downgrade. It's also worth noting that Samsung changed the product box, product number, firmware version and the spec sheet for this change, so they're doing significantly better than the others who have pulled similar moves. That said, I still believe they should have called this the 971 Evo+ or something, as it's genuinely a different product.
> Should also be worth noting that Samsung changed the product box, product number, firmware version and the spec sheet for this change
This is hugely better than what some other SSD makers have done. ADATA, for example, has massively downgraded some drives, trying to sell them under the same name and part numbers as a popular, good-selling drive, and done so completely silently. ADATA isn't the only one. The screaming about this situation is endless on several PC part enthusiast subreddits.
What Samsung should have done is change the name and call it the "971 Evo Plus SSD" or "970 Evo Gold SSD" or some such change that distinguishes this new product, with different performance characteristics, from the actual "970 Evo Plus SSD".
But no, they want to benefit from the good name and customer perception of the "970 Evo Plus SSD" while selling a substantially different product under that name. That is fraudulent behavior!
How can they do that without reducing the overall capacity? My understanding is that part of the MLC storage in SSDs is used as an SLC cache so that it's faster, but can store only half, one third or one fourth of the data it otherwise would.
In general, there are two separate components to the SLC cache strategy (which, as you said, means writing only one bit per cell instead of 3 for TLC, because it's much faster to do so). First you have some overprovisioned NAND, the size of which varies between models; I believe it is 6 GB on this one.
Then you have what they call "Intelligent TurboWrite", which is a dynamically allocated/reallocated SLC cache (about 108 GB).
For both, the concept is broadly the same: your writes go into the overprovisioned "SLC cache" first, then into the dynamic one.
When the drive is idle, it will consolidate the writes from both caches as 3-bit writes, freeing the NAND for "SLC cache" use again. This can take a few minutes of idle time.
As you fill up your disk things get more complicated: you need to keep some free space to be able to consolidate your writes. Exactly how this controller behaves in that case I don't know, but this is an issue with every SSD that isn't full SLC, and modern controllers usually handle it much better than the old ones.
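If it helps, here's a toy model of that two-tier behaviour. The 6GB/108GB figures are the ones I guessed above, and the folding rate during idle is completely made up for illustration:

    # toy model of the static + dynamic SLC cache (figures guessed above, not official)
    class SlcCache:
        def __init__(self, static_gb=6, dynamic_gb=108):
            self.capacity = static_gb + dynamic_gb  # GB writable at SLC speed before overflow
            self.used = 0                           # GB currently parked in SLC

        def write(self, gb):
            # returns (GB absorbed at SLC speed, GB that had to go straight to TLC)
            fast = min(gb, self.capacity - self.used)
            self.used += fast
            return fast, gb - fast

        def idle(self, seconds, fold_gbps=0.5):
            # background consolidation: fold SLC contents into TLC, freeing cache space
            self.used = max(self.used - seconds * fold_gbps, 0)

    cache = SlcCache()
    print(cache.write(50))   # (50, 0)     -> fully absorbed at SLC speed
    print(cache.write(100))  # (64, 36)    -> cache overflows, 36 GB written slowly
    cache.idle(120)          # two minutes of idle frees ~60 GB of cache
    print(cache.write(50))   # (50, 0)     -> absorbed at SLC speed again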
Generally the SLC cache has almost no connection to the overall size of the drive. After finishing a huge write (for example), the controller will start to move the written data out of the SLC section and convert it into normal TLC mode, releasing the SLC space for the next round of writing. When drive usage gets higher, some drives (apparently Samsung's do) have a dynamic SLC capacity policy that reduces the available SLC space so the disk has enough room to store normal TLC data.
Consumer SSDs don't have a lot of overprovisioning. For example, a 1 TB SSD will never have more than 1 TiB of flash. Server SSDs are a different story.
> a 1 TB SSD will never have more than 1 TiB of flash
It's a bit more complicated than that. None of the quantities precisely correspond to the definitions of 1TB = 1000^4 or 1TiB = 1024^4 bytes. A "1TB" drive will have a host-accessible capacity of 1,024,209,543,168 bytes.
The NAND chips on a consumer 1TB drive will collectively have a nominal capacity of 1TiB (1,099,511,627,776), but that's more of a lower bound; the actual capacities those chips add up to will be higher. If we assume defect-free flash and count the bits used for ECC in order to get an idea of how many memory cells are physically present, then we get numbers as high as 1,335,416,061,952 bytes for our 1TB drive. If we don't count the space reserved for ECC, then we're down to about 1,182,592,401,408 bytes on defect-free flash, and 1,172,551,237,632 after initial defects (taken from a random consumer TLC drive in my collection).
So that means the SSD is starting out with about 14.48% more capacity to work with than it provides to the host system—considerably more than the 9.95% discrepancy between the official definitions of 1TB and 1TiB. Of course, that 14.48% will be reduced as the drive wears out, and the low-grade flash used in thumb drives and bargain barrel SSDs from non-reputable brands will tend to have more initial defects.
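For anyone who wants to sanity-check those percentages, it's just ratios of the byte counts quoted above:

    # ratios behind the 14.48% and 9.95% figures above
    host_visible = 1_024_209_543_168   # capacity a "1TB" drive exposes to the host
    usable_flash = 1_172_551_237_632   # flash minus ECC space, minus initial defects
    print(f"spare area: {usable_flash / host_visible - 1:.2%}")   # ~14.48%
    print(f"TiB vs TB:  {1024**4 / 1000**4 - 1:.2%}")             # ~9.95%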
I used to work in their SSD vertical on the software side. For what it's worth, the SSD/memory business unit is their pride. They never used to cut corners, and they do rigorous testing to ensure their customers aren't screwed over. Even when a bad batch of SSDs was shipped by mistake, they would proactively recall/refund all of them. Whenever a new model is released, they first ship limited units to a set of SSD enthusiasts, including gamers, in Korea. One time, a single tester reported a BSOD in some edge case -> they sent a lot of team members in person to chase down and fix that issue.
So I really doubt they would do anything stupid to get some quick bucks at the cost of reputation.
On the other hand, their smart tv division is absolute garbage and employees themselves wouldn't buy Samsung TVs.
A better contrast is the night and day difference between Samsung SSDs and Samsung SD cards. The former are among the best in class, and the latter are among the worst in class, notorious for dying mighty fast in Raspberry Pis and DSLRs (going read-only). (The best SD cards are probably from SanDisk.)
Is this anecdotal? I had two SanDisk cards die in a Samsung Galaxy phone in succession. I've never had a problem since I switched to using Samsung SD cards.
Sorry, I should have been more specific. SanDisk Extreme, in particular. It’s not anecdotal but “common knowledge” in certain circles. I believe the more recent Samsung SD cards are better, but the older ones were basically notorious.
I've been using a Samsung SD card in my OrangePi Zero 24/7 for more than a year, and it's been very stable so far.
However, nothing is as durable as SanDisk and Sony high-end cards (Extreme Pro in SanDisk terminology). My old CF card is still kicking, and a pair of 64GB cards in my mirrorless don't care what I pump into them (all of them are SanDisk).
A serious question. I use a “Smart TV” without any internet connectivity whatsoever. Never let it connect to my WiFi, neither did I connect Ethernet. This way I am happily enjoying a high quality “dumb TV” that I always wanted.
Are all the complainers on HN really letting their TVs connect to the Internet (in spite of knowing all the crap TV manufacturers have been pulling all these years)?!
My TV required connectivity during setup and then removed the config afterwards. About a week later I noticed the "home screen" showing adverts and somehow the Android TV had "auto-magically" reconnected. Had to change the WiFi password to avoid reconnecting.
That reaffirmed my stance on Smart TVs remaining dumb in my house. Netflix through the Xbox only!
My LG TV is always connected to the Internet and I don't use it as a dumb device at all; its only use is to watch a local Netflix-like service and YouTube.
I don't complain, though. Its smart capabilities are absolutely sufficient for me.
I have a smart tv and I don't like the fact that it connects to the internet...but would it be any better if I had a dumb tv and a fire stick? Honest question...I don't really know the difference.
Fire stick is probably not any better than using the apps on TV. I bought a used Mac mini, and connected it to my “dumb” TV via HDMI. I have a wireless keyboard and mouse for my TV. Netflix, YouTube, and anything available on a web browser is now available on my TV. In addition, I can use AirPlay to stream photos or music from my iPhone. This setup has been lovely.
It's better in the sense that if it starts pulling shit that goes one step too far for you, throwing a FireTV stick away is less of a sacrifice than junking the entire TV. That's my main reason for wanting to keep things separate.
> Are all the complainers on HN really letting their TVs connect to the Internet (in spite of knowing all the crap TV manufacturers have been pulling all these years)?!
Why are people complaining about Smart TVs in a thread about SSD?
Because the parent commenter said Samsung Smart TVs are so garbage that even their own employees won't buy them. Hence my comment that you can always buy a smart TV and use it as a dumb one by never allowing it to connect to the internet.
....how? There are literally hundreds of devices that will offer YouTube playback and music for as little as £30-40.
Whether it's a good idea or not is a different matter entirely, but the notion that you "have no choice" is just silly. Is someone forcing you to watch YouTube through your TV?
Is there a difference in your tv spying on you and those devices spying on you? I don’t mean this to sound confrontational or rhetorical, I really don’t know.
You can sometimes build your own Linux with Kodi on those devices. On the plus side, you're then not limited to streaming services like Netflix and can grab media from anywhere.
I do too. I bought a used Mac Mini online (any computer with an HDMI port will suffice). You get your browser on the TV. Hook up a wireless keyboard and mouse to your computer, and you can have your privacy-friendly Internet TV experience.
I meant in terms of durability and quality. We used to buy a lot of Samsung electronics (including TVs) through employee discounts, and the failure rates among the TVs were always high. So much so that we used to joke that there is an "if condition" that somehow causes the PCB to fail in the first month after the first year (TVs used to have a 1-year warranty).
My perception of Sony is that they sell higher quality products. Sometimes only slightly higher, maybe not enough to offset the price premium if there is one, but I assume all their higher end stuff is top notch.
> but I assume all their higher end stuff is top notch.
From my unscientific sample, yes.
The Sony Master series of OLED TVs (e.g. the A9) are, IMHO, most likely the best TVs you can buy at the moment.
I've seen them side by side with high-end Panasonic kit, and whilst the Panasonics put on a good show compared to your average TV, the high-end Sonys are noticeably better.
The Sony firmware also seems to behave more consistently with HDMI auto-switching than the Panasonics. Sometimes you ended up having to switch manually on the Panasonic because whatever automagic was meant to happen didn't happen.
Samsung will give updates for a maximum of 5 years. After that, security holes go unpatched and the integrated browser's root certificates (needed for HTTPS connections) go stale.
Then you have to use a Raspberry Pi or some other mini computer to get the same use out of it. And then why use a "smart" TV in the first place?
I want all the smarts to be outside my display device. If my TV is acting smarter than what’s needed to do the HDCP handshake, I’m not going to be happy.
The latest series has an issue with stuttering when watching live TV, hundreds of pages of complaints on their own and other forums, and Samsung has officially replied that yes, the issue exists, but they won't fix it. I know at least 3 people who have bought Samsung TVs in the last year and they all returned them due to this very issue.
Sony and LG flagship devices also have the same problems. I have a Bravia and our office has an LG C9 TV; the software on both is unusable. The problem is there's no other real choice in the high-quality TV market, and the experience is still better than before; having one remote for Netflix, Prime, live TV and a sound system via ARC is such a huge improvement too.
Do they? I've had an LG CX for almost a year now and haven't had a single issue with it, the interface and all the apps are certainly super fast and I wouldn't bother switching to a dedicated device, everything built into the TV works fine. And it definitely doesn't stutter like the Samsungs do. Maybe the C9 had some faults in the firmware, but it's 2 generations behind now. What do you mean by "software is unusable"? In what sense?
Samsung's pride and joy is their SSD group? I find that hard to believe… they provide a range of products, including cars (they dominate the South Korean auto market), as well as financial services (in South Korea).
In America, they have TVs and cell phones. I find it very hard to believe SSDs are their pride and joy.
Samsung does not dominate the South Korean car market. They own an important supplier in Harman and they're doing their best to sell automotive grade SKUs of their Exynos SoCs, but they don't act significantly as an OEM or anything.
The reporting in this case is a little dramatized, IMHO. As you can read here [0], they actually use the controller from the 980 Pro and denser NAND; Billy Tallis from AnandTech filled in some missing technical detail. You actually gain performance in the random 4K case, which is important in day-to-day operations, but you lose sequential write speed if you exceed the SLC cache of 115GB.
This differs quite significantly from WD silently changing the Blue drive from TLC (okay with cache) to QLC, which nets you write speeds lower than HDDs after the cache is full. Crucial did TLC->QLC on the P2 too, but you could get those drives for 30% less up until a week ago.
(edit) forgot to mention ADATA who switched the controller of their SX8200 Pro [1]
(OT) If my information is not wrong... the new WD SN550 drive still uses TLC NAND (Kioxia BiCS5, the same NAND flash as the 960GB SN350) but may have had some modification to the firmware/controller that lowers the speed.
This is massively overblown, especially the comparison to what happened at other manufacturers. Samsung announced the change, changed the packaging, and updated the specs on their site.
And they changed to a better controller (from the 980 Pro) with 4x more write queues and an improved dynamic SLC cache strategy. It is slower in one (minor) edge case but will be faster in every other one. If I had the choice I'd pick the new one, without question.
The downgrades have been prevalent across the Evo (Plus) range for a while now. I got burnt twice, on two different microSD cards -- one for a Raspberry Pi (64GB) in Jan 2020, and another quite recently for my S20 (128GB). The 256GB version was not affected by the downgrade to a lower write speed.
Initially, the drop in write speed from 90MB/s to 60MB/s was not communicated, either through a press release or via the specs on the Samsung site. However, the specs were eventually updated, with a change of letters indicating different model numbers: MB-MCxxxGA vs MB-MCxxxHA, the latter denoting a new '2020 model' with inferior write speeds.
In light of the chip shortages everyone is talking about, I've been scratching my head about all the laptops, desktops and AIOs on the market - there are millions of them, from dozens of manufacturers (or so it seems).
All soldered down but using modern chips from Intel and nVidia, same chips everyone claims are in short supply.
Who is buying these things?
What happens to them when they're not sold?
Where are all the old ones going?
Is anyone even bothering to desolder older generation chips like Intel 9xxx CPUs and nVidia 10xx GPUs and even RAM?
It has always seemed weird to me, but now that building a desktop has become more expensive than it should be it just seems insane.
The chip shortage doesn't mean no chips, just fewer. Vendors like Dell, HP, and Lenovo locked in long-term contracts with fixed pricing which enables them to still build PCs. Store shelves also show anti-survivorship bias where the stuff people want is sold out while the stuff no one wants is still on the shelves.
Yes, why does the chip shortage only affect high-end graphics cards and car chips? I would be pretty darn surprised if laptops/smartphones have fewer chips than cars, yet the former industry doesn't seem to be affected by the chip shortage even though it sells a lot more units for a lot less money, so the latter industry would be willing to pay more per chip.
I'm an uninformed consumer, but if I had to guess, chip binning plays a huge part in the availability of parts. If a production line can produce X chips per day with 20% highest quality, 30% "mid range" and 40% "low end" (with 10% waste), you'll have 2x as many low-end chips available as high-end ones.
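Just to make that guess concrete with hypothetical numbers:

    # toy binning math for the guessed split above: 20% top bin, 30% mid, 40% low, 10% waste
    daily_dies = 10_000  # hypothetical output of one production line per day
    bins = {"high": 0.20, "mid": 0.30, "low": 0.40, "waste": 0.10}
    counts = {name: int(daily_dies * share) for name, share in bins.items()}
    print(counts)                          # {'high': 2000, 'mid': 3000, 'low': 4000, 'waste': 1000}
    print(counts["low"] / counts["high"])  # 2.0 -> twice as many low-end parts off the same line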
That's one way to deal with the counterfeiting problem -- just counterfeit your own devices! It's what the "sort by lowest price first" market demands, I guess.
Yeah, I think Samsung's consumer SSDs are up there. I have two and they're great. But, there is no "sort by highest quality first" option on Newegg or Amazon, and their happy customers are already happy with their SSDs.
(This is going to digress into reviews being the "sort by quality", but I think we know that reviews aren't very good. The fundamental problem is that most throwaway reviewers don't know how to review something as complex as an SSD. So unless the option on the ecommerce site is "sort by what Anandtech thinks about this", you are probably just sorting by a random and easily-manipulated number.)
Even if they did review them well, most reviews are of samples provided by the manufacturer, and are typically performed on a single unit. If a manufacturer is willing to swap components in batches of products, they're willing to try to cheat reviewers too.
It's important for reviewers to keep an eye on what people are saying about a product. If, after they review something, user reviews don't line up, they should purchase a unit themselves and redo their testing. If the testing shows a change, then they should both shame the company and stop accepting review units from that division of the company, so they can be sure they're not getting hand-picked samples.
In an ideal world, yes, but I don't think I've ever seen reviews done like this. Rtings purchase their own units but would still run afoul of components being swapped within a product line. Reviews of live-service video games are rarely updated unless something drastic changes (see the Destiny 2 issues where they pulled old content from the game...)
They do have good R&D and some pretty nice technology, that side of the business is solid, but their product divisions will frequently cut some pretty severe corners to get stuff out on time regardless of whether it's ready or not.
In the last years Samsung were the ones to beat in the SSD space. Higher price and higher performance, which can be measured but seldom felt in day-to-day operations, so I opted to save some bucks and bought Crucial SSDs.
With the introduction of QLC consumer SSDs, with worse write endurance and lifetime but the same price as TLC, I took a look at used enterprise SSDs and never looked back.
The only issue is the connectors. You need 8x PCI-E lanes (4x might work) or a SAS connector, which nearly no consumer board has.
I got myself a used pm1725b (PCI-E v3 8x) with 3.2TB for 400€.
It had ~350TB written, and according to Samsung it is rated for 3 full device writes per day for 5 years = 17280 TB. So I have some wriggle room left.
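For anyone wanting to sanity-check endurance figures like that, it's just DWPD x capacity x days; the exact total depends on whether you count 360 or 365 days per year:

    # back-of-the-envelope endurance check for the drive above (3 DWPD, 3.2 TB, 5 years)
    capacity_tb, dwpd, years = 3.2, 3, 5
    tbw_365 = capacity_tb * dwpd * 365 * years   # ~17520 TB with 365-day years
    tbw_360 = capacity_tb * dwpd * 360 * years   # ~17280 TB with 360-day years
    written_tb = 350                             # what the drive above has seen so far
    print(f"endurance used so far: {written_tb / tbw_365:.1%}")  # ~2.0%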
They use lots of capacitors to hold written data in DRAM and flush it out to flash on a sudden power-down. So they can optimize write operations for speed and low flash wear. Write speed and endurance are the areas where consumer SSDs suck more and more.
Annoyingly, many sellers do not list the amount of data already written. But I think that is more out of ignorance than an attempt to dump worn-down devices.
I would not trust a cheap Chinese PCI-E-to-U.2 adapter, and used SAS cards get kind of hot and need active cooling. So why not just go directly for a PCI-E SSD?
QLC is complete trash and should never be used outside of a WORM architecture in a larger pool. That they are trying to force it on consumers is criminal. Same with SMR.
>QLC is complete trash and should never be used outside of a WORM architecture in a larger pool.
Not everyone is a creator editing/rendering 4k videos. For the typical surfer/office worker/gamer QLC drives are fine because they rarely need to dump 50+GB of writes at a time. What you're suggesting is basically "ban all atom processors because they're criminally slow!"
QLC also has shitty slow write speeds (slower than linear writes on spinning rust!).
So consumer SSDs now depend on a limited, dynamic part of the flash being used as SLC to cache writes. The catch is that this dynamic part generally gets smaller the fuller the device is.
And as soon as you do anything with large writes you hit a brick wall of slowness, on full devices sooner than on empty ones.
Yes, many consumers will be fine with these drives.
But the limited write cache never gets mentioned by the manufacturers. So people who actually need to write large amounts of data from time to time fall into a performance trap.
The Samsung 980 is rated for 0.3 DWPD. For a 250 GB drive that's 75GB per day. I have a hard time imagining what type of workload would cause the target audience (i.e. surfers/office workers/gamers) to reach that limit on a consistent basis.
It's worse than that. Most people use their drives in ways that do run them out of SLC cache, and most QLC drives are low end and don't come with DRAM. The systems they're put in are usually equally low end, with little system RAM, so the OS is going to page a lot. QLC has atrocious write endurance, so these devices are eventually going to wear out and either die or go read-only. QLC devices in consumer electronics are essentially just e-waste and a waste of wafers. SLC->PLC is a case of diminishing returns in terms of storage per area. I do actually think we need to attach warning labels to QLC drives/devices. Same for SMR HDDs.
> Host managed SMR is fine for data-hording if the savings in size/price are forwarded.
I'll believe it when I see it, though.
Unless you're trying to cram more data into a 2.5 inch drive, there seems to be zero upside to getting an SMR drive. They're priced about the same per byte despite "25% density improvements", and the high capacity 3.5 inch models on the market don't use it.
> and the high capacity 3.5 inch models on the market don't use it.
Sure they do. WD sold device-managed SMR 3.5" disks without telling anyone. And then people noticed because of absolutely terrible RAID resilvering performance.
The consumer market is absolute trash for SMR devices, all the disadvantages with no benefit. But you can be sure that the enterprise sector gets host managed SMR drives at a better size/price ratio.
>Host managed SMR is fine for data-hording if the savings in size/price are forwarded.
Exactly. When you architect your tiered data storage solution to take advantage of technologies such as SMR/QLC it works great. This is absolutely not the case in consumer applications. Consumers will use QLC for their boot drives and fill their NAS boxes with SMR drives and have a horrible experience.
Linus and Luke (of LTT) had a brief discussion about this on yesterday's WAN Show [1]; might be somewhat relevant. They mention Intel and SK Hynix as well.
That will always have at least some very minor impact on power/reliability. The difference here is that it has a major impact.
See also: Apple sourcing 4G modems from both Intel and Qualcomm, with the performance being quite different, so they disabled features of Qualcomm's chip to make them similar.
No, not really. The whole point of buying Samsung SSDs is that they're guaranteed to contain a Samsung controller and Samsung NAND. Likewise with Hynix or WD/SanDisk.
Something doesn't line up here. The "old" model number in the Ars piece is given as MZVLB1T0HBLR, which corresponds to a PM981a according to Samsung's website. Was that ever sold as a 970 Evo Plus?
IANAL, but I believe that as long as they reach their advertised speeds, they're fine. Alternatively, you would need to prove malicious intent to game the benchmarks; but I can see Samsung making the argument that "we had to change to a different $supplier because of the chip shortage, we incremented the revision number accordingly, and we still reach the advertised speeds".
I wonder if there's scope for dealing with this sort of situation in trademark law.
That is, maybe we could make the law say "if you want to be able to use the courts to limit the use of your brand name, you have to follow certain rules about when you can use the same model name for different products".
This story (different publisher) was linked here yesterday; the comments I saw were all "This must be bullshit, Samsung would only inline upgrades", etc.
Imagine trying to ship a hardware product and then having these changes snuck in. Functionally it's no different from counterfeiting for companies with these SSDs in their BOM. Just plain fraudulent.
All the articles I can find are just "seemingly" reporting: none confirmed, almost all identical, all based upon one user, like the article from this post.
"Dear customers: Due to the current chip shortage we see ourselves in the position to have to swap the controller with one of lesser value which has degraded write speeds, they are 22% slower. Please consider this when modifying your raid setup and treat them as different products. Our pricing has been reduced to reflect this change."
vs
Let's change the controller and hope nobody notices.
The controller isn't slower in general though. I see it more as a problem of review sites not tracking revision numbers. It still reaches the advertised claims on the box. If you want to make decisions on information the manufacturer doesn't provide to you it is your responsibility to check that you are comparing the right revisions.
That's basically what is being done by auto manufacturers. If you buy certain car models right now, you will get a manually adjustable steering column and a promise to install the motors later.
That second part is what's important. Downgrading the feature without notifying the customer and arranging to correct the shortfall later is pretty terrible.
Which is what WD are doing, having just been caught changing parts in a way that impacts performance.
> A Western Digital spokesperson confirmed to Ars that the company had replaced the NAND flash and updated the firmware in the WD Blue SN550 beginning in June 2021 and updated the drive's data sheet to reflect the changes. "For greater transparency going forward, if we make a change to an existing internal SSD, we commit to introducing a new model number whenever any related published specifications are impacted"
I work in hardware design. Almost all ICs are in very, very limited supply. In our own case we had to move from a 512K MCU to 256K simply because lead times on the 512K version are now out to Dec 2022.
I'm really surprised this hasn't bubbled up to normies yet. I suspect this Christmas is going to be a shocker.
That is both unnecessarily rude, and missing the point the parent post is making.
Companies want to use whatever they can get that's vaguely compatible with the current design. Because it's that or selling nothing for 12 more months.
Sure. But that is not an excuse for misleading the customer. Couldn't find any components which perform equally well? Then it's a different SKU. Slap a suffix on the name, adjust the price as needed, and don't mix inventory.
[0]: https://www.techpowerup.com/286008/et-tu-samsung-samsung-too...