Intel Refreshes 2nd Gen Xeon Scalable, Slashes Prices (wikichip.org)
149 points by rbanffy on Feb 29, 2020 | 130 comments


I'm sort of amazed they haven't dropped prices more; I guess they can still charge a premium to anyone who wants their AVX512 (? I think that's it) performance to be as high as it can be. Otherwise, most of these processors have already had an epyc shit taken on them.

I was just again pricing processors and if that 7302 isn't the real killer in the Epyc lineup I dunno what is. It craps on pretty much anything and everything Intel has, and is only about $1100 right now (pricing is inflated, but meh).

It's pretty crazy how cheap hardware is compared to the cloud these days; sure, it's not in a data center or if it is that's a headache of its own... but it is really fucking cheap. I personally think a lot of mid-sized orgs could benefit from moving a lot of their non-production environments to an on-prem server. The VPN is probably already locking people out, causing connection headaches anyway, so it's not like you're gonna have less connectivity than you did ;-)


Public cloud is a giant scam to make sure 80% of VC money trickles back up to the big three. Most of these startups in Soma could run their product on a single threaded C++ server. But that's not cool and trendy. You're supposed to pretend that you could suddenly become Google overnight, so you need The Cloud(TM), microservices, and all kinds of redundant garbage.


Great. Show me how fast you go to market with a web application and mobile backend written as a single threaded C++ application. Also please tell me how long it took to secure and set up and maintain the server.

Startups do, and always should, focus on getting to market faster so they can get feedback from real customers.

I will take going to market faster over saving a few dollars on server costs every time I'm given the choice.

Mature products' and companies' use cases are more nuanced.


>Show me how fast you go to market with a web application and mobile backend written as a single threaded C++ application. Also please tell me how long it took to secure and set up and maintain the server.

Let's not go crazy, Apache on metal is a very simple setup and at least as secure as S3 out of the box. Platform spread and simplicity are just lost concepts, that's all.


I'm not sure it'd even be possible to run Apache on bare metal; it has too many dependencies on OS facilities.

I'm curious if people are running bare metal web servers. I'd think there's enough going on (lookups, modules, etc.) that it wouldn't be worth it (small embedded applications like IoT frobs excepted).


Bare metal just means not a VM. The machine still has an OS.


No, it really does not mean that.


https://us.ovhcloud.com/dedicated-servers/

> OVHcloud Best Value is a great way for you to experience the advantages of bare metal over virtual servers at an unbeatable price. Our Best Value bare metal servers feature the most stable environment, making it perfect for processing large volumes of data.

The terminology is "bare metal" when buying dedicated servers these days. Some places still use the term "dedicated", but since the advent of "the cloud", the term "bare metal" has caught on, since most instances are ASSUMED to be VPS.


That doesn’t even make sense. Wouldn’t that be “non-virtualized”?

“Bare metal” has meant right on the bare hardware (silicon) since the 1960s. I don’t think I’ve ever heard anyone use that expression in any other way.


Not really a cloud developer, so I looked up a couple of links and I haven't been able to find any uses of the term other than referring to a single-tenant machine that you have complete control over, as opposed to virtualized solutions.

https://en.wikipedia.org/wiki/Bare-metal_server https://www.ibm.com/cloud/bare-metal-servers/hosting-solutio... https://www.ionos.com/digitalguide/server/know-how/bare-meta... https://www.rackspace.com/library/what-is-a-bare-metal-serve...


For web, yes. For embedded systems, "bare metal" means running without an OS.


That’s both correct and irrelevant because this entire thread is about running Apache, which is clearly about the web, considering Apache is a web server.


You know what's funny? I'd be excited to see Apache run embedded on hardware.

This comment is claiming way too many victims. Shame, it'd be a rather exciting experiment!


Bare metal is often used to mean using a CPU instead of a vCPU. But I can see why you would interpret it the way you did.


I disagree with the parent that the cloud is a scam. But I also disagree that getting to market faster is opposed to the operational simplicity that he's talking about.

Startups are very likely to fail before they need to scale. So that single-threaded C++ server, or whatever it is that the existing engineers are fastest at creating, is fine to get out there and get early market feedback.

The way I always balance this is to make the product manager put scaling and redundancy stories in the queue with feature stories. We'll build with scaling and redundancy in mind, but we don't do any extra work until the business decides those things are a higher priority than testing new hypotheses. And then I have them buy it in increments, so we can add automated load tests as we go. Then if we were to get into failwhale territory, it'd be a decision we all made together.
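For what it's worth, the "automated load tests as we go" part doesn't have to be heavyweight. Here's a rough sketch of the sort of thing I mean, using only the Python standard library; the URL, request count and concurrency are placeholders, not anything from an actual product:

    # Minimal load-test sketch (Python stdlib only). URL, request count and
    # concurrency are placeholders -- point it at whatever endpoint you're testing.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/health"   # hypothetical endpoint
    REQUESTS = 500
    CONCURRENCY = 20

    def one_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=5) as resp:
            resp.read()
        return time.perf_counter() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            latencies = sorted(pool.map(one_request, range(REQUESTS)))
        print(f"p50={latencies[len(latencies)//2]*1000:.1f}ms "
              f"p95={latencies[int(len(latencies)*0.95)]*1000:.1f}ms")

Run something like that against staging as part of CI and you get a cheap early warning when a change blows up tail latency, without committing to any scaling work up front.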


Strange, somehow that is exactly what we were doing in 1999, using Apache modules on UNIX.

We had a business within a year and were acquired by a major Portuguese company; with some luck, the company survived the crash and exists nowadays as Altitude Software.

Some of the founders of that startup left after a couple of years and founded Outsystems.

No cloud, no scripting languages with kubernetes scaling to fix lack of performance, no containers, no internet scale document databases.

Just pure C and C++, with a bit of TCL thrown into the mix, and Apache.


And yet if you were to go out today and build the exact same product in the exact same way it wouldn't succeed. Strange.


Naturally, because it wouldn't fit the fashion driven buzzwords of today's Silicon Valley copy-cats.


Says who? Or is the only metric of success in your book the publishing of buzzword-filled "engineering" blog posts, parroting the endless nonsense coming from similar "start" "ups"?


Where does snap's $3 billion cloud spend land on this spectrum?


A lost bet. Their user growth has stalled in the past few years; they could have moved at least some parts off Google Cloud without losing scalability. On the other hand, it could have gone the other way around and they could actually have needed all the scalability cloud services offer. From what I have heard, they messed up their interface some time after their IPO and that really hit them in the gut.


I’d be interested to see the startup that’s spending 80% of its budget on cloud. In my experience most of the startup budget goes to payroll, which seems like the sort of thing an anti-large-business person should appreciate.


> Startups do, and always should, focus on getting to market faster so they can get feedback from real customers.

Absolutely. But are you telling me that a $5 p/m LAMP stack from a basic web hosting company (I'm a Microsoft dev so not sure what the FoM in Linux-land is) isn't worth it as a first step? That you have to use a bunch of bells and whistles from AWS or Azure or something?

And, if you're in the MS camp, surely a $10 p/m IIS/SQL hosted option is much easier than anything the big three have...

I'm not seeing the value proposition of cloud here...


You don't even need C++. Even with Python, Node or PHP, a basic MySQL and Redis installation on a simple Debian server with a little tweaking, you can serve huge amounts of dynamic traffic with a $50 dedicated server from Hetzner.

It'll certainly take less time to manage than Kubernetes, that's for sure.
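To make the Python + Redis part concrete, here's a rough sketch of the read-through-cache pattern that carries most of these single-box setups. It assumes the redis-py client and a local Redis; the "article" and MySQL bits are stand-ins, not anything specific:

    # Rough sketch of a read-through cache on a single box: Python + Redis + MySQL.
    # Assumes `pip install redis` and a Redis instance on localhost.
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def fetch_article_from_db(article_id):
        # Stand-in for the real MySQL query.
        return {"id": article_id, "title": f"article {article_id}"}

    def get_article(article_id, ttl=60):
        key = f"article:{article_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)              # cache hit: no DB round trip
        row = fetch_article_from_db(article_id)    # cache miss: hit MySQL once
        r.setex(key, ttl, json.dumps(row))         # keep it warm for `ttl` seconds
        return row

The point is just that with most reads served straight out of RAM, a single box goes a very long way before you need anything fancier.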


The last thing you want is to architect your backend for a single hetzner server and then find out that it's not enough and have to rewrite everything. If your entire goal as a VC funded startup is to "get big or die" then optimizing for a single server is a waste of resources.


Totally disagree. When you're at the point where a $50 server won't be enough, you still have a long way to scale, just by adding more hardware.

> optimizing for a single server is a waste of resources

No, I think you should spend your time wisely and actually build your product. I don't think you should spend a huge amount of time "optimizing for a single server" (whatever that means), just that a single server can be enough for a large amount of traffic.

Many startups go all-in on complex architecture, microservices and Kubernetes from the get-go (or start splitting their monolith or whatever before it's necessary) and lose a huge amount of time setting all this up (when only Netflix-sized companies really need it) at a time where they should've focused on building their product.

The point here is that you can easily scale most apps in small steps without having to indebt yourself with complex architecture from the get go, which requires having to spend huge sums of cash on sysadmins and AWS bills that don't benefit your users.


The C++ example is indeed a bit too much. But setting up a Debian instance with nginx, Python, Node.js and a bit of security would probably take one evening at most.

For startups I personally believe in a more hybrid solution: I would use a cloud provider and manage the VMs myself. For 100 dollars a month you can run 4 VMs and a load balancer, which is perfectly fine for even most scaleups. AWS, GCS and also Azure are money pits; to me these companies resemble the Oracle kind of company. The same, by the way, goes for all these tooling companies like HubSpot, Salesforce etc. Attractive in the beginning, but bloodsuckers after you become bigger.


Actually you run make, make install and copy the binary to a server. That would be pretty fast and quite simple to operate.


That's pretty much how it's done with Go.


> Public cloud is a giant scam to make sure 80% of VC money trickles back up to the big three.

This has to be hyperbole, right? In any startup, labor and rent are the dominant costs, not compute and other SaaS products.


Marketing is supposedly a large part of spend too, or should be, from what I've heard. It doesn't matter what you're building if no one has ever heard of it, and I've heard some companies spend huge sums on marketing alone.


Labor is usually some multiple of rent -> the big cost is labor.


But but but ... running real servers requires that grouchy dude with scruffy facial hair and mountain boots. And he expects to be paid an absurd amount of money. I avoid these problems if I just put it in the cloud. </sarcasm>

This is just like when they got rid of secretaries. The work didn't go away, it just got moved to all the peons.

> Most of these startups in Soma could run their product on a single threaded C++ server.

Most could be run on a single threaded Python server.


And really it's a false dichotomy. You don't have to build your own data center.

Renting managed, dedicated servers is very inexpensive compared to cloud.


I’m on a very tight time budget for my multiplayer game. If I had to set up servers myself on bare metal, I would be making a single player game instead just due to time. The cloud is nice for some situations


Cloud is not just hardware though. It's prebuilt infra that does all the boring parts for you. Yes, hardware is one component, but I would argue it's less significant than the software.


Using the current inflated AMD price, you could get a 20-core Intel Gold 5218R for $1273. Even with the official pricing of the 7302 closer to $900, you are paying only ~$370 for Intel's 4 extra cores and higher turbos. Not to mention you still get overall higher IPC on Intel's part.

And once you factor in the price of the whole system with ECC RAM, SSD, network adaptor, etc. (which you get at a discount on Intel's side), the whole package cost isn't so much in favour of AMD.

So I don't see what the real killer is here. Intel (and arguably AMD) are still selling as many as they can make. EPYC 2 was announced in mid-2019, and it has been 6+ months since it launched, and yet Intel is still making record datacenter revenue, with EPYC making minimal gains on an already dismally small base number. I.e. even with a 100% increase in shipments from a base market share of 1%, it would still only represent 2% of the market.

With the launch of a competitive notebook APU (notebooks represent 70% of today's PC market shipments), the assumption of more EPYC orders from the four hyperscalers, the launch of two new consoles, ray-tracing GPUs and new GPGPUs, AMD is forecasting 30% YoY revenue growth. Different people may have different perspectives on this number, but I really don't see any "real killer" here.

And that is speaking as someone who really wants to see AMD grow a lot more, but the data suggests otherwise.
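(For anyone who wants the per-core arithmetic in one place, here it is as a tiny sketch; the prices are the ones quoted in this thread and will obviously drift.)

    # Per-core price, using the figures quoted above (20-core 5218R at $1273,
    # 16-core 7302 at roughly $900). Treat the numbers as a snapshot.
    chips = {"Xeon Gold 5218R": (20, 1273), "EPYC 7302": (16, 900)}
    for name, (cores, price) in chips.items():
        print(f"{name}: {cores} cores, ${price}, ${price / cores:.0f}/core")
    print(f"Premium for Intel's 4 extra cores: ${1273 - 900}")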


You need to also take into account that datacenters won't change hardware every cycle release, so Epyc needs more time to gain market share.

If you're doubting the value and performance, you must be blind.


>You need to also take into account that datacenters won't change hardware every cycle release, so Epyc needs more time to gain market share.

Define "change"? EPYC has been sampling to all the hyperscalers since early 2019, and Google only released an EPYC instance days ago.

>If you're doubting the value and performance, you must be blind.

I already gave a value analysis, including AMD's own projected growth. I will leave others to comment on the other aspects of your comment.


Yes, the hyperscalers who received early samples all rolled out EPYC deployments - but that was still a small amount of the market.

The simple fact is the server market moves slowly - server roll outs are planned a year in advance, so you need to be patient.


A single Xeon has only 48 PCIe lanes, which normally is not enough. So you should really be comparing the price of a dual-Xeon system with the equivalent single-socket AMD solution.


> It's pretty crazy how cheap hardware is compared to the cloud these days

By “the cloud” I’m assuming you mean aws/gcp/azure, and comparing them simply with on-prem is a false dichotomy. There are plenty of other cloud and bare metal hosting providers who actually pass on the savings as hardware value improves.


> There are plenty of other cloud and bare metal hosting providers who actually pass on the savings as hardware value improves.

It would help them and us if you would list them. :)


Not GP, but OVH Cloud is really cheap. Some people have had issues with OVH reliability, but I've only had success. I use AWS for prod, and a mix of Digital Ocean and OVH for non-prod.


OVH bare metal is what we use when we need cheap raw horsepower. Mentioning them is usually controversial, which is why I refrained in my parent comment, but we’ve had nothing but success.


OVH, DigitalOcean, Hetzner (for EU).

Currently renting a dedicated 48-core + HT (so 96 virtual cores), 64GB RAM, 2TB SSD, 10Gb WAN box for ~$500/month as a build server.

That would cost 10x on AWS/Azure/GCP.


They could benefit if they looked at infrastructure as a profit center instead of a cost center, and paid/outfitted their IT staff appropriately. But most won't, and AWS/Azure/Gcloud/etc is a way of offshoring infrastructure to those that do view it as a profit center. In some ways it's positive, as non-technical leadership is coming around to the fact that they don't do technology that well, and can still look modern/ahead of the curve at conferences by using "the cloud".


>have already had an epyc shit taken on them

I hope to be able to come up with amazing wordplay like this one day.


$1,000 in CPU is nothing when in one server you are spending $32,000 on RAM and even more on disks.

Plus, if I have 16 more cores I'm just going to buy even more of those two; sometimes I can't fit any more in the server, so I couldn't increase my density even if I wanted to.


I'm not sure why you need 3 Terabytes of RAM in every server. Even if you were to replace every single application on your OS with a JVM based one (including things like bash) you wouldn't meaningfully hit that limit. If you have an application that does indeed need many Terabytes in the same address space then you probably have a very small number of servers. No one runs a 1000 node cluster where each node has 3TB RAM.


As a counterpoint, look at some of the supercomputers mentioned recently here on HN that are coming out with even more RAM than your 1000x3TB (3PB).


Isn’t the intel 6208U a strong competitor to the AMD 7302? At the same price and TDP it has higher clock speeds and a unified memory domain, compared to the AMD 4-way NUMA architecture. It seems like you can make a case for either, depending on your workload.


The AMD Rome chips (including the 7302) behave as one NUMA node, I thought (and that's what I can find online). You also get quite a lot of PCIe 4 as a bonus and a higher all-core base frequency. Though your mileage may vary depending on workload, as you already stated.


AMD's latest CPUs are one NUMA node per socket.

Here's the 64-core Threadripper's core to core communication latency: https://pbs.twimg.com/media/EQXru3WU8AAV3JC?format=png&name=...

Communication within a CCX is quicker, but everything else goes through the central IO die that has all the DRAM controllers.


What is core-to-core communication?

Cache is shared by the cores, but may be temporarily "assigned" to a core that recently wrote to it. Is latency(x, y) the number of cycles to reassign to x a cache page owned by y?


> Cache is shared by the cores,

Not really. All three levels of cache are split on Rome. L1 and L2 are per-core, and L3 is per-CCX (4 cores). If you have 1 thread with a working set larger than the 16MB L3 slice that each CCX gets, then you'll be hitting DRAM rather than spill over into the L3 of another CCX. But if you have cores on separate CCXs that are using the same region of memory, then the usual cache coherency semantics for separate chips applies.

The next version of AMD's Zen architecture is expected to increase the CCX size to 8 cores, so all 32MB of L3 on an 8-core chiplet will be unified and shared between all 8 cores, rather than being partitioned into two 16MB per-CCX chunks. I don't think it's practical for them to unify the L3 cache across multiple chiplets given the performance of their inter-die connections, and I don't think they have the die space on the central IO die for a fully unified L4 cache. (Shrinking the IO die to 7nm may make it possible to have some L4, but probably not enough to really help many workloads.)
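If you want to see that partitioning on a live Linux box, the kernel exposes it in sysfs. Rough sketch (Linux only, standard sysfs paths, nothing Rome-specific):

    # Group CPUs by which cache they share. On a Zen 2 part you should see one
    # L3 per CCX (shared by 4 cores / 8 threads), while L1/L2 stay per-core.
    import glob
    from collections import defaultdict

    def read(path, name):
        with open(f"{path}/{name}") as f:
            return f.read().strip()

    caches = defaultdict(set)
    for c in glob.glob("/sys/devices/system/cpu/cpu*/cache/index*"):
        label = f"L{read(c, 'level')} {read(c, 'type')} ({read(c, 'size')})"
        caches[label].add(read(c, "shared_cpu_list"))

    for label, sharers in sorted(caches.items()):
        # Each distinct shared_cpu_list is one physical cache instance.
        print(f"{label}: {len(sharers)} distinct caches")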


> L3 is per-CCX (4 cores). If you have 1 thread with a working set larger than the 16MB L3 slice

Still, 4MB per core is a lot more than the paltry 1.3MB Intel's 9282 offers.


That’s an incredibly useful table! Do you know where the data in that table came from?



It's more complicated than that. There are still die-local memory controllers, but the penalty for remote access is vastly lower than earlier Epyc models — so much so that you plausibly could run your workload with naive UMA memory access and be just fine. AMD's ad copy says it's UMA, but really that's just marketing spin on improved remote memory perf.


FWIW the latest Xeons (Cascade Lake) have the option of two NUMA nodes per socket available in the BIOS.


You can configure it in different ways in a BIOS but the physical reality remains that it is NUMA and some accesses take longer than others.


You're either talking about cache latency, or still talking about first-gen EPYC/Threadripper rather than the current generation codenamed Rome. On a cache miss, all chiplets on a single-socket Rome system have roughly equal latency for a DRAM fetch, regardless of which physical address is being fetched. Any differences are insignificant compared to inter-socket memory access or fetching from a different chiplet's DRAM on first-gen EPYC. And even if you wanted to treat each chiplet as a separate NUMA node, 4 isn't the right number for Rome.


"And even if you wanted to treat each chiplet as a separate NUMA node, 4 isn't the right number for Rome."

You can configure Rome systems with 1, 2, or 4 NUMA domains per socket (NPS1, NPS2, or NPS4, where NPS == "NUMA per socket".) Memory bandwidth is higher if you configure as NPS4, but it exposes different latencies to memory based on its location.

It's really impressive that you can get uniform latency to memory for 64 cores on the 7702 chips (when configured as NPS1).

https://www.dell.com/support/article/en-us/sln319015/amd-rom...
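A quick way to see what a given NPS setting actually hands to the OS is to dump the node list and the distance matrix Linux builds from those firmware tables (the same information "numactl --hardware" prints). Rough sketch, Linux sysfs only:

    # List NUMA nodes with their CPUs and the firmware-reported distance matrix.
    # Under NPS1 you should see one node per socket; under NPS4, four per socket
    # with non-uniform distances between them.
    import glob, os

    nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"),
                   key=lambda p: int(p.rsplit("node", 1)[1]))
    for node in nodes:
        with open(f"{node}/cpulist") as f:
            cpus = f.read().strip()
        with open(f"{node}/distance") as f:
            dist = f.read().strip()
        print(f"{os.path.basename(node)}: cpus={cpus} distances=[{dist}]")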


The underlying hardware reality is that the IO die is organized into quadrants instead of being a full crossbar switch between 8 CCXs and an 8-channel DRAM controller. Whether to enumerate it as 1, 2 or 4 NUMA domains per socket depends very much on what kind of software you plan to run.

Saying that memory bandwidth is higher when configured as NPS4 probably isn't universally true, because that setting will constrain the bandwidth a single core can use to just effectively dual-channel. For a benchmark with the appropriate thread count and sufficiently low core-to-core communication, NPS4 probably makes it easiest to maximize aggregate memory bandwidth utilization (this seems to be what Dell's STREAM Triad results show, with NPS4 and 1 thread per CCX as optimal settings for that benchmark).


I was responding to your claim that "And even if you wanted to treat each chiplet as a separate NUMA node, 4 isn't the right number for Rome", which was incorrect. 4 is one of the three possible options for the number of NUMA domains.


How does 4 nodes let you treat each of the 8 chiplets as a separate NUMA node?


Your comments about Rome are completely incorrect. There are four main memory controllers in this architecture and some of them are further from some CCDs than others. In the worst case, accessing the furthest-away controller adds 25ns to main memory latency.

You can put this part in "NPS1" mode which interleaves all channels into an apparently uniform memory region, however it is still the case that 1/4 of memory takes an extra 25ns to access and 1/2 of it takes an extra 10ns, compared to the remainder. Putting the part into NPS1 mode just zeroes out the SRAT tables so the OS isn't aware of the difference.

But don't take it from me. AMD's developer docs clearly state, and I am quoting, "The EPYC 7002 Series processors use a Non-Uniform Memory Access (NUMA) Microarchitecture."


> AMD's developer docs clearly state, and I am quoting,

Please quote something that's unambiguously supporting your claims. What you've quoted is insufficient.

What I said about a single-socket Rome processor is not "completely incorrect" under any reasonable interpretation. The latency and bandwidth limitations in moving data from one side of the IO die to another is much smaller than the inter-socket connections that were traditionally implied by NUMA, or the inter-chiplet communication in first-gen EPYC/Threadripper.

If you want to insist that NUMA apply to even the slightest measurable memory performance asymmetry between cores, please say so, so that we may know ahead of time whether the discussion is also going to lead to esoteric details like the ring vs mesh interconnects on Intel's processors.


If you're not sensitive to main memory latency, just say that. Don't try to tell me that 25ns is not relevant. It's ~100 CPU cycles and it's also about 25% swing from fastest to slowest.


Intel's server/workstation CPUs have had 2 memory controllers during the last several generations, so even if the whole CPU is seen as a single NUMA node by the software, the actual memory access latency has always been different from core to core, depending on the core position on the intercommunication mesh or ring.

So what ???

The initial posting was about the CPU being seen as a single NUMA node or as multiple NUMA nodes by the software, not about having equal memory access latency for all cores, which has not been true for any server/workstation CPU, from any vendor, for many, many years.


You are referring to the last generation chips ...


It would be for me, but it's a single socket only processor. I like the 7302 specifically for the non-P variant. If I was going to stick to just one socket, I'd probably spend a bit more and go with the entry level Threadripper 3960x...

It's a nice looking processor though and probably the only one worth a damn in that line up.


I haven't seen a single review of the 6208U/6209U/6210U or anywhere that has them in stock, they might as well not exist.


Launch-day reviews are pretty uncommon for server processors, especially mid-cycle refreshes that don't introduce any fundamentally new tech. And retail stock the same week as the announcement is also not how this market segment usually works.


AMD doesn't support 4 way, nor do they support (well) ECC. Intel still has a huge advantage for serious/critical use cases.


It doesn't because it's not necessary when you can get 128 physical cores with two sockets. Do you really need 256 cores on a single board? If you do, wait a year or so and there will be 128-core packages available.

They also support ECC. What's not "well" about EPYC?


Also sort of interested in this comment. It can be difficult to make ECC useful. There's chipkill vs SECDED, for starters. On paper, EPYC Rome has chipkill. More important than paper features is integration with the board firmware and the OS kernel ... Linux RAS features are quite useless if the kernel fails to notice the errors. Whether this stuff is well-integrated depends a lot on your vendors.


An occasional 1 bit correction is very common compared to chipkill, so there is a huge benefit to ECC without chipkill. In fact, with 1000s of servers, I've never had chipkill give me any benefit. I guess I'm too small to see the effect from chipkill. But yes, I do see 1-bit corrections.


Yeah, not advocating for chipkill, but the OS has to know how to interpret the machine check syndromes, is all I was getting at. This has been a problem for me on Skylake-SP with Linux, to name one.


I've always had to go out of my way to find single-bit correction numbers in Linux. I suspect that once you find that, noticing a chipkill event is pretty easy. But I've never seen a chipkill event, despite having a lot of DRAM for a long time.
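For reference, the place Linux puts those counts is the EDAC sysfs tree. A small sketch of where to look (this assumes the EDAC driver for your memory controller is actually loaded; otherwise the directory simply won't exist):

    # Dump corrected (CE) and uncorrected (UE) ECC error counts per memory
    # controller from Linux EDAC sysfs.
    import glob, os

    def read_count(mc, name):
        try:
            with open(os.path.join(mc, name)) as f:
                return f.read().strip()
        except FileNotFoundError:
            return "n/a"

    mcs = sorted(glob.glob("/sys/devices/system/edac/mc/mc[0-9]*"))
    if not mcs:
        print("no EDAC memory controllers found (driver not loaded?)")
    for mc in mcs:
        print(f"{os.path.basename(mc)}: "
              f"ce_count={read_count(mc, 'ce_count')} "
              f"ue_count={read_count(mc, 'ue_count')}")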


Too little, too late.

After years of ruthlessly milking us I just hope they lose a big chunk of market share and become equals with AMD. The consumers can only benefit from that.


I whole-heartedly welcome AMD's new offerings and have completely switched to buying servers with AMD CPUs, but I have to disagree with the characterization of "ruthless milking" by intel. There was no instituted monopoly here. Others were free to compete, but they failed. Intel's high prices motivated AMD to create something better and now they have. If anything, the high pricing was a screwup on intel's part and good for the consumer. Intel could have better protected their lead by charging less (but not as little as now) to discourage competition. I'm glad they didn't, even though it cost me for the last few years. Now that there is a valid competitor again, maybe we will see better conformance to Moore's Law.

Edit: To clarify, I will not argue that intel has a clean record of competitiveness. My point is that they didn't "ruthlessly milk" the consumer with high prices. Instead, they stuck a knife in their own back by charging excessively high prices without sufficiently innovating. This created an opportunity for AMD and motivated them to create something better.


> Others were free to compete, but they failed.

This is...not accurate. Extensive cross-licensing agreements between AMD and Intel formed over the years are the only thing that permit AMD to compete. New competitors are effectively locked out of the x86-64 market entirely at this point. There is a decided lack of freedom to compete in this market at the moment.

Intel has also been found to have illegally engaged in anticompetitive measures on a number of occasions in order to lock AMD out of competing in many market segments.


And it doesn't require an overly suspicious mind to note the per-core performance margin Intel were able to claim for most of the last decade is pretty much the drop in their processor performance when you disable all the security holes.


I've said it before, the whole speculation-exploit mess seems like chip design's answer to the real-estate-securitisation highjinks which caused the last financial crisis. https://news.ycombinator.com/item?id=16105385 And that's not an analogy which puts all the spotlight on Intel, by any means.

> I'm finding it hard to escape the suspicion that Spectre and (to a lesser extent) Meltdown is another failure of expertise on the same rough scale as the run-up to the financial crisis, the decades of bad or poorly-justified dietary advice, and the statistical problems in experimental psychology. On the face of it, it seems obvious that speculation + cache + protected mode was a combination likely ripe for exploitation, but the response seems to have been "nah, it'll be fine, probably"? And even if it for some reason wasn't obvious, it's now clear that it actually was the case. So the academic and industrial and bad-boy "security community" collectively more or less let the CPU manufacturers take a flyer on this for, what, a decade?


Sure, there's an imperfect competitive framework in place. It can be improved. Regardless, AMD failed to compete effectively in that framework over the last few years. The fact that AMD now has a superior offering attracting significant orders implies that competition is possible in spite of the unfairness. AMD probably would have done much better previously if they had produced better products.


So maybe it's time to create a successor to x86-64. Nobody is preventing that from happening.


Depending on what qualifies, ARM is there today, or will be soon if Apple has anything to say about it.


Apple's not really in the server market, though.


We are far past time for a successor. The x86 family has been an architectural mess since the 8086. It's time for someone to provide an alternative that motivates the market to dump the baggage and start with a clean slate focused on security and efficiency, as opposed to compatibility. Unfortunately, I see nothing on the horizon. Neither ARM nor RISC-V qualify in my view. We need something more radical that has dramatic advantages.


There is a lack of viable non-x86-64 hardware. Most ARM server vendors have shut down before they even released their hardware.


Sadly, POWER is going to get hurt in this Intel vs AMD price war. POWER9 was until recently competitive in price/performance with high-end Xeons, but IBM would need to cut prices and that's something they are not going to do.


> Others were free to compete, but they failed.

Intel engaged in a lot of anti-competitive practices, lost a civil lawsuit over it and was fined €1bn by the EU. [1]

They almost killed AMD in the process.

Every company this size with a quasi-monopoly (in a certain segment) will squeeze the market and try to buy or push out competition any way they can get away with.

[1] https://en.wikipedia.org/wiki/Intel#Litigation_and_regulator...


>There was no instituted monopoly here

You're talking about the Intel that was sued for monopoly practices, right?



Oh this is a tried-and-true strategy that both nVidia and Intel follow again and again. And it works like a charm.

Think of market share as a fragile glass or ceramic cup sitting right at the edge of the table. You're usually oblivious because it hasn't fallen (or you're busy with other stuff). Then you notice it has tipped over and is now in free fall. If you have very good reflexes, as a big-market-share company, you catch it mid-air (you lower the prices to the point where the drop doesn't break anything). If you succeed, now you have the cup in your hand; it didn't hit the floor, and it didn't shatter. Great. Now you slowly raise that cup back to tabletop level (you raise the prices back to the same level over a few years) and place it back on the table. Done.

- AMD revenue is between $5-10b,

- nVidia, between $10-20b

- Intel, over $50b.

Intel and nVidia can eat AMD alive anytime they want. Individually or together. Once, or many times.


> Intel and nVidia can eat AMD alive anytime they want. Individually or together. Once, or many times.

Except they can't.

1) Since I'm assuming you're talking about CPUs, nVidia can't make x86-64 processors. Only AMD and Intel can. Literally. They can't use that ISA. Maybe they can take them in the GPU market, but again, they probably still can't.

2) Intel looks like they can, and they do in fact have the funds to do it. But that's just not how it works. AMD decoupled their fabbing and design long ago; Intel still hasn't. The speed at which Intel can innovate is becoming very limited. So while they might catch up and follow suit, it looks like Intel can't "eat" AMD at this point in time. At least not product-wise. Maybe in 5 or 10 years, if AMD starts slacking off again.


I think what parent means is that Intel can do it to AMD over x86 chips and nVidia can do it over GPUs, where they have a lead in several market segments.


One of several of Intel's issues is that they can't do this to TSMC.


It's amazing that one of the best moves AMD ever made was ditching their own fab and throwing in with TSMC and others instead.

Intel's still going it on their own and struggling.


Isn’t competition great?


It's fabulous! I have customers with slow/large databases, and now I can tell them that there is an incredibly cost-efficient hardware solution available. Sure, clever indexing schemes are often the best, but it certainly doesn't hurt to be able to throw 128 cores at a problem without having to check the organisation's credit rating. Even a single-socket AMD server with the cheaper 48-core CPU simply demolishes workloads that used to cost an eye-watering amount of money to handle.


That’s great!


You certainly can’t tell by the number of people getting hypersensitive in this thread.


Yeah, it's super weird why people have such visceral reactions to this. It's beyond my pay grade. Likely something to do with tribalism, like sports teams, with Intel being one team and AMD being another.


My reaction is glee: so many years of being screwed over by Intel. Also I love rooting for the underdog!


Milan should be released within a month, ahead of the NDA roadmap like Rome was. I hope pricing is cut more on Rome when it's released, because I'm currently building a Supermicro home virtualization server/workstation.


I wonder how many "microservice in the cloud" applications could run on an 8-16 node Raspberry Pi cluster and 1 dedicated database server.


Competition works.


I can’t wait until this happens with last-mile internet access too, and, in a funny twist of circumstance, quality electric cars.

What other long-moated industries are going to crack open in the next half-dozen years?


Those are very different situations. Intel has held a technological advantage and, up until recently, no one could compete with them so they could command their price. With ISPs there is a huge capital investment cost, and with EVs there are supply chain issues limiting capacity and commanding higher costs.

With last-mile internet, the incumbents have made significant capital investments and competitors have so far been unwilling to engage in the sort of long term slow return investment necessary to build out infrastructure. Google Fiber has tried all means to lower the cost of entry and has thus far failed to deliver significant gains and they're unwilling to make the huge capital investments necessary to truly compete because ensuing price wars would make realizing a return near impossible.

With BEVs the problem is manufacturing cost, primarily the cost of batteries needs to drop but the demand for EVs has not warranted dramatic ramp ups in battery manufacturing necessary for that to occur. The supply of suitable batteries is so limited that Jaguar and others have actually had to stop production.


> With last-mile internet, the incumbents have made significant capital investments and competitors have so far been unwilling to engage in the sort of long term slow return investment necessary to build out infrastructure.

I wish it were that simple. Unfortunately, the incumbents in a lot of places have done so much to make it appear like there is competition when there really isn’t, to prevent localities from allowing new entrants.

It’s a huge problem, at least in the US.


Honest question: do you think Teslas are, on the whole, overpriced? I am neither an owner nor an expert in electric cars, but I have been routinely impressed with the rate at which the price has come down. I don't doubt that the $35k Model 3 configuration is probably nonsensical (at $35k the minimum spec seems silly), but it also seems pretty competitive for an electric car that can replace a gas car. My personal hunch has always been that the prices of the relatively cheap Tesla configurations that make sense are probably limited by volume and the initial cost of R&D and scaling up manufacturing. (Of course almost any company does some market segmentation for better or worse, so I'm sure the prices of options are nonsense.)

Not to say the prices won’t go down. But I actually didn’t think the reason they were high was due to mature, moated companies. My personal belief is that Tesla would be foolish to rest on their success in such a way when other companies with a lot more resources are surely going to be competing more competently soon.


No, I don’t think they are super overpriced, I just think the quality isn’t there yet. Hopefully we can see higher quality all-electric options at similar or lower price points.

Tesla’s service reputation is the main reason I still currently drive an ICE. (Before that, it was their laggy touchscreen - my gas guzzler has an iPad on a bracket on the dash, which does not lag.) Also I'm never going to drive something that spies on me; I had the GSM radio transceiver removed from my current vehicle.

AIUI if you do that to a Tesla you never receive future updates, even to features you paid for in advance.

More competition in the EV market is going to be awesome. Right now it’s just a bloodbath where Tesla can basically do whatever they want, because until very recently every other EV was simply hot garbage.


> Also I’m never going to drive something that spies on me; I had the GSM radio transceiver removed from my current vehicle.

I'd like to do the same to my VW Golf. I bought a VCDS tool to go in and fiddle with the settings to essentially put the GSM module into airplane mode (I think it's Verizon telematics). Dunno if this will be sufficient though.

I read that just ripping it out causes a bunch of the infotainment system to misbehave, etc.


You've never owned an EV but you're perpetuating the notion that they're of unspecified poor quality compared to ICE vehicles?

I have a Nissan Leaf and a Tesla Model 3 and both are of comparable quality to ICE vehicles. They are in two different categories in terms of materials finish but neither is bad quality. The fit and finish of both is average to above average in the industry.


I’ve driven several different instances of every model of Tesla and I’ve toured their California factory. I am well acquainted with what Teslas are and are not.


You made a blanket statement about EVs as a whole and didn't qualify it in any way.

So what's poor quality about Teslas? And what's your basis for comparison?


I doubt you could convince me that there is $40k more value in a Model X than a Model 3.

In terms of $ per "car" I'd also say yes, but that comes down to my own personal preferences being categorically outside of the luxury car market.


I also am not in the luxury car market - my personal daily driver is a humble Civic. I’m actually moving in the direction of less dependence on owning a motor vehicle, though, with no real need to drive to work anymore.


Similar with me. Once I moved to an area that supports less dependence on cars, their negative effects have become much more apparent.


The Renault ZOE's most expensive configuration costs $35k. I don't think Tesla is going to win price wars.


If this is really your impression of the industry, you must be really young. It’s not “long-moated”. Intel was in a similar situation when the Opteron was current, and before they had to go head-to-head with Sun and IBM and before that nobody would even think of putting intel in a server at all. The period during which Intel’s products were the obviously superior choice lasted maybe 5-8 years at most. And it wasn’t some nefarious plot, either. Just good product.


The first computer I built myself was an 800MHz Athlon, so I am somewhat familiar with Intel’s history of competition.

Moats don’t have to be malicious or nefarious. Sometimes there are just high barriers to entry to compete.

See tractors with DRM, for instance. Apparently building tractors and a tractor repair/parts network is a very expensive undertaking.


A moat implies barriers against new entrants, but none of the parties in the current market are new. They are all well within the moat.


I'm pretty sure that the concept of a "moat" applies to more than new entrants. For example, this definition [1] refers to competitors, not new entrants.

1: https://www.investopedia.com/terms/e/economicmoat.asp


Residential power: solar will hit a tipping point soon for home installs, and I'd expect the proliferation of microgrids around renewable sources mixed in with batteries.


Like the Tesla battery?


Actually I sort of think the Tesla battery is crazy risky. I'd really love to see suburbs have collocated batteries so I didn't have to have a strip of lithium-ion batteries attached to my wood house.



