This annoys me as well. Why would the app decide to log me out when it can't connect to the backend? I have playlists downloaded for this exact reason...
Same, it logged me out and I can't log back in. I have dozens of hours of music downloaded on my phone for this exact reason... And I bought Spotify Premium for this exact reason, to listen to music offline. What a shame.
That comes to 23. I know “a couple” is sometimes used to mean more than two, but… not that much more than two.
“A couple” is just flat-out wrong; I'd guess that he's misinterpreting ancient figures, taking numbers from no later than about 2013 on how many web servers (ignoring other types, which are presently more than half) they needed to cope with the load, and ignoring the many more servers they have for headroom, redundancy and future-readiness.
One interesting aspect is that the number of servers is much higher than what would actually be needed to run the site; most servers run at something like 10% CPU or lower. Most of the duplication is for redundancy. As far as I remember, they could run SO and the entire network on two web servers and one DB server (and, I assume, one each of the other roles as well).
If someone says SO runs on a couple of servers, this might be about the number actually necessary to run it under full traffic, not the number of servers they use in production. That's a more useful comparison if the question is only about performance, but not so useful if you're comparing what it takes to operate the entire thing.
IIRC, without emergency redeploying, they might have issues running on fewer than 4: not sure if the tag server can coexist with a web server anymore, for example; Redis is still a dependency, as is HAProxy, plus separate SQL and IIS boxes, etc.
Then there are support services (IIRC, all of Elasticsearch was non-functional-requirements stuff and technically they could run without it?) and HA.
That is still doable with mid-90s-era hand management of servers (all named after characters in Lord of the Rings).
Not that you should, but you could.
And the growth rate must be very low, making it pretty easy to plan out your OS upgrade and hardware upgrade tempo.
And it was actually possible to manage tens of thousands of servers before containers. The only thing you really need is what they now call a "cattle not pets" mentality.
What you lose is the flexibility of programmatically shoving software around to other bits of hardware to scale or fail over, and you'll need to overprovision some, but even if half of SO's infrastructure is "wasted", that isn't a lot of money.
And if they're running that hardware lean in racks in a datacenter that they lease and they're not writing large checks to VMware/EMC/NetApp for anything, then they'd probably spend 10x the money microservicing everything and shoving it all into someone's kubernetes cloud.
In most places though this will fail due to resume-driven design and you'll wind up with a lot of sprawl because managers don't say no to overengineering. So at SO there must be at least one person in management with a cheap vision of how to engineer software and hardware. Once they leave or that culture changes the footprint will eventually start to explode.
Most of that is extra unused capacity. They've shared their load graphs and past anecdotes where it's clear the entire site runs very lean.
Also, 23 is very much "a couple" for a company and application of that size. It's not uncommon to see several hundred or even thousands of nodes deployed by similar sites.
> "They aren't magically more efficient than other sites"
It's certainly not magic but good architecture decisions and solid engineering. This includes choosing SQL Server over other databases (especially when they started), using ASP.NET server-side as a monolithic app with a focus on fast rendering, and yes, scaling vertically on their own colo hardware. The overall footprint for the scale they serve is very small.
It's the sum of all these factors together, and it absolutely makes them more efficient than many other sites.
Exactly. That Twitter thread is just pure rage based on no data. Sum up the resources from that page and we are talking around 6500GB* of RAM worth of servers. That is no homelab.
* Maybe a bit more or less, because it's not clear to me whether the DB RAM figure is per server or per cluster. Likely per server, as with the other servers. There is also no data on how big their HAProxy boxes are.
No one needs k8s. Bringing up their infrastructure in a k8s troubleshooting how-to was a weird thing to do in the first place. It's comparing apples and chandeliers; it makes no sense.
They have a typical vertically scaled infrastructure, most services have just two nodes, one active. The biggest ones are databases which in many companies are handled in "the classic way" anyway. Clearly it's not designed as microservices and doesn't need dynamic automation at all. Why on earth would they even bring k8s up in their plans?
Nevertheless, it is true that Stack Overflow has focused on backend performance and scaled vertically a long way, further than is fashionable. Just not so far as only using two servers for everything.
I'm curious. I saw a similar comment earlier: surely the Windows licensing is just a drop in the bucket compared to the rest of the infrastructure costs?
I've not really looked at hosting anything on Windows before; do they have unusual licensing terms such that it would be a significant cost?
But adding a new server means having to buy new licenses, which is a consideration you don't have with freely licensed OSes. It costs extra money, and it used to be priced per socket back when their infrastructure was conceived.
So what? Licenses are not expensive, especially compared to all of their other costs like the dozens of staff, and paying an invoice isn't complicated. They maintain their own hardware in colocation facilities so they'll get a new license way before they even get the hardware shipped out.
Why does this make scaling out "never an easy option for them"?
It is impressive, but it's not a raspberry pi kind of setup.
Just two of that "couple" are the hot and standby DB servers with 1.5TB of RAM. That infrastructure is scaled A LOT vertically.
Which makes no sense if the developers are the users...
But in the spirit of "there are only two business models - bundling and unbundling", I guess there are only two marketing tricks: use an old term for a new thing, and introduce a new term for an old thing...
Or more charitably, while the UX/DX distinction isn't very meaningful for this project, there are lots of products (e.g., payments) that target both (non-developer) end users and developers. It helps to be able to separate the personas.
I don't think this holds. As the rule goes, there are only three cardinal numbers: zero, one, many. What are the chances your product has only two meaningful personas, "users" and "developers"?
Take that payment system. People who buy things are obviously users, people who write CMS plugins for that system are obviously developers. But what about, say, analysts studying reports from that system? Accountants making sure the money flows where it should? Sysadmins keeping the backend components running? These are all users too.
Where UX makes sense as a broad concept, giving a separate acronym to one small subset of potential users... doesn't make sense.
>random three-letter combination that is pronounceable, and not actually used by any common UNIX command.
>actually
Many native French speakers use 'actually' when they mean 'currently' because of the 'actuellement' false-cognate. This looks like the same mistake but neither Swedish nor Finnish have a word that looks like 'actually' when I machine-translate 'currently'.
I know nothing of Finnish, but, in poking around on Google translate, I found 'nykyinen', commonly translated as 'currently', but sometimes as 'existing'. To rephrase the sentence to say "there is no existing use..." would be a little awkward in English, but would convey the same message.
I felt that in this particular sentence, neither 'actually' nor 'currently' are necessary, but to be sure I wanted to check the context, only to find that this sentence is not currently to be found in the article.
Finn here. I don't think the use of "actually" comes from any Finnish expression specifically but it might be some sort of literary habit that stems from the desire to emphasize how things turned out to be. It's somewhat common in Finnish to say how things turned out, rather than that someone (or you) made it so.
Thinking about it, I might've used the word in a similarly redundant fashion myself occasionally.
I think you’re overthinking it. The first definition in the Oxford Dictionary for “actually” equates it with “really”, a substitution which works fine here.
1. as the truth or facts of a situation; really.
"we must pay attention to what young people are actually doing"
That's definitely a possibility I didn't give proper consideration. On the other hand, if I wrote that sentence with that intention, I would strip 'actually' out for being unnecessary.
Overthinking? Well I can hardly characterise this tangent as important.
On the other hand, they didn't need to create the company, the product and the distribution network from scratch in a competitive environment. They had R&D and testing facilities, an engineering workforce, factories and supply chains in place.
They are just excusing themselves out of this blunder. They sat on their asses changing the grill shapes and let an upstart undercut them.
It's easy to undercut another company if you're willing to make a loss.
If Tesla reaches profitability you can bet there will be a lot of other EVs on the road within two years from that point precisely for the reasons you listed in your first paragraph.
There have been some dry spells for Porsche as well, when it was losing on the order of billions within a year. This does not reflect on the maturity of internal combustion engine technology or the viability of Porsche's luxury vehicle business.
That statement connected two non-contradicting issues. Porsche could make that premium electric car, and it could stay in the black.
Yes, they probably could. But the real question is whether or not they'd make a profit on that premium electric car. In other words, if they cross-subsidized from other income then quite probably Porsche as a whole would still be in the black, but that doesn't mean they're turning a profit on the EVs; you'd have to break it out in order to establish that.
Based on the quote from the article my guess would be that they would not be making a profit on the EVs.
Tesla is as much a manufacturing upstart as it is a charger network upstart. Both parts can grow side by side. Shoehorning a charger network that is starting from zero into an existing, massive manufacturing and development organization that is perfectly tuned to the well trodden path would face a whole class of difficulties that simply don't exist in the all-new company.
> Shoehorning a charger network that is starting from zero into an existing, massive manufacturing and development organization that is perfectly tuned to the well trodden path would face a whole class of difficulties that simply don't exist in the all-new company.
But Mercedes would be ideal for that.
They own shares in most taxi companies in Europe, so they can start by electrifying them.
And the taxi companies HQ could each get a bunch of superchargers.
Which would immediately create the densest supercharger network in the world. And it would be profitable from day one.
>They sat on their asses changing the grill shapes and let an upstart undercut them
Or...maybe no one has figured out how to produce an economically viable electric car yet, Tesla included?
I mean, if already having the company, product and distribution is such a huge advantage, then the competitors have massive leverage, no? Or do you think Tesla has "already won"?
They did have massive leverage; the whole point of my post was that they did not manage to use it and are blame-shifting.
Tesla loses money due to continuing massive capital investment into production facilities; they still sell each car for more than what it costs to produce.
It is a rapid-growth problem, certainly one of those things that Porsche executives are not familiar with.
But isn't that always the game with ground-breaking technologies, though? At first you don't make as much money, but you're laying the road for the big thing. And when the revolution begins, you're the one making the big bucks.
Not always, at all. Just as often, the forerunner disappears when other companies jump in at a later, proven, profitable stage of the technology, when both the technology and the market are ready for mass production.
This. Ever heard of Myspace? AltaVista? When making a market, watch out for well-equipped and well-funded upstarts coming to get you. Next victim of their own success: Docker.
I think in those cases, it was a matter of a better-designed, clearer-headed product taking space from a not-so-well-designed product. That's not what would happen here, because Tesla is already the creme de la creme. In fact, Tesla executing so well is the only reason EVs have seen their recent "resurgence" in the first place. And remember that Tesla isn't the first company to make electric cars.
I also think people underestimate by a lot how hard making new things is. It's not just a matter of coming in with more money. It's almost deceptive on Tesla's part that the cars seem simple. But if it really were so simple the Model S would already have competition. And when you're talking about someone who has raised a company that lands rockets in the middle of the ocean, competitors are fooling themselves if they think even just a superficial copy is going to be easy. This is something they have to bring their A game to at all levels.
Look back at 100 years of automobile manufacturing. The big companies we have today are the survivors. As recently as this century, Rover Group, a company that was around for a century and made a great product, ceased to be.
The tax perks in many countries that have buoyed up Tesla sales are ending. So perhaps manufacturers don't see now as a good time to enter a market containing risk. Better to wait it out and plan accordingly.
They had some excellent ones too. The Rover 600 and 75 were good cars. Many major manufacturers have had life-threatening failures in recent years resulting in recalls. Some surviving brands were much, much worse than Rover Group at quality. Lancia still exists, Alfa Romeo exists. Fiat in the 1980s? No thanks, I'm not paying for rust.
The Italian cars did rust at the slightest hint of damp weather but you soon forgot about that when you got behind the wheel. They had flair, style and driver engagement in abundance.
When I look back at the Rovers my dad drove they were as dull as dishwater and very poor quality.
They own Here maps, have laser-mapped all of Europe, work on self-driving cars, own several European Uber competitors, and have stakes in many smaller taxi companies.
It transliterates C to Rust all right, but the Rust isn't any safer than the C that goes in. Note the representation of a null-terminated string: it's an unsafe pointer to a byte. That's what it was in C, transliterated unsafely to Rust. Some safe Rust representation for C arrays is needed.
From the description of how it translates a for loop, it does so by compiling it down to the primitive operations and tests. A Rust for loop does not emerge. That needs idiom recognition for the common cases, including, at least, "for (i=0; i<n; i++) {...}".
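To make the point concrete, here is a hedged sketch (my own illustration, not Corrode's actual output) of how such a loop desugars into primitive operations versus what idiom recognition would have to produce:

```rust
// Hypothetical illustration, not actual Corrode output.
// A C loop like `for (i = 0; i < n; i++) { sum += i; }`
// transliterated into primitive operations might come out as:
fn sum_transliterated(n: i32) -> i32 {
    let mut sum: i32 = 0;
    let mut i: i32 = 0;
    loop {
        if !(i < n) {
            break;
        }
        sum += i;
        i += 1;
    }
    sum
}

// whereas idiom recognition would be needed to emit something like:
fn sum_idiomatic(n: i32) -> i32 {
    let mut sum = 0;
    for i in 0..n {
        sum += i;
    }
    sum
}

fn main() {
    assert_eq!(sum_transliterated(5), sum_idiomatic(5));
    println!("{}", sum_idiomatic(5));
}
```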
This is a big job, but it's good someone started on it.
A Rust module that exactly captures the semantics of a C source file is a Rust module that doesn't look very much like Rust. ;-) I would like to build a companion tool which rewrites parts of a valid Rust program in ways that have the same result but make use of Rust idioms. I think it should be separate from this tool because I expect it to be useful for other folks, not just users of Corrode. I propose to call that program "idiomatic", and I think it should be written in Rust using the Rust AST from syntex_syntax.
Not to look a gift horse in the mouth, but it seems like Corrode misses some other chances to use idiomatic Rust:
1. Rust's fn main() doesn't need to return anything.
2. The arguments to main aren't mutated, so Rust doesn't need to declare them as mutable.
3. Ditto for the argument to printf.
Anyone know how easy it is to recognize and code for such cases in the transpiler?
Edit: It looks like they might have opposite design goals [1]: "Corrode aims to produce Rust source code which behaves exactly the same way that the original C source behaved, if the input is free of undefined and implementation-defined behavior. ... If a programmer went to the trouble to put something in, I want it in the translated output; if it's not necessary, we can let the Rust compiler warn about it." (Edit2: cleaned up and numbered)
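For concreteness, a sketch of what points 1-3 would look like once applied; the "before" signature in the comments is my own guess at a literal transliteration, not Corrode's actual output:

```rust
// My own illustration of points 1-3 above, not actual Corrode output.
// A literal transliteration might keep a C-style entry point, roughly:
//     fn main_0(mut argc: i32, mut argv: *mut *mut u8) -> i32 { ... }
// The idiomatic version drops the return value and the `mut`s and uses
// the standard entry point, with std::env::args() instead of argc/argv:
fn main() {
    for arg in std::env::args() {
        println!("{}", arg); // println! standing in for printf
    }
}
```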
I think that keeping an exact one-to-one mapping makes this tool a lot more useful. There's no telling what code depends on C idioms that would be broken by using a Rust idiom instead. Generating 100% equivalent code means that programmers can make intelligent decisions about when to switch over to Rust idioms as they continue developing the program.
Yeah, once you've got equivalent Rust, the rest is just optimization that should probably be implemented in the Rust compiler. No reason to put that stuff in the niche transpiler.
> Anyone know how easy it is to recognize and code for such cases in the transpiler? Edit: It looks like they might have opposite design goals
Yes, the author has explicitly noted that they want the compiler to be as syntax-directed as possible; semantic changes would go against that grain. In that spirit, idiomatic alterations would be the domain of Rust-land fixers and linters (e.g. `cargo wololo` or `cargo clippy | rustfix`).
So you could chain Corrode with one of those to get a C-to-idiomatic-Rust converter?
FWIW, I googled those; Clippy and rustfix just seemed to be linters that can't detect things like "you're not mutating this so drop `mut`", and I couldn't find wololo.
No, most real-world C code will expect a C `int` to be 32 bits, while `isize` is often 64 bits.
On the other hand, at least for Unix systems `long` is often equivalent to Rust's `isize`: 32 bits for 32-bit architectures, and 64 bits for 64-bit architectures, so it would make sense to convert `long` to `isize`.
They're different types. isize is ssize_t (well, intptr_t), in that it is tied to the size of the address space, while C's int is not constrained. In fact, it is usually 32 bits, even on 64-bit architectures, where isize is 64 bits.
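A quick way to see the distinction (a small sketch; the printed sizes assume a typical 64-bit Linux target):

```rust
use std::mem::size_of;
use std::os::raw::{c_int, c_long};

fn main() {
    // On a typical 64-bit Linux target this prints 4, 8, 8:
    // C's int stays 32 bits, while long and isize track the pointer size.
    println!("c_int:  {} bytes", size_of::<c_int>());
    println!("c_long: {} bytes", size_of::<c_long>());
    println!("isize:  {} bytes", size_of::<isize>());
}
```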
Wow. So I did some sleuthing, and apparently in Rust the maximum size of an object must fit in isize, not usize. That means on 32-bit architectures you can't have arrays larger than 2GB, whereas on Linux and similar systems 32-bit processes have access to 3GB or even the full 4GB of address space. It actually matters for things like mmap'ing files.
Technically, C's int is constrained. C defines a minimum range of values for all the datatypes. The minimum range for int is -32767 to +32767; for long it's -2147483647 to +2147483647. Though the discerning pedant will claim, ex post, to target something like POSIX (which increases the bound on int, defines char as 8 bits, etc.) if you point out improper use of int.
One irony of criticisms against C is that people argue it's too low level, but that's often because people treat it as too low-level. For example, novice C programmers think of C integer types in terms of bit representations and infer value ranges. Good C programmers think of C integer types in terms of representable values, understand that bit representation (specifically, hardware representation) is almost always irrelevant, and understand how to leverage the unspecified upper bounds on value ranges to improve the longevity and portability of their software.
Languages which emphasize fixed-width integers are, in some sense, a retrogression. The real problem with C integer types is that you won't see the folly in poor assumptions until it's too late. Languages like Ada addressed this with explicit ranges. But I guess that was too burdensome. Fixed-width integers are an appeasement of lazy programming. I admit to being lazy and using fixed-width integers in C more than I should, but at least I feel dirty about it.
Many of the compromises Rust makes are clearly informed by the _particular_ experiences of the core team. For example, the fact that most Rust developers are of the belief that malloc failure is not recoverable (a big hold-up in adding catch_unwind) is a reflection of their experience with large desktop software. Desktop software has very complex, interdependent, and less fine-grained transaction-oriented state. Recovering from malloc failure is very hard and of little benefit. Most server software, by contrast, has more natural and consistent transactional characteristics. Logical tasks have less interdependent state, so it's both easier and more beneficial to be able to recover from malloc failure.
I think some of the choices w.r.t. integer types are similarly informed.
> the fact that most Rust developers are of the belief that malloc failure is not recoverable
This is untrue. The true statement is similar, but has different implications -- malloc failure is usually not recoverable, and nonrecoverable malloc failure should be the default, for the problem space Rust targets (which encompasses more than low-level things). You can recover from malloc in Rust, it just requires some extra work.
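For what that "extra work" can look like, here is a minimal sketch using the fallible-allocation API `Vec::try_reserve`, which landed in the standard library later (stabilized in Rust 1.57) and was not what the commenters had available; it surfaces allocation failure as a `Result` instead of an abort:

```rust
use std::collections::TryReserveError;

// A minimal sketch of recovering from allocation failure with the
// fallible-allocation API (Vec::try_reserve, Rust 1.57+).
fn read_into_buffer(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf: Vec<u8> = Vec::new();
    // try_reserve returns Err instead of aborting when the allocator
    // cannot satisfy the request, so the caller can back off.
    buf.try_reserve(len)?;
    buf.resize(len, 0);
    Ok(buf)
}

fn main() {
    match read_into_buffer(1024) {
        Ok(buf) => println!("allocated {} bytes", buf.len()),
        Err(e) => eprintln!("allocation failed, degrading gracefully: {}", e),
    }
}
```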
> Because the project is still in its early phases, it is not yet possible to translate most real C programs or libraries.
It is currently trying to port over semantics exactly, so the Rust code is far from idiomatic Rust. Doesn't mean it's not useful, just saying that it's trying to be 1:1.
I guess the next stage would involve translating common non-idiomatic patterns into idiomatic Rust. Looks like this could be a job for a community-managed database!
On the rust subreddit someone tongue-in-cheek suggested `cargo clippy | rustfix` to be used in conjunction with this tool for better rust code.
But that actually could work! Clippy has a ton of lints that make your code more idiomatic, and rustfix basically takes diagnostic output and applies suggestions (still WIP).
Clippy is geared towards making human-written unidiomatic code better, so it might not catch some silly things in this tool's output, but it certainly could be extended to do that.
It tells you about places where you can improve your code. Possible pitfalls, style issues, documentation issues, unidiomatic code, everything.
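For a flavor of the kind of pattern it flags, here is a small example that trips the `needless_range_loop` lint (the exact output varies by clippy version):

```rust
// An example of the kind of pattern clippy flags (needless_range_loop).
fn sum_squares(values: &[i64]) -> i64 {
    let mut total = 0;
    // clippy suggests iterating over the slice directly
    // instead of indexing with a range:
    //     for &v in values { total += v * v; }
    for i in 0..values.len() {
        total += values[i] * values[i];
    }
    total
}

fn main() {
    println!("{}", sum_squares(&[1, 2, 3]));
}
```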
It's a developer tool, so you can use rustup to switch to nightly to run clippy (and use stable otherwise) without imposing nightly on the rest of the people who use the project. We have plans for making clippy a tool that you can fetch via rustup without requiring nightly.
This is best handled on a per-project or per-organization basis. I would have such a project concentrate on the tooling for maintaining and developing such databases.
Thanks for pointing it out.
When I reached the homepage late last night, honestly I wasn't going to give it much thought.
The code snippet seemed uninteresting and lacking anything original, the pitch wasn't really selling much, and to top it off there were some text blurbs that seemed unfinished.
Then I saw your comment and read the tutorial. It was actually a pretty interesting read, even without following along in a console.
I find it assumes a bit too much, so I don't think a total beginner would like it, but for someone used to toying around with new languages it's very well presented, I think.