Hacker News | mpixel's comments

The thing is, information leaks. You can go and read the Windows source code if you want to; you'd just get busted if you actually used it to build your own Windows or something.

Likewise, source code is shared a lot in my experience: when important closed software has an important customer, that customer gets access when they ask for it. This depends on who you are and what you sell, of course.

So in practice, the difference isn't that huge.

That said, you are right too: THERE IS a difference, and it's better than entirely closed source.

So I will prefer the source-available over completely-closed-source, but I'm not going to be grateful about it.

And then there are the realities: if I'm writing business-critical software that my livelihood depends on, I'll make it closed source in all likelihood. I'm not Stallman.

So I don't blame anyone for caring about their interests, it's entirely fair.


> when important closed software has an important customer, that customer gets access when they ask for it.

End users are rarely that important customer, so for end users, there is a difference.

> So in practice, the difference isn't that huge.

Except that with SAMS, you can still share -- I said as much. And you can actually get the source even if you're not a VIP.

That matters because sometimes, even with FOSS, end users do not get the source if the source is not copyleft enough.


Ultimately, I think we disagree on the essence of the issue; many consider the 'almost enough' option to be worse than 'enough', which is the crux of this issue.

That said, I respect your opinion -- I've seen your comments on the thread and I see where you are coming from.


Fair enough.


The tradeoff is that the app runs faster, looks better, works better -- in quite an indirect way.

Now that the developers of the core part don't need to spend time on compatibility -- or simply don't have to make the baseline choice of being a runtime dependency -- they can spend time on other things instead.

This seems like a net negative at a glance: on the surface it means the apps are less compatible, so the burden is forced onto the older iterations. In practice, since each iteration has to worry about a lot less, the older iterations are _also_ a lot better.

It is no surprise to me that these Apple or Apple-like systems tend to be better overall, as opposed to the opposite philosophy of Android.

It leaks into all the levels. In a Java app, it is common to see a deprecated API that keeps working and keeps being maintained, and someone pays for that. The negative side is that there's no reason to get rid of said dependency, either.
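As a rough sketch of that pattern (TypeScript here rather than Java, with purely hypothetical names): the old entry point is marked deprecated yet keeps working, so callers have no hard reason to migrate and somebody keeps paying to maintain it.

```typescript
export interface User {
  id: number;
  name: string;
}

/** The replacement API that callers are "supposed" to move to. */
export async function fetchUserById(id: number): Promise<User> {
  const res = await fetch(`/api/users/${id}`); // illustrative endpoint
  return (await res.json()) as User;
}

/**
 * @deprecated Use fetchUserById instead.
 * Still works and is still maintained -- which is exactly why it never goes away.
 */
export function getUser(id: number): Promise<User> {
  return fetchUserById(id);
}
```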

My point is that lowering the maintenance cost of _any_ app, or of systems in general, leaves room for improvement in all the other areas, as long as you don't fall behind -- if you are allowed to fall behind, you can afford to; if not, the end result is better given enough time.


> It is no surprise to me that these Apple or Apple-like systems tend to be better overall

This might have been true in the past, but it's been getting worse over the last decade.

For instance: the new parts of macOS that are written in Swift seem to be mostly inferior to the parts they replaced. See the new settings window written in SwiftUI, which UX-wise is a joke compared to the old one, even though the old settings window wasn't all that great either. Case in point: try adding two DNS servers -- searching for 'DNS server' only allows adding one item, then the DNS server panel closes and cannot be opened again without repeating the entire search. No idea how this mess made it through QA.

If Swift is so much better than ObjC, then we should start seeing improvements as users, but that doesn't seem to happen; instead, things are getting worse in new OS versions. Why is that?


> The new parts of macOS that are written in Swift seem to be mostly inferior to the parts they replaced. See the new settings window written in SwiftUI, which UX-wise is a joke compared to the old one

Swift is a programming language. SwiftUI is a UI framework. The programming language doesn’t dictate UX. The new Settings application doesn’t have worse UX because of its programming language.

> If Swift is so much better than ObjC, then we should start seeing improvements as users, but that doesn't seem to happen; instead, things are getting worse in new OS versions. Why is that?

Because Apple are institutionally incapable of writing software at a sustainable pace and things gradually get worse and worse until somebody high up enough at Apple gets fed up and halts development to catch up with all the quality issues. This isn’t anything new to Swift; they took two years off from feature development to release Snow Leopard with “zero new features” because things had gotten too bad, which happened years before Swift. They are just far enough along in the current cycle that these problems are mounting up again.


> they took two years off from feature development to release Snow Leopard with “zero new features” because things had gotten too bad

This is not an accurate characterization of Snow Leopard. See "The myth and reality of Mac OS X Snow Leopard": https://lapcatsoftware.com/articles/2023/11/5.html See also: https://en.wikipedia.org/wiki/Mac_OS_X_Snow_Leopard

There were many significant changes to the underlying technologies of Snow Leopard. Moreover, Snow Leopard was not, despite the common misconception, a "bug fix release". Mac OS X 10.6.0 was vastly buggier than Mac OS X 10.5.8.


I was also (somewhat indirectly) responding to the claim in the parent.

> The tradeoff is that the app runs faster, looks better, works better.

I haven't noticed any of that so far in new macOS versions, and it is indeed not something where the programming language should matter at all.


> Swift is a programming language. SwiftUI is a UI framework. The programming language doesn’t dictate UX. The new Settings application doesn’t have worse UX because of its programming language.

Swift the language strongly informed SwiftUI, which in turn strongly informed the applications written in it. The path of least resistance defines the most likely implementation. If I have to go the extra mile to do something, I probably will not, so worse UX (by some metric) is a direct consequence of that constraint.


There’s not really anything wrong with the SwiftUI API. The implementation is just terrible, especially on macOS.

Jetpack Compose is a similar API on Android, except the implementation is good, so apps using it are good.


And vice versa: features were added to Swift the language to make certain SwiftUI syntax possible (result builders, for example).


I really hope that the new version of the settings app causes enough backlash that Apple starts fixing SwiftUI on the Mac...


The weirdest thing about System Settings is that SwiftUI already supports much more Mac-like idioms. They deliberately chose to use the odd-looking iOS-style switches, bizarre label alignment, and weird unique controls. While also keeping the annoying limitations of the old System Preferences app, such as not being able to resize the window.


> no idea how this mess made it through QA

I assume that Apple does not have traditional QA that tries to break flows that aren't the new hotness. The amount of random UX breakage of boring old OS features is quite large. Or maybe Apple doesn't have a team empowered to fix the bugs?

To be somewhat fair to Apple, at least they try to keep settings unified. Windows is an utter mess.


That settings window... whilst I don't really think SwiftUI has anything to do with it... it's just so awful, lifted right out of iOS, where it is also just awful. As an Android user, I don't understand how people put up with that app.


The issue has nothing to do with programming languages or UI toolkits; it's just that before, there were more people with more attention to detail, or that now the management is so broken that there is no QA and no time to fix things.


A good operating system UI framework should enforce the operating system's UX standards, though, and make it hard to create a UI which doesn't conform to the rules.

But yeah, in the end, software quality needs to be tackled on the organizational level.


Really?

Just at the moment that Swift and, later, SwiftUI get introduced, and entirely coincidentally, management breaks?


It's been a gradual process. Look at the Music app and how it has continuously got worse and buggier over the years, even without Swift and SwiftUI. You can't blame SwiftUI for that.


I see the whole thing a bit more holistically. Swift and SwiftUI are both symptoms of the more general malaise, and then contribute back to it.

We had this in hardware, with machines getting worse and worse and Apple getting more and more arrogant about how perfect they were. In hardware, they had their "Come to Jesus" moment, got rid of Jonathan Ive (who had done great things for Apple, but seemed to be getting high on his own fumes), pragmatically fixed what was wrong and did the ARM transition.

With software, they are still high on their own fumes. The software is getting worse and worse at every level, and they keep telling us and apparently themselves how much better it is getting all the time. Completely delusional.


Ultimately -- the thing is, if anyone is both capable and willing, they can, and sometimes do, fix it.

Granted, this combination is rather rare. Most people aren't capable, and of those who are, most have better things to do and probably have very well-paying jobs they could be focusing on instead.

With that being said, Linux is _still_ more efficient than Windows.

I don't want to say Linux is free; in practice it's not: those who are running the big, powerful machines are using RHEL and paying hefty license fees.

Which are still better than any other alternative.


> I don't want to say Linux is free; in practice it's not: those who are running the big, powerful machines are using RHEL and paying hefty license fees.

Google/Amazon/MS aren't paying for RHEL licenses.

The reason for using RHEL is basically "we don't want to hire experts", which makes a lot of sense in a small or midsized company; in a big one, it's probably mostly the ability to blame someone else if something is fucked up, plus the fact that some software basically says "run it on this distro or it is unsupported".


That's basically it. As a sysadmin, I supported hundreds of Linux servers (mostly virtual) pretty much by myself. We paid the license so that if the shit hit the fan (and we really had no way of fixing it) we could call on Red Hat. At least I could tell my bosses that things were being handled.

This never happened, of course. But it's CYA.


I've used RHEL support too, believe it or not, while working as a DevOps engineer at Red Hat on www.redhat.com. And the support specialist assigned to my issue was very sharp and solved the issue quickly. Easy in hindsight but I was stuck so I reached out, and I was glad I did. It involved a segfault and grabbing a core dump and analyzing it. ++Red Hat Support


A product I worked on got really popular, so scaling up the support department became a real pain, since the devs were running all kinds of different distros and so we had CI and package support for Arch, Debian, Ubuntu, CentOS... except that all of those come in a bunch of different variants, and customers expected the product to run on them. At one point more than 90% of the tickets were dealing with distribution-specific weirdness. There was an epic internal wiki page that must have weighed in at a couple of megabytes of raw text; it was an encyclopedia of run-time linker caveats for hundreds of distributions, complete with troubleshooting flowcharts and links to our internal test farm for one-click access to a VM reproducing the scenario described. I still wish I had it; it would have been so handy for some of the Rust and MIPS porting I'm doing now.

We couldn't containerize it (it's a client for a security monitor), we couldn't ship it as a VM, so eventually we all agreed to ditch everything except RHEL, and everyone else would be unsupported (we grandfathered all the existing ones in, and it took a year and a half before everything worked).


There are well-paying jobs that allow smart people to focus on exactly this.


> capable and willing

This is the sort of research that scientific grants are _supposed_ to be targeting for the public good. Supposed to be.


As a supporting point: right now people have decent GPUs, but as they get a taste of the streaming approach -- on a laptop, for example -- and with GPU prices increasing, they might not want to spend a whole lot of money on one. A small subset of them, at least.

And then the AAA studios will see that the hyper-paced style of game would be a bad experience over streaming and would reduce revenue, making them more likely to prefer the styles that are comfortable with the latency.

This in turn will make the approach itself more viable, not to mention the improvements in the area, and when the most popular games make sure they are playable on these platforms, you won't need to buy a 4XXX card.

People I know are looking into 3XXX and 4XXX cards at the moment when building a PC or buying a pre-built (4XXX for the latter scenario) -- not necessarily a 4090 or anything, really just the ones closer to the entry point -- and honestly they aren't great value.

I don't like this situation, but the writing is kind of on the wall; for laptops I already see streaming becoming common.


And if you tell two experts to build something with W3Schools as the resource versus better documentation, I'm sure the results will be the opposite.

That's not the point though; I'm not going to say 'different strokes for different folks'.

Instead I'd say that if that beginner takes the 'slower' path, it will pay dividends in time -- they are better off learning to use the docs than getting paid peanuts for delivering that junior-dependent project a bit faster.


I disagree with you. "Learning" to use a particular documentation website (which, mind you, changes its layout frequently) is not very time-consuming.

So a beginner should use w3schools while they see fit, and then move on to MDN, etc. when they find it to be unsuitable for their purposes.


That's exactly what the progression was for me. W3S laid the foundations, and then I went on to MDN and other sources once I felt more comfortable.


An expert wouldn't go to W3schools to begin with. They would have far more specific problems that would be better explored on Stack Overflow.

A beginner, on the other hand, would have their SO questions locked or closed within minutes, because they would likely break any one of the thousand little rules SO has (to reduce duplicates) that are unknown to beginners.

Horses for courses; I read the official MDN and PHP docs now, but I didn't when I began because those sites assume an enormous amount of prior knowledge about programming.


This exactly. I still look to W3Schools over MDN if I can. The information is just friendlier. I'll use "proper" documentation if needed, and never post anything to SO for the reasons you state.


Bitwise operators only work on the first 32 bits, so it isn't as easy as "instead of doing the Math.floor operations, check for edge cases and just do the same operation with ~~".
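A quick sketch of where the two diverge (plain TypeScript/JavaScript; the sample values are chosen only for illustration):

```typescript
// ~~x is a double bitwise NOT: it converts x to a signed 32-bit integer,
// which truncates toward zero and wraps around outside the 32-bit range.
// Math.floor works on the full double and rounds toward -Infinity.

const samples = [4.7, -4.7, 2 ** 31 + 0.5, 2 ** 40 + 0.5];

for (const x of samples) {
  console.log(x, "Math.floor ->", Math.floor(x), " ~~ ->", ~~x);
}

// 4.7             : floor  4              ~~  4
// -4.7            : floor -5              ~~ -4             (truncation, not floor)
// 2147483648.5    : floor  2147483648     ~~ -2147483648    (32-bit wraparound)
// 1099511627776.5 : floor  1099511627776  ~~  0             (high bits discarded)
```

So the swap is only safe when the value is known to be non-negative and below 2^31.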


They don't spend 30 million per year; they spend 30 million over what they generate.

I don't think they generate so little as to not even cover their hardware costs.


> they spend 30 million over what they generate

That is my point. I do not understand how their total costs can be over 30M. It is mostly static content that begs for a CDN cache. It is also not mission-critical.


Probably they do a lot of content moderation, which takes human beings at least at some stage.


It's almost as if a site for tech has a bias toward tech commonly enjoyed by the readers of said site.

I'm shocked, truly.


> I'd wish it not be repeated, because we both know you don't really believe it.

I'm reasonably confident most people here understand the semantics behind the meme, rather than taking it at face value as if it were a statement in a program.


Don't take it too far; there are people advocating for proto-democracy in this thread who aren't being treated like an idiot the way I am.


I agree with all your points and also agree that honestly, these scenarios aren't far off from real world tasks.

I get the main issue, which is that you could adjust the workload by 10% and see a 50% performance loss if you do it right at the point where the working set crosses the cache threshold, and whatnot.

However, I see CPUs as unique in that I rank them _for_ these scenarios. A particular CPU might be ranked unfairly, but as long as the test is equal, the better one is in fact better -- just not by the 50% the test might show; it's still going to be 5% better. I expect my GPU to be idle when it isn't training AI or rendering frames, but the CPU is general-purpose in real life, and anything goes.
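For what it's worth, the cache-threshold effect is easy to see with a crude sketch like this (TypeScript for Node; the sizes, stride, and access count are arbitrary choices for illustration, not a real benchmark suite):

```typescript
// Walk working sets of increasing size with a fixed number of accesses.
// The per-access work is identical, but throughput typically drops once the
// working set no longer fits in the CPU caches.

function walk(data: Float64Array, accesses: number): number {
  let sum = 0;
  let idx = 0;
  for (let i = 0; i < accesses; i++) {
    sum += data[idx];
    idx = (idx + 4097) % data.length; // large odd stride to defeat simple streaming
  }
  return sum;
}

const ACCESSES = 50_000_000;

for (const mib of [1, 4, 16, 64, 256]) {
  const data = new Float64Array((mib * 1024 * 1024) / 8); // 8 bytes per element
  walk(data, 1_000_000); // warm-up
  const start = performance.now();
  walk(data, ACCESSES);
  const ms = performance.now() - start;
  console.log(`${mib} MiB working set: ${ms.toFixed(0)} ms for ${ACCESSES} accesses`);
}
```

On a typical desktop CPU the small working sets stay fast and the large ones slow down far more than the instruction count alone would predict, which is the 'small change in workload, big change in score' effect.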


> just not by the 50% the test might show; it's still going to be 5% better.

Sort of, indeed. Yet when you see any promotional/marketing material, you see all those phallic bar graphs and how much bigger one is than the other. Other than that, heavy cache utilization hides an inferior memory subsystem (latency/throughput), and the latter tends to be quite important in the real world. Overall, benchmarks/tests that feature a handful of MB as datasets and run in hundreds of ms should not be treated as representative... for most use cases.

That was my initial point - 'don't trust'.

