I know we're so defeated as consumers that we can hardly imagine it, but you could just... charge customers for access to the social media network. Kinda like every other service that charges money.
It would have the side effect of making the whole business less ghoulish and manipulative, since the operators wouldn't be incentivized to maximize eyeball hours.
It's impossible to imagine this because government regulation is so completely corrupted that a decades-long anticompetitive dumping scheme is allowed to occur without the slightest pushback.
Unlike most businesses, social media relies on high market saturation to provide value, so a subscription model doesn’t work very well.
Of course, perhaps it’s a bit different now, since most people consume content from a small set of sources, making social media largely the same as traditional media. But then traditional media also has trouble supporting itself with subscriptions.
Seems like Mastodon is just the KitchenAid of socials. Anyone can have their product(s), but not everyone can use them the same way. Those who use them better stand out from the rest, to the point that others might just stop using them and the product ends up taking up space.
1. Amazon Blink is an interesting hardware platform. With a power-optimized SoC, they achieve several years of intermittent 1080p video on a single AA battery. A similar approach and price point for body cams / dash cams would free users from having to constantly charge.
2. If you're designing cameras to protect human rights, you'll have to carefully consider the storage backend. Users must not lose access to a local copy of their own video because a central video service will be a choke point for censorship where critical evidence can disappear.
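To make that concrete, here's a minimal sketch of the local-first storage idea (Python; the directory layout and the uploader are hypothetical): every clip is committed to local storage before any sync attempt, so a failed, blocked, or censored upload never costs the user their only copy.

    import os
    import time

    PENDING_DIR = "footage/pending"  # hypothetical local layout
    SYNCED_DIR = "footage/synced"

    def save_clip(clip_bytes):
        """Always write the clip locally first; local disk is the source of truth."""
        os.makedirs(PENDING_DIR, exist_ok=True)
        path = os.path.join(PENDING_DIR, "clip-%d.mp4" % int(time.time()))
        with open(path, "wb") as f:
            f.write(clip_bytes)
        return path

    def try_sync(path, upload):
        """Best-effort upload; on failure the clip simply stays in pending/."""
        try:
            upload(path)  # hypothetical uploader, e.g. an HTTP PUT to your server
        except OSError:
            return False
        os.makedirs(SYNCED_DIR, exist_ok=True)
        os.replace(path, os.path.join(SYNCED_DIR, os.path.basename(path)))
        return True

The central service then only ever holds a replica; losing access to it (or having it taken down) leaves the evidence intact on the device.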
Uh, it is specifically and exactly what it proved. It changed the entire planet's awareness of needs and has since changed the direction of humanity.
Many don’t need to commute, purchase lunch, shop downtown, buy commuter consumables, etc. We do so because we have in the past. And I say this as a lifelong lover of classic pre-COVID city life.
Now we've discovered most people are content at home with some food, entertainment, and a few hobbies. The rest of the consumption is a mix of boredom and ritual.
And more than half the population would be out of a job without that.
> “Besides word processors, Microsoft also has security solutions, cables, servers in data centers, access control, SharePoint, and AI across all of this,” De Jong explains. “So simply replacing Microsoft isn't an option.”
This seems much less like a "monopoly" sort of situation and more of a "you explicitly chose to put all of your eggs in one basket" kind of deal.
And it shows how silly the idea is. gcc still sees plenty of forks from vendors who don't upstream, and llvm sees a lot more commercial participation. Unfortunately the Linux kernel equivalent doesn't exist.
It's also nakedly hypocritical behaviour on Stallman's part. Hoping (whether in vain or not) that GCC being Too Big to Fork ( https://news.ycombinator.com/item?id=6810259 ) will keep people from having access to the AST interface really isn't substantially different from saying "why do you need source code, can't you just disassemble the binary hahaha".
I wouldn't call Linux's stance silly. A working OS requires drivers for the hardware it will run on and having all the drivers in the kernel is a big reason we are able to use Linux everywhere we can today. Just like if they had used a more permissive license, we wouldn't have the Linux we do today. Compare the hardware supported by Linux vs the BSDs to see why these things are important.
Linux's position is more like "your out-of-tree code is not our problem". Linus didn't go out of his way to make out-of-tree modules more difficult to write.
LLVM wasn't the first modularization of codegen, see Amsterdam Compiler Kit for prior art, among others.
GCC's approach is deliberate. Plus, even if they wanted to change, who would take on the effort of making the existing C, C++, Objective-C, Objective-C++, Fortran, Modula-2, Algol 68, Ada, D, and Go frontends adopt the new architecture?
Even clang, with all the LLVM modularization, is going to take a couple of years to move from plain LLVM IR to an MLIR dialect for C-based languages: https://github.com/llvm/clangir
Somewhat. Stallman claims to have tried to make it modular,[0] but also that he wants to avoid "misuse of [the] front ends".[1]
The idea is that you should link the front and back ends, to prevent out-of-process GPL runarounds. But because of that, the mingling of the front and back ends ended up winning out over attempts to stay modular.
>> The idea is that you should link the front and back ends, to prevent out-of-process GPL runarounds.
Valid points, but also the reason people wanting to create a more modular compiler created LLVM under a different license - the ultimate GPL runaround. OTOH now we have two big and useful compilers!
When gcc was built, most compilers were proprietary. Stallman wanted a free compiler and to keep it free. The GPL is more restrictive, but its philosophy is clear. At the end of the day, the code's writer can choose if and how people are allowed to use it. You don't have to use it; you can use something else or build your own. And maybe, just maybe, Linux is thriving while Windows is dying because in the Linux ecosystem everybody works together and shares, while in Windows everybody helps pay for Satya Nadella's next yacht.
> At the end of the day the code's writer can choose if and how people are allowed to use it.
If it's free software then I can modify and use it as I please. What's limited is redistributing the modified code (and, for the AGPL, offering a service to users over a network).
That sounds like Stallman wants proprietary OSS ;)
If you're going to make it hard for anyone anywhere to integrate with your open source tooling for fear of commercial projects abusing them and not ever sharing their changes, why even use the GPL license?
Good lord, Stallman is such a zealot and hypocrite. It's not open vs. closed, it's mine vs. yours, and he's openly declaring that he's nerfing software in order to prevent people from using it in a way he doesn't like. And he refuses to talk about it in public because normal people hate that shit and would be "misunderstanding" him.
--- From the post:
I let this drop back in March -- please forgive me.
> Maybe that's the issue for GCC, but for Emacs the issue is to get detailed
> info out of GCC, which is a different problem. My understanding is that
> you're opposed to GCC providing this useful info because that info would
> need to be complete enough to be usable as input to a proprietary
> compiler backend.
My hope is that we can work out a kind of "detailed output" that is
enough for what Emacs wants, but not enough for misuse of GCC front ends.
I don't want to discuss the details on the list, because I think that
would mean 50 messages of misunderstanding and tangents for each
message that makes progress. Instead, is there anyone here who would
like to work on this in detail?
He should just re-license GCC to close whatever perceived loophole, instead of actively making GCC more difficult to work with (for everyone!). RMS has done so much good, but he's so far from an ideal figure.
Not anymore. Modularization is somewhat tangential, but for a while Stallman did actively oppose rearchitecting GCC to better support non-free plugins and front-ends. But Stallman lost that battle years ago. AFAIU, the current state of GCC is the result of intentional technical choices (certain kinds of decoupling aren't as beneficial as people might think; Rust has often been stymied by missing features in LLVM, i.e. de facto semantic coupling), works in progress (decoupling ongoing), or a lack of time or wherewithal to commit to certain major changes (decoupling too onerous).
Personally, I think when you are making bad technical decisions in service of legal goals (making it harder to circumvent the GPL), that's a sure sign that you made a wrong turn somewhere.
Some in the Free Software community do not believe that making it harder to collaborate will reduce the amount of software created. For them, you are going to get the software either way; the choice is just whether it is “free” or not. And they imagine that permissively licensed code bases get “taken”, and so copyleft licenses result in more code for “the community”.
I happen to believe that barriers to collaboration results in less software for everybody. I look at Clang and GCC and come away thinking that Clang is the better model because it results in more innovation and more software that I can enjoy. Others wonder why I am so naive and say that collaborating on Clang is only for corporate shills and apologists.
You can have whatever opinion you want. I do not care about the politics. I just want more Open Source software. I mean, so do the other guys, I imagine, but they don’t always seem to fact-check their theories. We disagree about which model results in more software I can use.
I am not as much on the bandwagon for “there is no lack of supply for software”.
I think more software is good and the more software there is, the more good software there will be. At least, big picture.
I am ok with there being a lot of bad software I do not use just like I am ok with companies building products with Open Source. I just want more software I can use. And, if I create Open Source myself, I just want it to get used.
This argument has been had thousands of times across thousands of forums and mailing lists in the preceding decades and we're unlikely to settle it here on the N + 1th iteration, but the short version of my own argument is that the entire point of Free Software is to allow end users to modify the software in the ways it serves them best. That's how it got started in the first place (see the origin story about Stallman and the Printer).
Stallman's insistence that gcc needed to be deliberately made worse to keep evil things from happening ran completely counter to his own supposed raison d'etre. Which you could maybe defend if it had actually worked, but it didn't: it just made everyone pack up and leave for LLVM instead, which easily could've been predicted and reduced gcc's leverage over the software ecosystem. So it was user-hostile, anti-freedom behavior for no benefit.
> the entire point of Free Software is to allow end users to modify the software in the ways it serves them best
Yes?
> completely counter to his own supposed raison d'etre
I can't follow your argument. You said yourself, that his point is the freedom of the *end user*, not the compiler vendor. He has no leverage on the random middle man between him and the end user other than adjusting his release conditions (aka. license).
I'm speaking here as an end user of gcc, who might want e.g. to make a nice code formatting plugin which has to parse the AST to work properly. For a long time, Stallman's demand was that gcc's codebase be as difficult, impenetrable, and non-modular as possible, to prevent companies from bolting a closed-source frontend to the backend, and he specifically opposed exporting the AST, which makes a whole bunch of useful programming tools difficult or impossible.
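For contrast, this is roughly what such tooling looks like on the clang side, where the AST is deliberately exported. A minimal sketch using libclang's Python bindings (assumes the bindings are installed, e.g. the "libclang" PyPI package; "example.c" is just a placeholder file):

    from clang import cindex  # libclang Python bindings

    # Parse a C file and walk its AST -- exactly the kind of access a
    # formatter or analysis tool needs, and what gcc long refused to expose.
    index = cindex.Index.create()
    tu = index.parse("example.c")

    for node in tu.cursor.walk_preorder():
        if node.location.file and node.location.file.name == "example.c":
            print(node.kind.name, node.spelling, node.location.line)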
Whatever his motivations were, I don't see a practical difference between "making the code deliberately bad to prevent a user from modifying it" and something like Tivoization enforced by code signing. Either way, I as a gcc user can't modify the code if I find it unfit for purpose.
I have no idea what you think "gcc's leverage" would be if it were a useless GPL'd core whose only actively updated front and back ends are proprietary. Turning gcc into Android would be no victory for software freedom.
Yes, the law made a wrong turn when it comes to people controlling the software on the devices they own. Free Software is an ingenious hack which often needs patching to deal with specific cases.
Over the years, several frontends for languages that started out-of-tree have been integrated. So working both in-tree and outside the tree is definitely possible.
> But that will usually be a smaller part of the code base.
Testing how IO composes makes up most of what you want to test because it's such a difficult problem. Reasoning about this in terms of size of the codebase doesn't make sense.
Thing is though, most high-level languages have that part "solved" via libraries... Hence lots of people don't see the need to test it, as they expect the library to have been sufficiently tested already. That leaves your tests mostly centered around your domain/business logic.
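In practice that usually ends up looking something like this (a minimal sketch, all names hypothetical): the IO sits behind a thin wrapper over a library you trust, the test stubs the wrapper, and only the business rule gets exercised.

    import json
    import urllib.request
    from unittest.mock import patch

    def http_get(url):
        """Thin IO wrapper around the (already well-tested) stdlib HTTP client."""
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def is_vip(user_id):
        """Domain rule: big spenders are VIPs. The IO is incidental."""
        user = http_get("https://api.example.com/users/%d" % user_id)
        return user["lifetime_spend"] >= 10_000

    def test_vip_flag_for_big_spenders():
        fake = {"id": 1, "lifetime_spend": 10_000}
        # Stub the IO boundary; the test only exercises the business rule.
        with patch("%s.http_get" % __name__, return_value=fake):
            assert is_vip(1)

    test_vip_flag_for_big_spenders()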
Personally I'm just doing web api dev/backend atm and have to say that at least in this domain, pretty much everything actually hard is solved.
The only difficulty is the "interesting" architecture decisions people like the OP introduce, turning inherently trivial problems into multi-week endeavors that need coordination between dozens if not hundreds of devs.
> Thing is though, most high-level languages have that part "solved" via libraries...
> ...
> Personally I'm just doing web api dev/backend atm and have to say that at least in this domain, pretty much everything actually hard is solved.
That seems a bit absurd. Surely most parts of the game won't have bits of code that interact with the architecture in unique ways. Especially if you wrote the game in relatively portable code to begin with (as WoW almost certainly was).
I mean, idk, maybe Windows ARM64 is a uniquely nasty target. But I'm skeptical.
> Surely most parts of the game won't have bits of code that interact with the architecture in unique ways.
I came across a performance-killing bug that made the game unplayable (less than 1fps on a Mac Studio). It happened in a couple of dungeons (I spotted 2). From my tests it was caused by a specific texture in the field of view at a certain distance. There was no problem on Intel Macs, AFAICT. My old MBP was terrible but did not get any performance hit.
This is what can happen any time you don’t test even a tiny corner of the game. Also, bear in mind that this depends on graphics settings and you get a nightmare of a test matrix.
On a Mac Studio it’s kind of the same thing: it has the same GPU cores that were in all the M1 chips. I could not reproduce it with AMD GPUs, but I also did not try very hard. I remember being annoyed because I always needed to remember to look away when we were doing those instances, otherwise we’d fail it because of the time it took to get out of it.
The core issue is that something slipped through the cracks. I don’t blame them, it’s a huge game and testing takes quite a lot of time. But testing does matter.