
If you don't want to pay for Kagi: https://duckduckgo.com/

Let me add a few more search providers:

- Brave Search
- Ecosia (environment focused)
- Qwant
- Mojeek (a little bit obscure)
- Swisscows (once again obscure)

There are also SearX and SearXNG and their public instances, which are actually a mix of many of the search providers I listed above.

Just giving references for anyone who's interested, but DuckDuckGo is a good option to start out with, and it's something that I myself use.


> If you don't want to pay for Kagi: https://duckduckgo.com/

It is the same shit. Dear DuckDuckGo, when I ask for results in English (or in any other language), I mean it.

It looks like these days Google and DuckDuckGo are also hallucinating. You ask for some terms, in a specific order, and get something which has no relation whatsoever to the search query.


Was anyone actually affected by this? Is this package a dependency of some popular package?

I assume the answer is no because this is clearly clickbait AI slop but who knows.


Content-Security-Policy: default-src 'none'
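
For context, a minimal sketch of serving that header from a tiny Python server; the handler, port, and page body are illustrative, not anything from the article:

    # Minimal sketch: attach a deny-all CSP to every response.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CSPHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Deny every resource type by default; relax per directive if needed.
            self.send_header("Content-Security-Policy", "default-src 'none'")
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"<h1>hello</h1>")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), CSPHandler).serve_forever()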


Raises the obvious question of why not use the Wayland protocol (on another socket, not on the compositor socket). It has mature implementations in many languages, an IDL with compilers, and every GUI application is already going to link to libwayland anyway.

(Or perhaps even COM)


vaxry has a beef with Drew Devault, who called him out and exposed his behavior when not writing code.

Considering a sizeable part of Wayland's low-level stuff was written by Drew back in the day, vaxry won't touch anything written by Drew.

This "protocol" effort is to further decouple Hyprland from Wayland infrastructure.

Like everything, this effort is again driven by ego and spite.


Now that's the info I was missing. I knew the Hyprland devs have a bad reputation, so I was wondering what was not being said. Thank you, good sir.


Peak FOSS experience


> Make clients store a cookie or something and only reply if they prove ownership of it

Unix domain socket authentication is stronger and doesn't require storing cookies on the client side.
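
A minimal sketch of what that looks like on Linux, assuming a service listening on a Unix socket (function names are mine):

    # Linux-only sketch: read the peer's pid/uid/gid via SO_PEERCRED.
    # struct ucred is three native ints: pid, uid, gid.
    import socket
    import struct

    UCRED_FMT = "3i"

    def peer_credentials(conn: socket.socket) -> tuple[int, int, int]:
        data = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                               struct.calcsize(UCRED_FMT))
        return struct.unpack(UCRED_FMT, data)

    # Usage, after server.accept(): the kernel, not the client, vouches for these.
    #   conn, _ = server.accept()
    #   pid, uid, gid = peer_credentials(conn)

No token ever crosses the wire, so there is nothing for the client to store or leak.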

> what the hell is your threat model here? The attacker is just going to ptrace firefox and read all the secrets anyway.

Which is why you can (and people do, e.g. flatpak) run applications where ptrace or global filesystem access is blocked. Which is why portals exist and why there shouldn't be a "get all secrets via dbus" escape hatch.

> I _want_ other programs to be able to read secrets (e.g. keyring administrators, .netrc-style shared secrets, etc.)

Then don't use it? Secure defaults matter for most users.

> Do you hate a{sv}? If you propose JSON as alternative, you are going to make me laugh.

Find the *kwargs here: https://wayland.app/protocols/xdg-shell

Etc. etc. This isn't the 90s anymore.


> Unix domain socket authentication is stronger and doesn't require storing cookies on the client side.

And pointless here, since everything runs under the same uid. You need to authenticate this is the same browser that stored this secret, not that this is the same uid (useless), or the same pid, or any other concept that unix domain socket authentication understands.

> Which is why you can (and people do, e.g. flatpak) run applications where ptrace or global filesystem access is blocked. Which is why portals exist and why there shouldn't be a "get all secrets via dbus" escape hatch.

In which case they do not connect to the same D-Bus "bus", and the problem is again non-existent. See how Flatpak sandboxing does it.

> Then don't use it? Secure defaults matter for most users.

Right until they notice they can no longer view the keyring contents, or hit any other stupid limitation most desktop users couldn't care less about.

In fact, if you do not need a shared secrets service, and your applications are containerized... why do you need a secrets IPC at all? Just let each program store its secrets in some of its supposedly private storage...

> Find the *kwargs here: https://wayland.app/protocols/xdg-shell

Much better to have a million non-extendable protocols competing with each other. To this day there are two protocols (at least) for exposing the address of the DbusMenu service of a surface, one for gnome-shell and one for kwin. So much for the ugliness of X atoms. And this has nothing really to do with the design of the IPC mechanism itself...


> In fact, if you do not need a shared secrets service, and your applications are containerized... why do you need a secrets IPC at all? Just let each program store its secrets in some of its supposedly private storage...

If I store my secrets in KWallet, which purports to be _storage for secrets_, I absolutely do not expect every application on the desktop to have access to those secrets, whether I want to share them or not.

I can't believe you're suggesting this is sanely defensible.


KWallet has never provided any security guarantee, so I dunno why you're surprised here. Its entire premise is centralization and sharing (i.e. not having to type each individual password over and over in each program).


It's literally how it's always worked, and not just on Linux - this is standard across desktop operating systems. Except macOS, and only very recently.

KWallet is for encryption at rest, so an attacker can't read your secrets if they steal your computer. It IS NOT protection from your own applications running as the same user.

That's just not how the Linux desktop works. It's a desktop operating system, it's not iOS. All apps running as your user have your user's permissions.

Is it an outdated security model? Yes; enter sandboxing and newer kernel features. If you're not using those, though, you won't get that protection.

Just run your shit in flatpak, problem solved. Or better yet, don't install malware and only download trusted open source software from trusted repositories.


Since Docker, we have known how to do pretty good isolation (some of the tech is shared by Flatpak and similar sandboxes) - just put stuff into different namespaces, with some auth API allowing processes to 'mount' the necessary stuff.

The closer you stick to the kernel security model, the more likely your app will be safe and performant, and the less likely other devs will reject it in favor of their hand rolled stuff.


> And pointless here, since everything runs under the same uid. You need to authenticate this is the same browser that stored this secret, not that this is the same uid (useless), or the same pid, or any other concept that unix domain socket authentication understands.

I disagree. With UNIX domain sockets it is absolutely possible to determine the PID of the process that you are talking to and use pidfd to validate where it is coming from. Would be entirely possible to use this for policy.

> In fact, if you do not need a shared secrets service, and your applications are containerized... why do you need a secrets IPC at all? Just let each program store its secrets in some of its supposedly private storage...

And how exactly does the app container service store something encrypted securely on disk? That's literally the point of a secrets service on a modern desktop. It usually gets key material in the form of a user password carried to it from PAM, in order to allow on-disk encryption without needing separate keyring passwords. (And yeah, sure, this could use TPM or something else to avoid the passwords, but the point is literally no different: it shouldn't be each application's job individually to manage its own way of getting secure storage, that's a recipe for data loss and confusion.)

> Much better to have a million non-extendable protocols competing with each other. To this day there are two protocols (at least) for exposing the address of the DbusMenu service of a surface, one for gnome-shell and one for kwin. So much for the ugliness of X atoms. And this has nothing really to do with the design of the IPC mechanism itself...

That's a problem that occurs because the protocols have multiple distinct implementations. Most of the dbus services don't have to deal with that problem at all. (And the ones that do, tend to have problems like this. There are plenty of weird incompatibilities with different XDG desktop portal implementations.)

I'm pretty sure the point of bringing up xdg-shell is because the new bus is inspired by the Wayland protocol. For all of the incessant bitching about Wayland online, Wayland protocols are only about 1000x nicer to use than dbus. You can actually do decent code generation for it without having to have like 5 competing ways to extend the XML to add basic things like struct member annotations (and then have things like Qt's own DBus code generator unable to actually handle any real DBus service definitions. Try throwing the systemd one at it, doesn't fucking work. The code doesn't even compile.)


> determine the PID of the process that you are talking to and use pidfd to validate where it is coming from.

The pidfd_open() man page doesn't list many things that can be done with a pidfd. What sort of validation do you have in mind?

I would love to have a reasonably snoop-proof secret storage service whose security model works with normal programs (as opposed to requiring Flatpaks or the like).


My reasoning behind the pidfd thing would just be as a way to try to avoid race conditions, though on second thought maybe it's not needed. I think you can take your pick on how exactly to validate the executable. My thought was to check (using /proc/.../exe) that the file is root-owned (and in a root-owned directory structure) and then use its absolute path as a key. Seems like it would be a decent start that would get you somewhere on any OS.
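
Something like this rough sketch, maybe (Linux-only; the "root-owned, absolute path as key" policy is the one described above, and the pidfd is only there to catch PID reuse during the check):

    # Rough sketch: resolve the peer's executable via /proc/<pid>/exe,
    # require it to be root-owned, and use its absolute path as the identity.
    # The root-owned-directory-structure check is omitted for brevity.
    import os
    import signal

    def peer_executable(pid: int) -> str:
        pidfd = os.pidfd_open(pid)              # Linux 5.3+, Python 3.9+
        try:
            exe = os.readlink(f"/proc/{pid}/exe")
            if os.stat(exe).st_uid != 0:
                raise PermissionError(f"{exe} is not root-owned")
            # Raises ProcessLookupError if the process the pidfd refers to is
            # gone, so a recycled PID can't slip past the checks above.
            signal.pidfd_send_signal(pidfd, 0)
            return exe                          # e.g. "/usr/lib/firefox/firefox"
        finally:
            os.close(pidfd)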

I think it would also be feasible to add code signatures if we wanted to, though this would add additional challenges. As I noted elsewhere any scheme that wants to provide a true security boundary here would need to deal with potential bypasses like passing LD_PRELOAD. Still, I think that it has to be taken one step at a time.


> With UNIX domain sockets it is absolutely possible to determine the PID of the process that you are talking to and use pidfd to validate where it is coming from.

Validate what? You're just moving the responsibility to whatever answer you give here. If you say "validate the exec name is firefox-bin" then the next person who comes in will say "I hate $your_new_fangled_ipc, you can make it dump all your secrets by renaming your exec to firefox-bin". (This is just an example).

> And how exactly does the app container service store something encrypted securely on disk? That's literally the point of a secrets service on a modern desktop.

The more I think of it, the less sense this makes. If you already have a system where applications cannot read each other's data, what is the point of secret service? What is the security advantage?

If you want to encrypt with TPM, fingerprint, or anything else, that's encryption, which is separate from storage (you can encrypt the password with say a PCR but the application gets to store the encrypted password in any way they want).

Password encryption in the desktop keyrings is for the situation where every application can easily read every other application's data files (again, as on the desktop). In which case, it may make sense to use encryption so that such data is not (trivially) accessible from any other application (otherwise https://developer.pidgin.im/wiki/PlainTextPasswords applies).

If your applications are already running sandboxed, a keyring sounds to me like useless complexity? Just make each application store its data into its sandbox. What's the threat vector here, that super-user-that-can-escape-sandbox can read into the sandboxes and extract the password?

> You can actually do decent code generation for it without having to have like 5 competing ways to extend the XML to add basic things like struct member annotations (and then have things like Qt's own DBus code generator unable to actually handle any real DBus service definitions. Try throwing the systemd one at it, doesn't fucking work. The code doesn't even compile.)

Yes sure, another problem resulting from the lack of standardization. But my point was -- standardize (write a spec), instead of adding more to the problem by creating yet another competing standard which will obviously NOT solve the problem of lack of standardization.


> Validate what? You're just moving the responsibility to whatever answer you give here. If you say "validate the exec name is firefox-bin" then the next person who comes in will say "I hate $your_new_fangled_ipc, you can make it dump all your secrets by renaming your exec to firefox-bin". (This is just an example).

I'm genuinely kind of surprised people are tripping up on this. Obviously, what you validate is up to you, but you can. Why stick to just the base name? Why not the absolute path? Bonus points for ensuring it's a root owned file in root owned paths. You could special case Flatpak, or specific mount points, or go crazy and add signatures to binaries if you want. The policy would obviously vary strongly depending on the system, but if you were dealing with a secure booted system with dm-verity, or something similar, well then this mechanism should be fairly watertight. It's not really the end of the world if there are systems with different security characteristics here.

You can really get creative.

(It is worth noting, though, that this could be trivially bypassed in various ways, like with LD_PRELOAD, so to be a true security boundary it would need more thought. Still, this could definitely be improved in numerous ways.)

> The more I think of it, the less sense this makes. If you already have a system where applications cannot read each other's data, what is the point of secret service? What is the security advantage?

Well, the obvious initial benefit is the same thing that DPAPI has had for ages, which is that it's encrypted on-disk. Of course that's good because it minimizes the number of components that will see the raw secret and ensures that even other privileged processes can't just read user secrets. Defense in depth suggests that it is a feature, not a problem, if multiple security mechanisms overlap. Bonus points if they'd both be sufficient enough to prevent attacks on their own.

An additional case worth considering is when the home folder is stored elsewhere over a network filesystem, as in some more enterprise use cases.

> If you want to encrypt with TPM, fingerprint, or anything else, that's encryption, which is separate from storage (you can encrypt the password with say a PCR but the application gets to store the encrypted password in any way they want).

It would be ill-advised to have each application deal with how to encrypt user data. They can store key material in the keyring instead of the data itself if they want to handle storage themselves. (I'm pretty sure this is actually being done in some use cases.)

> Password encryption in the desktop keyrings is for the situation where every application can easily read every other application's data files (again, as on the desktop). In which case, it may make sense to use encryption so that such data is not (trivially) accessible from any other application (otherwise https://developer.pidgin.im/wiki/PlainTextPasswords applies).

That page exists to explain why they don't bother, but part of that is that there just isn't an option. If there actually was an option, well, it would be different.

> If your applications are already running sandboxed, a keyring sounds to me like useless complexity? Just make each application store its data into its sandbox. What's the threat vector here, that super-user-that-can-escape-sandbox can read into the sandboxes and extract the password?

The threat vector is whatever you want it to be; there are plenty of things this could be useful for. The reality is that Linux desktops do not run all programs under a sandbox and we're not really headed in a direction where we will do that, either. This is probably in part because on Linux most of the programs you run are inherently somewhat vetted by your distribution and considered "trusted" (even if they are subject to MAC like SELinux or AppArmor, as in SuSE), so adding a sandbox feels somewhat superfluous and may be inconvenient (file access in Bottles is a good example). But, even in a world where all desktop apps are running in bubblewrap, it's still nice to have extra layers of defense that complement each other. And even if something or someone does manage to access your decrypted home folder data, it's nice if the most sensitive bits are protected.

> Yes sure, another problem resulting from the lack of standardization. But my point was -- standardize (write a spec), instead of adding more to the problem by creating yet another competing standard which will obviously NOT solve the problem of lack of standardization.

The reason why people don't bother doing this (in my estimation) is because DBus is demoralizing to work on. DBus isn't a mess because of one or a couple of issues, it is a mess because from the ground up, it was and is riddled with many, many shortcomings.

And therein lies the rub: if you would like to have influence in how these problems get solved, you are more than welcome to go try to improve the DBus situation yourself. You don't have to, of course, but if you're not interested in contributing to solving this problem, I don't see why anyone should be all that concerned about your opinion on how it should be fixed.


> I'm genuinely kind of surprised people are tripping up on this. Obviously, what you validate is up to you, but you can. Why stick to just the base name? Why not the absolute path? Bonus points for ensuring it's a root owned file in root owned paths.

Because you do not get it: this is not Android. There are no fixed UIDs. There are no fixed absolute paths. The binaries are not always root-owned. There is no central signing authority (thank god!). You really do not get it: _anything_ you could validate from a PID would be absolutely pointless in desktop Linux.

> You could special case Flatpak, or specific mount points, or go crazy and add signatures to binaries if you want.

Or, if you are assuming Flatpak, you could simply not allow access to the session bus and instead allow access only to a filtered bus that only allows talking to whichever services Flatpak provides. Which is how Flatpak does it, and you sideline the entire problem of having to authenticate clients on the bus, which is a nightmare. The entire process tree descending from the original Flatpak session gets access to this bus and only to this bus.

> ...that even other privileged processes can't just read user secrets. Defense in depth suggests that it is a feature, not a problem, if multiple security mechanisms overlap. Bonus points if they'd both be sufficient enough to prevent attacks on their own.

I really do not see the point of this. Of course I want privileged processes to be able to see my passwords; this is _my_ desktop.

I do not see why you'd have your "sandboxed apps" store their private data but then have another storage that is "more secure" for whatever your definition of secure is. You'd just put the data in the "more secure" storage to begin with.

What you're describing is not another layer of security, it is just pointless complication. As I said, the more I think of it, the less reason I see for a secret service which does not really share secrets.

You reach stupid conclusions like having to design a key-value DB server that only returns values to the process that inserted them in the first place, like what TFA is doing. Why? Just why??? Have multiple totally separate, private instances! And you already have one storage for that: the app's private storage. Why do you even need IPC for this?

> It would be ill-advised to have each application deal with how to encrypt user data.

Why? You do not give a reason why not. Every application does this _today_, and no IPC has ever been needed for it (e.g. OpenSSL is a library, not a service).

> The reality is that Linux desktops do not run all programs under a sandbox and we're not really headed in a direction where we will do that, either.

In which case, my entire remark does not apply and there is some (minor) benefit to a keyring.

> DBus isn't a mess because of one or a couple of issues, it is a mess because from the ground up, it was and is riddled with many, many shortcomings.

This is a circular argument. D-Bus is a mess because it is a mess. Even if I would agree, it is a pointless argument.

> you would like to have influence in how these problems get solved, you are more than welcome to go try to improve the DBus situation yourself. You don't have to, of course, but if you're not interested in contributing to solving this problem, I don't see why anyone should be all that concerned about your opinion on how it should be fixed.

I am answering a guy who says that D-Bus sucks and then proceeds to create an alternative instead of fixing it. I have not only contributed to D-Bus over decades, I am also part of the reason it is used in some commercial deployments outside traditional desktop Linux (or was a decade ago). My opinion is still as important as his, or yours, which is: nothing at all.


> Because you do not get it: this is not Android. There are no fixed UIDs.

Uhhh... I didn't say anything about fixed UIDs.

> There are no fixed absolute paths.

There are if your distribution says there are.

> The binaries are not always root-owned.

They are if your distribution says they are.

> There is no central signing authority (thank god!).

I mean, that's not even really 100% true right now. What major distribution doesn't sign packages in some form? Yeah, fine, the binaries themselves lack a signature attached to them, but if they can sign the packages they sure as shit can sign an ELF binary provided a mechanism to do so.

But anyway. There is if your distribution has one.

> You really do not get it: _anything_ you could validate from a PID would be absolutely pointless in desktop Linux.

There is if your distribution provides a way to validate something from the PID.

The point is that the mechanism would be different on each system, the same way that openSUSE may have a MAC in enforcing mode by default and Arch might not. That's how desktop Linux really works. You're not forced into any specific policy, but that doesn't mean policy is pointless. There are plenty of people running Secure Boot and Lockdown mode too; it's not automatically pointless.

Immutable distros exist right now.

> Or, if you are assuming Flatpak, you could simply not allow access to the session bus and instead allow access only to a filtered bus that only allows talking to whichever services Flatpak provides. Which is how Flatpak does it, and you sideline the entire problem of having to authenticate clients on the bus, which is a nightmare. The entire process tree descending from the original Flatpak session gets access to this bus and only to this bus.

This doesn't fix anything. Like half of the shit I have installed via Flatpak needs direct session bus access anyways.

> I really do not see the point of this. Of course I want privileged processes to be able to see my passwords; this is _my_ desktop.

Do you know what "principle of least privilege" is? My printer driver is "privileged" but that doesn't mean it needs to be able to capture my screen and read all of my passwords. It would be much better if Linux was capabilities-based. Hey, maybe someone should attempt to implement an RPC framework that does that.

> What you're describing is not another layer of security, it is just pointless complication. As I said, the more I think of it, the less reason I see for a secret service which does not really share secrets.

That's because your thought process is going in one direction and not taking any new input. You've been running the same exact narrative the entire time. Yes it's true that if you assume the system must be watertight-secure under all circumstances in every single existing desktop Linux setup, then it can't be done. But that's irrelevant. The question is could it be used as a primitive to construct a more secure Linux desktop, and the answer is a resounding, "Well, duh".

Again. Immutable distros exist right now. They already start with many of the necessary security properties, you mostly need to find a way to deal with insecure linker behavior.

> You reach stupid conclusions like having to design a key-value DB server that only returns values to the process that inserted them in the first place, like what TFA is doing. Why? Just why??? Have multiple totally separate, private instances! And you already have one storage for that: the app's private storage. Why do you even need IPC for this?

sigh

You do realize that this is how many apps currently use GNOME Keyring, right? They literally use it to store their own passwords for no other purposes. That's literally already a thing. The intent is not so they can share your password across the system, it is to provide a mechanism to securely store data. Sometimes it is also used to share data between different programs, but I don't even think that is most of the time.

Checking my kdewallet, I can see the following applications:

- KRDC

- Remmina

- KRDP

- Chromium

- krfb

- xdg-desktop-portal

... And then the "Passwords" folder, which contains the passwords saved for e.g. SMB shares.

Of those... I think the only one that is ever even accessed by anything else is the Chromium one possibly, for browser migration? The rest are only ever stored for themselves. So yes, the wallet is being used as a dumb key-value store. One that is encrypted automatically without needing the application to do key management.

> Why? You do not give a reason why not. Every application does this _today_, and no IPC has ever been needed for it (e.g. OpenSSL is a library, not a service).

Not cryptography itself, but key management. Although you can do key management by proxy by providing a cryptography API too, sort of like how using the TPM for this purpose works. That is the approach taken by DPAPI on Windows, in contrast to the keyring approach taken on Linux and macOS (where, from the app's perspective, you get a key-value store and do neither cryptography nor key management in the app).

And the "Why?" is very simple. You want a secure key that doesn't get lost. The user already has a password, ergo it already provides a perfectly good passphrase to wrap a key; individual applications can't access that. And that way, the OS can take care of whether user data is encrypted with a TPM or using a passphrase-wrapped key. Having this control centralized could become important if it's ever required by regulation to be handled a certain way.

> This is a circular argument. D-Bus is a mess because it is a mess. Even if I would agree, it is a pointless argument.

It's not circular at all... man, you really need to learn how to read. What I am saying is that it is not only a mess, but deeply flawed. Even if you clean it up, what you will wind up with is an aggressively polished turd. There is no world in which that is logically worth the effort.

> I am answering a guy who says that D-Bus sucks and then proceeds to create an alternative instead of fixing it. I have not only contributed to D-Bus over decades, I am also part of the reason it is used in some commercial deployments outside traditional desktop Linux (or was a decade ago). My opinion is still as important as his, or yours, which is: nothing at all.

Well, it's at least good that you also recognize the value I place in your opinion at this point, so that we can agree on at least one thing.

But honestly outside of being a smarmy dickhead (sorry, but I never have any reservations firing back when someone is doing it to me) the point I'm making is that if you want DBus to get fixed instead, well, good for you. I would like a Threadripper for Christmas, while we're at it. Just don't act like it's weird when someone who is actually doing something about the trainwreck that is the Linux desktop decides to do something else instead given they have no reason to care about the peanut gallery here.

(And it really doesn't matter to me what Windows or macOS does here honestly. I care about the Linux desktop, not how it compares to whatever crapware Microsoft and Apple are pushing. They haven't really meaningfully improved the desktop in the past 10 years anyways.)


> There are if your distribution says there are.

You STILL do not get it: This Is Not Android. Even ignoring that most distributions are not going to do what you want them to do (that's the entire point), the distribution has zero power over what users do. Firefox _is_ still distributed as a simple tarball that you can unpack in your home directory and run.

If flatpak or your distribution suddenly start requiring _root_ so that users can run programs that do not really require root, this not only would be a net security degradation rather than improvement, it would prevent me from using that distribution altogether. And yes, these programs need access to the desktop, the theming service, accessibility, input management, etc. for obvious reasons.

This is why even the current method is superior to anything you are proposing here, and the SCM_CREDENTIALS stuff just does not rightly fit in the Linux desktop model.

> I mean, that's not even really 100% true right now. What major distribution doesn't sign packages in some form?

ANY that is source-based, for example. Actually, I have not used _any_ distribution that signs even its packages in ages, much less its binaries.

Go and check how much love you are going to get if you start asking people to sign their own binaries in order to run them on their own machines, to implement a sandboxing mechanism that there are a million other, far less intrusive, ways to implement.

> The point is that the mechanism would be different on each system, the same way that openSUSE may have a MAC in enforcing mode by default and Arch might not.

It doesn't really matter. A user is then going to mount a $HOME over NFS onto a server box and expect to run graphical applications from there.

> Like half of the shit I have installed via Flatpak needs direct session bus access anyways.

A problem I fully acknowledge. But how would _any_ D-Bus replacement fix that? You'd need to replace _all_ IPC servers to offer some form of "no longer trust the clients" API (more or less what Wayland supposedly did to X11), and this is a decades-long task _at best_. A different IPC system doesn't really help, and may even delay this!

> Do you know what "principle of least privilege" is?

Irrelevant when my point is that I want _my_ privileged process to manage the password keyring itself.

> You do realize that this is how many apps currently use GNOME Keyring, right? They literally use it to store their own passwords for no other purposes. That's literally already a thing. The intent is not so they can share your password across the system, it is to provide a mechanism to securely store data.

The entire raison d'être for KWallet was to store AND SHARE passwords amongst programs, as a glorified netrc replacement. The "secure" storage part came later, and few if any people enable the auto-locking mechanism, which is the only thing that can offer any sensible protection. As you are aware, KWallet and gnome-keyring have never provided any level of actual secure storage.

> And the "Why?" is very simple. You want a secure key that doesn't get lost. The user already has a password, ergo it already provides a perfectly good passphrase to wrap a key; individual applications can't access that. And that way, the OS can take care of whether user data is encrypted with a TPM or using a passphrase-wrapped key. Having this control centralized could become important if it's ever required by regulation to be handled a certain way.

Why do you make this distinction for secrets, and not for literally all the rest of the extremely critical private data that the app stores? Why would the user want to back up the passwords but not the private app documents?

"You want a private data storage that doesn't get lost. The user already has a password, ergo it already provides a perfectly good passphrase to wrap such private data; [other] individual applications can't access that. And that way, the OS can take care of whether user data is encrypted with a TPM or using a passphrase-wrapped key."

I mean, from your description, you want KWallet to provide protection so that say the mail client cannot access the browser passwords (something I already disagree with). But don't you also want, in this containerized world, for the mail client to NOT be able to read your browser's history, for example? If you already provide secure, containerized storage for apps, I fail to see any specific distinction between "private app data" and "private app secret" (I would still make the distinction if it was "shared"). You'd want your history to be protected (from other programs) as much as any website password.

I do not even know why you bring up the word "regulation" in here.

> What I am saying is that it is not only a mess, but deeply flawed. Even if you clean it up, what you will wind up with is an aggressively polished turd

This is not a reading problem: you are still saying that it is a mess because it is a mess. If you want to break the circular reasoning, you have to introduce an actual reason into the loop.

> Just don't act like it's weird when someone who is actually doing something about the trainwreck that is the Linux desktop decides to do something else instead given they have no reason to care about the peanut gallery here.

You keep framing this as if we were not improving the Linux desktop too, which just shows your bias.

My point: Just don't act like it's weird when the actions only result in a LARGER and more explosive trainwreck rather than an improvement.


The funny/sad detail is that the 80s (at the latest) already solved the problem. Have a look at XDR and Sun RPC. They have both strongly typed APIs and versioning built in. You would have to come up with your own authentication mechanism for applications, e.g. have them send a cookie file (descriptor).


Make faster websites:

> we started rolling out an increase to our buffer size to 1MB, the default limit allowed by Next.js applications.

Why is the Next.js limit 1 MB? It's not enough for uploading user-generated content (photographs, scanned invoices), but a 1 MB request body for even multiple JSON API calls is ridiculous. These frameworks need to at least provide some pushback against unoptimized development, even if it's just a lower default request body limit. Otherwise all web applications will become as slow as the MS Office suite or Reddit.


The update was to raise it to 3 MB (10 MB for paid).


a) They serialize tons of data into requests.

b) Headers. Mostly cookies. They are a thing. They are being abused all over the world by newbies.


> Is the output of your C compiler the same every time you run it?

Yes? Because of actual engineering, mind you, and not rolling the dice until the lucky number comes up.

https://reproducibility.nixos.social/evaluations/2/2d293cbfa...


It's not true for a place-and-route engine, so why does it have to be true for a C compiler?

Nobody else cares. If you do, that's great, I guess... but you'll be outcompeted by people who don't.



That's an advertisement, not an answer.


Did you really read and understand this page in the 1 minute between my post and your reply or did you write a dismissive answer immediately?


Eh, I'll get an LLM to give me a summary later.

In the meantime: no, deterministic code generation isn't necessary, and anyone who says it is is wrong.


Copilot is so useless compared to the rest; using Windows really is like trying to evade having shit smeared in your face all the time.


But they aren't the first. Google is the first frontier model lab to go public.


Index investors aren't exposed to IPOs, since the common indexes (SPX etc) don't include IPOs (and if you invest in a YOLO index that does, that's on you).

Also:

> The US led a sharp rebound, driven by a surge in IPO filings and strong post-listing returns following the Federal Reserve’s rate cut.

https://www.ey.com/en_us/insights/ipo/trends


VTI and VT, two of the largest index funds, DO invest in unprofitable companies.

And for the rest (S&P 500 etc.), these companies are going to fake profits using some sort of financial engineering to be included.

