I'm genuinely confused about all this. Can someone help me out?
I've been buying and playing games from GOG on Linux for a very long time with no need for GOG Galaxy -- which is a thing I know nothing about. Since this announcement, I've been trying to figure out why I'd need it.
It seems like it's just a convenience application and social connection point (leaderboards, etc.). In which case, it's not something of interest to me. However, I've also seen references to Galaxy that imply that it's necessary to play games -- which is obviously untrue in general, but perhaps there are some games that require it?
If native software was routinely available, launchers might not feel necessary.
But I sure as hell don't want to invest however many weekend days figuring out how to make games from other platforms as easy to play as Steam games on SteamOS.
I imagine this is that - give me "download" and "play" buttons that let me run GOG games on Linux, even if the binaries were authored for Windows.
Cloud saves and achievements and all that are nice (and expected from something like GOG), but even just a normal launcher feels essential on Linux.
> If native software was routinely available, launchers might not feel necessary.
> But I sure as hell don't want to invest however many weekend days figuring out how to make games from other platforms as easy to play as Steam games on SteamOS.
For games that are licensed under terms that allow it, Debian's Game Data Packager has already automated that work. And, as your comment suggests, a native port is much better than running on a Wine shim, which will always be second-rate.
Does that effectively replace the .exe parts of a Proton game with an equivalent Linux engine, while letting Steam et al. manage the artwork/levels/etc?
No, it packages the game data for open-source game engines (data which can't be redistributed because it is copyrighted) so that it can be installed and will work with the engines that already have Debian packages.
So in the case of Quake (for example) it makes a .deb file, which when installed will create the directory structure in the correct place and put the .pak files, config files, etc. where Debian's Quake engine package(s)[0] will look for them. This .deb file for the Quake game data won't do anything on its own. You need to also install a Quake engine, which Debian includes.
You can create the game data packages from the installation CD, from a working install directory, or from a Good Old Games installer.
Convenience is 100% Steam’s most important feature. Finding games, installing them, updating, auto-login, cloud saves, probably more that I can’t think of right now.
Yeah, I'm remembering the time immediately before Steam launched, getting a computer set up with games for a LAN party or whatever, someone sharing a folder of installers/updates from their HDD so everyone could be on the same version and whatnot... and that was the best-case scenario. Sometimes you just don't play a certain game because half the people have a different version or whatever haha
People always say that Heroic or the other one (I forget the name) is seamless, but I needed to do troubleshooting with a few of the games I owned. In one case, the best solution was to install it via Steam as a non-Steam game. So, I'm hoping for better support and compatibility.
There are games distributed by GOG which rely on the Galaxy client for multiplayer functions. For example, the GOG version of Grim Dawn needs the Galaxy client to be running to enable multiplayer. Solo play works without Galaxy.
Gotcha. I don't play multiplayer games or want the other features that people here have mentioned, so my current understanding is that it's safe for me to ignore the Galaxy application.
It's just a convenience app, but it's a pretty nice one. When I moved my main PC from Windows to Linux, I was definitely sad to lose the ecosystem of nice launcher apps (GOG Galaxy but also others like Playnite, Launchbox, etc). The dream for me is to have all my games in one cohesive library, and that's what these sorts of apps offer. On Linux I use Lutris for this and it's fine enough, but I'll definitely be taking a look at Galaxy when it comes to Linux.
Galaxy is purely convenience. If you want to see all your games from all storefronts (Epic, Steam, GOG, etc) in one place, Galaxy lets you do that. (Along with the social stuff)
You can still play GOG games without any launcher, which is how it's intended to work.
Some people really like having a launcher to keep track of everything, so this isn't a nothing burger. It's one more convenience to help convince people to move over.
also I believe it helps you track save games. I have multiple Linux boxes I play GOG games on using Heroic launcher and save game tracking is a big issue (maybe there's a way to do this with Heroic, idk). But I think Galaxy would help here.
I don't do this to force me to take breaks, but it does that as a side-effect. I am constantly drinking plain water while I'm working, which makes me get up to relieve myself every couple of hours.
> Where do you think the line should be drawn between AI assistance and human judgment?
I 100% don't want genAI to be making any medical decisions, nor do I want a doctor who just accepts what genAI says as reliable fact.
But for me, when it comes to this kind of thing, that's not even the question. There's no chance I'd be willing to trust my sensitive personal information to a genAI system in the first place out of security/privacy concerns. I don't want it to even take notes because that would require it to be given sensitive information.
So I don't get far enough for "where is the line" questions to be important to me.
I honestly don't think that Microsoft even knows what a good "overall experience of Windows" consists of. That's the charitable take. The uncharitable take is that revenue generation will trump user experience every day of the week.
> revenue generation will trump user experience every day of the week
The iron law of encrapification - I doubt Microsoft (or Apple or Google or any other company) can overcome it, as the business incentives (at least in the short term) are too great.
My feeling is that the operator of the fleet is responsible for what the cars in the fleet do. If a Waymo car is violating traffic/parking laws, then Waymo should get a ticket for that. Accumulating enough of those tickets should result in the same consequences as if a human had accumulated them.
> “The AI hallucinated. I never asked it to do that.”
> That’s the defense. And here’s the problem: it’s often hard to refute with confidence.
Why is it necessary to refute it at all? It shouldn't matter, because whoever is producing the work product is responsible for it, no matter whether genAI was involved or not.
The distinction some people are making is between copy/pasting text vs agentic action. Generally, mistakes in "work product" (output from ChatGPT that the human then files with a court, etc.) are not forgiven, because if you signed the document, you own its content. Versus some vendor-provided AI agent which simply takes action on its own that a "reasonable person" would not have expected it to. Often we forgive those kinds of software bloopers.
"Agentic action" is just running a script. All that's different is now people are deploying scripts that they don't understand and can't predict the outcome of.
It's negligence, pure and simple. The only reason we're having this discussion is that a trillion dollars was spent writing said scripts.
If I hire an engineer and that engineer authorizes an "agent" to take an action, if that "agentic action" then causes an incident, guess whose door I'm knocking on?
Engineers are accountable for the actions they authorize. Simple as that. The agent can do nothing unless the engineer says it can. If the engineer doesn't feel they have control over what the agent can or cannot do, under no circumstances should it be authorized. To do so would be alarmingly negligent.
This extends to products. If I buy a product from a vendor and that product behaves in an unexpected and harmful manner, I expect that vendor to own it. I don't expect error-free work, yet nevertheless "our AI behaved unexpectedly" is not a deflection, nor is it satisfactory when presented as a root cause.
This is true for bricks, but it is not true if your dog starts up your car and hits a pedestrian. Collisions caused by non-human drivers are a fascinating edge case for the times we're in.
It is very much true for dogs in that case: (1) it is your dog (2) it is your car (3) it is your responsibility to make sure your car can not be started by your dog (4) the pedestrian has a reasonable expectation that a vehicle that is parked without a person in it has been made safe to the point that it will not suddenly start to move without an operator in it and dogs don't qualify.
what if your car was parked in a normal way that a reasonable person would not expect to be able to be started by a dog, but the dog did several things that no reasonable person would expect and started it anyway?
You can 'what if' this until the cows come home but you are responsible, period.
I don't know what kind of drivers education you get where you live but where I live and have lived one of the basic bits is that you know how to park and lock your vehicle safely and that includes removing the ignition key (assuming your car has one) and setting the parking brake. You aim the wheels at the kerb (if there is one) when you're on an incline. And if you're in a stick shift you set the gear to neutral (in some countries they will teach you to set the gear to 1st or reverse, for various reasons).
We also have road worthiness assessments that ensure that all these systems work as advertised. You could let a pack of dogs loose in my car in any external circumstance and they would not be able to move it, though I'd hate to clean up the interior afterwards.
I agree. The dog smashed the window, hot-wired the ignition, released the parking brake, shifted to drive, and turned the wheel towards the opposite side of the road where a mother was pushing a stroller, killing the baby. I know, crazy right, but I swear I'm not lying, the neighbor caught it on camera.
Who's liable?
I think this would be a freak accident. Nobody would be liable.
Well at that point we might as well say it's gremlins that you summoned, so who knows, there are no laws about gremlins hot-wiring cars. If you summoned them, are they _your_ gremlins, or do they have their own agency. How guilty are you, really... At some point it becomes a bit silly to go into what-if scenarios, it helps to look at exact cases.
> I agree. The dog smashed the window, hot-wired the ignition,
> released the parking brake, shifted to drive, and turned the
> wheel towards the opposite side of the road where a mother was
> pushing a stroller, killing the baby. I know, crazy right, but
> I swear I'm not lying, the neighbor caught it on camera.
> Who's liable?
You are. It's still your dog. If you would replace dog with child the case would be identical (but more plausible). This is really not as interesting as you think it is. The fact that you have a sentient dog is going to be laughed out of court and your neighbor will be in the dock together with you for attempting to mislead the court with your AI generated footage. See, two can play at that.
When you make such ridiculously contrived examples turnaround is fair play.
You would not be guilty of a crime, because that requires intent.
But you would be liable for civil damages, because that does not. There are multiple theories for which to establish liability, but most likely this would be treated as negligence.
What if you have an email in your inbox warning you that 1) this specific bush attracts bats and 2) there were in fact bats seen near your bush and 3) bats were observed almost biting a child before. And you also have "how do I fuck up them kids by planting a bush that attracts bats" in your browser history. It's a spectrum you know.
Well, if it was a bush known to also attract children, it was on your property, and the child was in fact attracted by it and also on your property, and the presence of the bush created the danger of bat bites, the principle of “attractive nuisance” is in play.
To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime. "It's my robot, it wasn't me" isn't a compelling defense - if you can prove that it behaved significantly outside of your informed or contracted expectations, then maybe the AI platform or the Robot developer could be at fault. Given the current state of AI, though, I think it's not unreasonable to expect that any bot can go rogue, that huge and trivially accessible jailbreak risks exist, so there's no excuse for deploying an agent onto the public internet to do whatever it wants outside direct human supervision. If you're running moltbot or whatever, you're responsible for what happens, even if the AI decided the best way to get money was to hack the Federal Reserve and assign a trillion dollars to an account in your name. Or if Grok goes mechahitler and orders a singing telegram to Will Stancil's house, or something. These are tools; complex, complicated, unpredictable tools that need skillful and careful use.
There was a notorious dark web bot case where someone created a bot that autonomously went onto the dark web and purchased numerous illicit items.
They bought some ecstasy, a Hungarian passport, and random other items from Agora.
>The day after they took down the exhibition showcasing the items their bot had bought, the Swiss police “arrested” the robot, seized the computer, and confiscated the items it had purchased. “It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited, by destroying them,” someone from !Mediengruppe Bitnik wrote on their blog.
> In April, however, the bot was released along with everything it had purchased, except the ecstasy, and the artists were cleared of any wrongdoing. But the arrest had many wondering just where the line gets drawn between human and computer culpability.
that darknet bot one always confuses me. The artists/programmers/whatever specifically instructed the computer, through the bot, to perform actions that would likely result in breaking the law. It's not a side-effect of some other, legal action which they were trying to accomplish; its entire purpose was to purchase things on a marketplace known for hosting illegal goods and services.
If I build an autonomous robot that swings a hunk of steel on the end of a chain and then program it to travel to where people are likely to congregate and someone gets hit in the face, I would rightfully be held liable for that.
> To me, it's 100% clear - if your tool use is reckless or negligent and results in a crime, then you are guilty of that crime.
For most crimes, this is circular, because whether a crime occurred depends on whether a person did the requisite act of the crime with the requisite mental state. A crime is not an objective thing independent of an actor that you can determine happened as a result of a tool and then conclude guilt for based on tool use.
And for many crimes, recklessness or negligence as mental states are not sufficient for the crime to have occurred.
For negligence that results in the death of a human being, many legal systems make a distinction between negligent homicide and criminally negligent homicide. Where the line is drawn depends on a judgment call, but in general you're found criminally negligent if your actions are completely unreasonable.
A good example might be this. In one case, a driver's brakes fail and he hits and kills a pedestrian crossing the street. It is found that he had not done proper maintenance on his brakes, and the failure was preventable. He's found liable in a civil case, because his negligence led to someone's death, but he's not found guilty of a crime, so he won't go to prison. A different driver was speeding, driving at highway speeds through a residential neighborhood. He turns a corner and can't stop in time to avoid hitting a pedestrian. He is found criminally negligent and goes to prison, because his actions were reckless and beyond what any reasonable person would do.
The first case was ordinary negligence: still bad because it killed someone, but not so obviously stupid that the person should be in prison for it. The second case is criminal negligence, or in some legal systems it might be called "reckless disregard for human life". He didn't intend to kill anyone, but his actions were so blatantly stupid that he should go to prison for causing the pedestrian's death.
That idea is really weird. Culpa (and dolus) in occidental law is a thing of the mind, what you understood or should have understood.
A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.
>A database does not have a mind, and it is not a person. If it could have culpa, then you'd be liable for assault, perhaps murder, if you took it apart.
We as a society, for our own convenience, can choose to believe that an LLM does have a mind and can understand the results of its actions. The second part doesn't really follow. Can you even hurt an LLM in a way that is equivalent to murdering a person? Evicting it off my computer isn't necessarily a crime.
It would be good news if the answer was yes, because then we just need to find a converter of camel amounts to dollar amounts and we are all good.
Can an LLM perceive time in a way that allows imposing an equivalent of jail time? Is the LLM I'm running on my computer the same personality as the one running on yours, and should I also shut down mine when yours acted up? Do we even need the punishment aspect of it and not just rehabilitation, repentance and retraining?
It's only a hallucination if you are the only one seeing it. Otherwise the line between that, a social construct and a religious belief is a bit blurry.
Yeah - I'm pretty sure, technically, that current AI isn't conscious in any meaningful way, and even the agentic scaffolding and systems put together lack any persistent, meaningful notion of "mind", especially in a legal sense. There are some newer architectures and experiments with the subjective modeling and "wiring" that I'd consider solid evidence of structural consciousness, but for now, AI is a tool. It also looks like we can make tools arbitrarily intelligent and competent, and we can extend the capabilities to superhuman time scales, so I think the law needs to come up with an explicit precedent for "This person is the user of the tool which did the bad thing" - it could be negligent, reckless, deliberate, or malicious, but I don't think there's any credibility to the idea that "the AI did it!"
At worst, you would confer liability to the platform, in the case of some sort of blatant misrepresentation of capabilities or features, but absolutely none of the products or models currently available withstand any rational scrutiny into whether they are conscious or not. They at most can undergo a "flash" of subjective experience, decoupled from any coherent sequence or persistent phenomenon.
We need research and legitimate, scientific, rational definitions for agency and consciousness and subjective experience, because there will come a point where such software becomes available, and it not only presents novel legal questions, but incredible moral and ethical questions as well. Accidentally oopsing a torment nexus into existence with residents possessed of superhuman capabilities sounds like a great way to spark off the first global interspecies war. Well, at least since the Great Emu War. If we lost to the emus, we'll have no chance against our digital offspring.
A good lawyer will probably get away with "the AI did it, it wasn't me!" before we get good AI law, though. It's too new and mysterious and opaque to normal people.
If the five year old was a product resulting from trillions of dollars in investments, and the marketability of that product required people to be able to hand guns to that five year old without liability, then we would at least be having that discussion.
No purveyors of agentic AI are taking on liability for the consequences of their users deploying it. It's literally not in any of their terms of use or licensing or whatever document. That's just in the imaginations of the "AI did it" excuse makers. It may be that the imagination is stirred up by marketing and surrounding hype, but it's not in the binding wording. Nobody is going to do that; it would be crazy. Like owing money to anyone claiming to have lost their files, or lost clients, or whatever other harm.
> If the five year old was a product resulting from trillions of dollars in investments
In a weird way, that's actually true. It's a highly- (soon to be fully-) autonomous giga-swarm of the most complicated nanobots in existence, the result of investments over hundreds of thousands of years.
That said, we don't really get to choose which ones we own, although we do have input on their maintenance. :p
> if you signed the document, you own its content. Versus some vendor-provided AI Agent which simply takes action on its own
Yeah, that's exactly the approach I think we should adopt for AI agent tool calls as well: cryptographically signed, task-scoped "warrants" that remain traceable even in cases of multi-agent delegation chains
> Agent Trace is an open specification for tracking AI-generated code. It provides a vendor-neutral format for recording AI contributions alongside human authorship in version-controlled codebases.
Similar space, different scope/approach.
Tenuo warrants track who authorized what across delegation chains (human to agent, agent to sub-agent, sub-agent to tool) with cryptographic proof & PoP at each hop.
Trace tracks provenance. Warrants track authorization flow.
Both are open specs. I could see them complementing each other.
Why does it need cryptography even? If you gave the agent a token to interact with your bank account, then you gave it permission. If you want to limit the amount it is allowed to send and a list of recipients, put a filter that sits between the account and the agent that enforces it. If you want the money to be sent only based on the invoice, let the filter check that an invoice reference is provided by the agent. If you did neither of that and the platform that runs the agents didn't accept the liability, it's on you. Setting up filters and engineering prompts is on you too.
Now if you did all of that, but made a bug in implementing the filter, then you at least tried and weren't negligent, but it's still on you.
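To make it concrete, here's a minimal sketch of what such a filter could look like. Everything in it is hypothetical (the PaymentRequest type, send_payment, the allowlist and the cap are all made up for illustration): the point is that the policy is plain deterministic code that holds the credential, and the agent only ever calls the wrapper, never the account directly.

    # Minimal sketch of the "filter between the agent and the account" idea.
    # All names and limits here are hypothetical; the policy is deterministic
    # code that holds the credential, not something the LLM enforces on itself.
    from dataclasses import dataclass

    ALLOWED_RECIPIENTS = {"ACME GmbH", "Hosting Co"}   # explicit allowlist
    MAX_AMOUNT_EUR = 500.00                            # per-transaction cap

    @dataclass
    class PaymentRequest:
        recipient: str
        amount_eur: float
        invoice_ref: str = ""   # agent must cite an invoice

    def check_policy(req: PaymentRequest) -> None:
        """Raise if the agent's request violates the human-defined policy."""
        if req.recipient not in ALLOWED_RECIPIENTS:
            raise PermissionError(f"recipient not allowed: {req.recipient}")
        if req.amount_eur > MAX_AMOUNT_EUR:
            raise PermissionError(f"amount exceeds cap: {req.amount_eur}")
        if not req.invoice_ref:
            raise PermissionError("no invoice reference supplied")

    def send_payment(req: PaymentRequest) -> None:
        check_policy(req)   # enforced here, not by the LLM
        # ...the real banking API call would go here, using a credential
        # that only this filter process holds.
        print(f"paid {req.amount_eur:.2f} EUR to {req.recipient} ({req.invoice_ref})")

    # The agent only ever calls send_payment(); anything outside the policy fails.
    send_payment(PaymentRequest("ACME GmbH", 120.0, "INV-2024-0042"))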
Tokens + filters work for single-agent, single-hop calls. Gets murky when orchestrators spawn sub-agents that spawn tools. Any one of them can hallucinate or get prompt-injected.
We're building around signed authorization artifacts instead. Each delegation is scoped and signed, chains are verifiable end-to-end. Deterministic layer to constrain the non-deterministic nature of LLMs.
>We're building around signed authorization artifacts instead. Each delegation is scoped and signed, chains are verifiable end-to-end. Deterministic layer to constrain the non-deterministic nature of LLMs.
Ah, I get it. So the token can be downscoped before being passed on, like the pledge thing, so a sub-agent doesn't exceed the scope of its parent. I have a feeling that it's like cryptography in general -- you get one problem and reduce it to a key management problem.
In a more practical sense, if the non-deterministic layer decides what the reduced scope should be, all delegations can become "Allow: *" in the most pathological case, right? Or like play store, where a shady calculator app can have a permission to read your messages. Somebody has to review those and flag excessive grants.
Right, the non-deterministic layer can't be the one deciding scope. That's the human's job at the root.
The LLM can request a narrower scope, but attenuation is monotonic and enforced cryptographically. You can't sign a delegation that exceeds what you were granted. TTL too: the warrant can't outlive its parent.
So yes, key management. But the pathological "Allow: *" has to originate from a human who signed it. That's the receipt you're left holding.
You're poking at the right edges though. UX for scope definition and revocation propagation are what we're working through now. We're building this at tenuo.dev if you want to dig into the spec or poke holes.
>So yes, key management. But the pathological "Allow: *" has to originate from a human who signed it. That's the receipt you're left holding.
Sure. But generally speaking, I want my agent to send out emails, so I explicitly grant email reading and email writing. I also want it to pay invoices, but with some semantic condition.
Then I give it the instruction to do something that implicitly requires only email reading. At which point is the scope narrowed to align the explicit permissions I granted before with the implicit one for this operation? That's not really a problem cryptography helps solve.
Should it be the other way around maybe -- only read permission is granted first and then it has to request additional permissions for send?
Yep ... that's exactly the direction. Think "default deny + step-up," not "grant everything up front."
You keep a coarse cap (e.g. email read/write, invoice pay) but each task runs under a narrower, time-boxed warrant derived from that cap. Narrowing happens at the policy/UX layer (human or deterministic rules), not by the LLM. The LLM can request escalation ("need send"), but it only gets it via an explicit approval / rule.
Crypto isn't deciding scope. It's enforcing monotonic attenuation, binding the grant to an agent key, and producing a receipt that the scope was explicitly approved.
For a single-process agent this might be overkill. It matters more when warrants cross trust boundaries: third-party tools, sub-agents in different runtimes, external services. Offline verification means each hop can validate without calling home
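For the attenuation part, here's a toy sketch of the property being described. It is not the actual Tenuo spec: the field names are invented, and a single shared HMAC key stands in for the per-agent public-key signatures a real system would use. What it shows is scope that can only narrow and a TTL that can't outlive the parent, both checked by deterministic code rather than the model.

    # Toy sketch of monotonic attenuation (not the Tenuo spec). A shared HMAC
    # key stands in for per-hop public-key signatures, only to keep this
    # self-contained and runnable.
    import hashlib
    import hmac
    import json
    import time

    KEY = b"demo-key"   # hypothetical; real warrants would be signed per hop

    def sign(payload: dict) -> dict:
        body = json.dumps(payload, sort_keys=True).encode()
        return {"payload": payload,
                "sig": hmac.new(KEY, body, hashlib.sha256).hexdigest()}

    def attenuate(parent: dict, scope: set, ttl: float) -> dict:
        """Derive a child warrant: scope may only narrow, TTL may only shrink."""
        p = parent["payload"]
        if not scope <= set(p["scope"]):
            raise PermissionError("child scope exceeds parent scope")
        if ttl > p["expires"]:
            raise PermissionError("child warrant would outlive its parent")
        return sign({"scope": sorted(scope), "expires": ttl,
                     "parent": parent["sig"]})

    def verify(warrant: dict, action: str) -> None:
        body = json.dumps(warrant["payload"], sort_keys=True).encode()
        expected = hmac.new(KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(warrant["sig"], expected):
            raise PermissionError("bad signature")
        if time.time() > warrant["payload"]["expires"]:
            raise PermissionError("warrant expired")
        if action not in warrant["payload"]["scope"]:
            raise PermissionError(f"action not in scope: {action}")

    # Human signs the root cap; the orchestrator derives a narrower warrant
    # for a sub-agent; the sub-agent cannot widen it back to "email:send".
    root = sign({"scope": ["email:read", "email:send"],
                 "expires": time.time() + 3600})
    child = attenuate(root, {"email:read"}, time.time() + 600)
    verify(child, "email:read")              # passes
    try:
        verify(child, "email:send")          # fails: scope was narrowed away
    except PermissionError as err:
        print("denied:", err)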
Not every access token is a (public) key or a signed object. It may be, but it doesn't have to. It's not state of the art, but also not unheard of to use a pre-shared secret with no cryptography involved and to rely on presenting the secret itself with each request. Cookie sessions are often like that.
I feel like you have missed the point of this. It isn't to completely absolve the user of liability, it's to prove malice instead of incompetence.
If the user claims that they only authorized the bot to review files, but they've warranted the bot to both scan every file and also send emails to outside sources, the competitors in this case, then you now have proof that the user was planning on committing corporate espionage.
To use a more sane version of an example below, if your dog runs outside the house and mauls a child, you are obviously guilty of negligence, but if there's proof of you unleashing the dog and ordering the attack, you're guilty of murder.
Yeah. Legal will need to catch up to deal with some things, surely, but the basic principles for this particular scenario aren't that novel. If you're a professional and have an employee acting under your license, there's already liability. There is no warrant concept (not that I can think of right now, at least) that will obviate the need to check the work and carry professional liability insurance. There will always be negligence and bad actors.
The new and interesting part is that while we have incentives and deterrents to keep our human agents doing the right thing, there isn't really an analog to check the non-human agent. We don't have robot prison yet.
> It shouldn't matter, because whoever is producing the work product is responsible for it, no matter whether genAI was involved or not.
I hate to ask, but did you RTFA? Scrolling down ever so slightly (emphasis not my own)
| *Who authorized this class of action, for which agent identity, under what constraints, for how long; and how did that authority flow?*
| A common failure mode in agent incidents is not “we don’t know what happened,” but:
| > We can’t produce a crisp artifact showing that a specific human explicitly authorized the scope that made this action possible.
They explicitly state that the problem is you don't know which human to point at.
> They explicitly state that the problem is you don't know which human to point at.
The point is "explicitly authorized", as the article emphasizes. It's easy to find who ran the agent(article assumes they have OAuth log). This article is about 'Everyone knows who did it, but did they do it on purpose? Our system can figure it out'
The workflow of starting dozens or hundreds of "agents" that work autonomously is starting to gain traction. The goal of people who work like this is to completely automate software development. At some point they want to be able to give the tool an arbitrary task, presumably one that benefits them in some way, and have it build, deploy, and use software to complete it. When millions of people are doing this, and the layers of indirection grow in complexity, how do you trace the result back to a human? Can we say that a human was really responsible for it?
Maybe this seems simple today, but the challenges this technology forces on society are numerous, and we're far from ready for it.
When orchestrators spawn sub-agents spawn tools, there's no artifact showing how authority flowed through the chain.
Warrants are a primitive for this: signed authorization that attenuates at each hop. Each delegation is signed, scope can only narrow, and the full chain is verifiable at the end. Doesn't matter how many layers deep.
Except for the fact that that very accountability sink is relied on by senior management/CxO's the world over. The only difference is that before AI, it was the middle manager's fault. We didn't tell anyone to break the law. We just put in place incentive structures that require it, and play coy, then let anticipatory obedience do the rest. Bingo. Accountability severed. You can't prove I said it in a court of law, and skeevy shit gets done because some poor bloke down the ladder is afraid of getting fired if he doesn't pull out all the stops to meet productivity quotas.
AI is just better because no one can actually explain why the thing does what it does. Perfect management scapegoat without strict liability being made explicit in law.
Did I give the impression that the phenomenon was unique to software? Hell, Boeing was a shining example of the principle in action with the 737 MAX. Don't get much more "people live and die by us, and we know it (but management set up the culture and incentives to make a deathtrap anyway)." No one to blame of course. These things just happen.
Licensure alone doesn't solve all these ills. And for that matter, once regulatory capture happens, it has a tendency to make things worse due to consolidation pressure.
>AI is just better because no one can actually explain why the thing does what it does. Perfect management scapegoat without strict liability being made explicit in law.
AI is worse in that regard, because, although you can't explain why it does so, you can point a finger at it, say "we told you so" and provide the receipts of repeated warnings that the thing has a tendency of doing the things.
You're right, they should be responsible. The problem is proving it.
"I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.
And when sub-agents or third-party tools are involved, liability gets even murkier. Who's accountable when the action executed three hops away from the human?
The article argues for receipts that make "I didn't authorize that" a verifiable claim
A few edge cases where it doesn't work don't mean it doesn't work in the majority of cases, or that we shouldn't try to fix those edge cases.
This isn't a legal argument and these conversations are so tiring because everyone here is insistent upon drawing legal conclusions from these nonsense conversations.
We're talking about different things. To take responsibility is volunteering to accept accountability without a fight.
In practice, almost everyone is held potentially or actually accountable for things they never had a choice in. Some are never held accountable for things they freely choose, because they have some way to dodge accountability.
The CEOs who don't accept accountability were lying when they said they were responsible.
That's when companies were accountable for their results and needed to push the accountability to a person to deter bad results. You couldn't let a computer make a decision because the computer can't be deterred by accountability.
Now companies are all about doing bad all the time, they know they're doing it, and need to avoid any individual being accountable for it. Computers are the perfect tool to make decisions without obvious accountability.
That's an orthodoxy. It holds for now (in theory and most of the time), but it's just an opinion, like a lot of other things.
Who is accountable when we have a recession or when people can't afford whatever we strongly believe should be affordable? The system, the government, the market, late stage capitalism or whatever. Not a person that actually goes to jail.
If the value proposition becomes attractive, we can choose to believe that the human is not in fact accountable here, but the electric shaitan is. We just didn't pray good enough, but did our best really. What else can we expect?
> "I asked it to summarize reports, it decided to email the competitor on its own" is hard to refute with current architectures.
If one decided to paint a school's interior with toxic paint, it's not "the paint poisoned them on its own", it's "someone chose to use a paint that can poison people".
Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.
>Somebody was responsible for choosing to use a tool that has this class of risks and explicitly did not follow known and established protocol for securing against such risk. Consequences are that person's to bear - otherwise the concept of responsibility loses all value.
What if I hire you (instead of LLM) to summarize the reports and you decide to email the competitors? What if we work in the industry where you have to be sworn in with an oath to protect secrecy? What if I did (or didn't) check with the police about your previous deeds, but it's first time you emailed competitors? What if you are a schizo that heard God's voice that told you to do so and it's the first episode you ever had?
The difference is LLMs are known to regularly and commonly hallucinate as their main (and only) way of internal functioning. Human intelligence, empirically, is more than just a stochastic probability engine, therefore has different standards applied to it than whatever machine intelligence currently exists.
> otherwise the concept of responsibility loses all value.
Frankly, I think that might be exactly where we end up going. Finding a responsible person to punish is just a tool we use to achieve good outcomes, and if scare tactics are no longer applicable to the way we work, it might be time to discard it.
A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone with it and there's no other option but to hallucinate along.
It's scary that a nuclear exit starts looking like an enticing option when confronted with that.
I saw some people saying the internet, particularly brainrot social media, has made everyone mentally twelve years old. It feels like it could be true.
Twelve-year-olds aren't capable of dealing with responsibility or consequence.
>A brave new world that is post-truth, post-meaning, post-responsibility, and post-consequences. One where the AI's hallucinations eventually drag everyone with it and there's no other option but to hallucinate along.
That value proposition depends entirely on whether there is also an upside to all of that. Do you actually need truth, meaning, responsibility and consequences while you are tripping on acid? Do you even need to be alive and have a physical organic body for that? What if Ikari Gendo was actually right and everyone else are assholes who don't let him be with his wife.
Ultimately the goal is to have a system that prevents mistakes as much as possible and adapts and self-corrects when they do happen. Even with science we acknowledge that mistakes happen and people draw incorrect conclusions, but the goal is to make that a temporary state that is fixed as more information comes in.
I'm not claiming to have all the answers about how to achieve that, but I am fairly certain punishment is not a necessary part of it.
Found it. It was this line [0] specifically: "rm -rf /usr /lib/nvidia-current/xorg/xorg" instead of "rm -rf /usr/lib/nvidia-current/xorg/xorg", which will delete all of /usr and then fail to delete a non-existent directory at /lib/nvidia-current/xorg/xorg
"Our tooling was defective" is not, in general, a defence against liability. Part of a companys obligations is to ensure all its processes stay within lawful lanes.
"Three months later [...] But the prompt history? Deleted. The original instruction? The analyst’s word against the logs."
One, the analyst's word does not override the logs; that's the point of logs. Two, it's fairly clear the author of the fine article has never worked close to finance. A three month retention period for AI queries by an analyst is not an option.
SEC Rule 17a-4 & FINRA Rule 4511 have entered the chat.
Agree ... retention is mandatory. The article argues you should retain authorization artifacts, not just event logs. Logs show what happened. Warrants show who signed off on what
In my workplace, I have just a laptop. It's not ideal, but it's fine. I have a large external monitor, an external keyboard, and a crazy large hard drive, so it's OK.