
The author wanted people to be able to just "ssh mygame", no? In that sense, ssh was a design requirement.

I didn't think of it as such a throwback to the '80s. Could be, yes. But then he can't control the ssh option, and with 2000 users maybe 10 would set it. I don't think so.

I think the parent is talking about the people who non-stop post to LinkedIn that "SWE as a profession is dead". I fully agree with you that it has massively lowered the cost to create, but I'd argue that the people saying SWE is dead wouldn't be able to get past the complexity barrier that most of us are accustomed to handling. I think the real winners will be the ones with domain expertise who didn't previously have the capacity to code (just like OP and you).


Correct. I think "real" software requires real development and architecture.

And to be honest, even the tiny apps I'm building I wouldn't have been able to do without some background in how frontend / backend should work, what a relational database is, etc. (I was an unskilled technical PM in the dotcom boom in the 2000s, so I at least know my way around a database a little. I know what these parts of tech CAN do, but I didn't have the skills to make them do it myself.)


Yes, you're not who the GP was talking about ;-)


A few questions:

* How do you manage the key for encrypting IDs? Injected into the app environment via an envvar? Just embedded in the source code? I ask this because I'm curious how much "care" I should be putting into managing the secret material if I were to adopt this scheme.

* Is the ID encrypted using an AEAD scheme (e.g. AES-GCM)? Or does plain AES suffice? I assume that the size of IDs would never exceed the block size of AES, but again, I'm not a cryptographer, so I'm not sure whether that's safe.


> How do you manage the key for encrypting IDs?

The same way we manage all other secrets in the application. (Summarized below)

> Is the ID encrypted using an AEAD scheme (e.g. AES-GCM)? Or does plain AES suffice? I assume that the size of IDs would never exceed the block size of AES, but again, I'm not a cryptographer, so I'm not sure whether that's safe.

I don't have the source handy at the moment. It's one of the easier-to-use symmetric algorithms available in .NET. We aren't talking military-grade security here. In general: a 32-bit int encrypts to 64 bits, so we pad it with a few Unicode characters to 64 bits, which then encrypts to 128 bits.
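
Not the poster's actual .NET code, but as a rough sketch of what the AEAD variant of this can look like (Python here; encrypt_id/decrypt_id are hypothetical names, using the `cryptography` package):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Hypothetical sketch, not the .NET implementation described above.
    key = AESGCM.generate_key(bit_length=128)   # in practice, loaded from your secret store

    def encrypt_id(record_id: int) -> bytes:
        nonce = os.urandom(12)                  # must be unique per encryption
        ct = AESGCM(key).encrypt(nonce, record_id.to_bytes(4, "big"), None)
        return nonce + ct                       # base64/hex-encode this for use in URLs

    def decrypt_id(token: bytes) -> int:
        nonce, ct = token[:12], token[12:]
        return int.from_bytes(AESGCM(key).decrypt(nonce, ct, None), "big")

With GCM the ciphertext carries a 16-byte authentication tag, so a tampered or forged ID fails to decrypt instead of silently mapping to some other record.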

---

As far as managing secrets in the application: we have a homegrown configuration file generator that's adapted to our needs. It generates both the configuration files and strongly-typed classes to read them. All configuration values are loaded at startup, so we don't have to worry about runtime errors from missing configuration values.

Secrets (connection strings, encryption keys, etc.) are encrypted in the configuration file as base64 strings. The certificate used to read/write secrets is stored in Azure Key Vault.

The startup logic in all applications is something like this (a rough sketch follows the list):

1: Determine the environment (production, qa, dev)

2: Get the appropriate certificate

3: Read the configuration files, including decrypting secrets (such as the primary key encryption keys) from the configuration files

4: Populate the strongly-typed objects that hold the configuration values

5: These objects are dependency-injected to runtime objects
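
To make that concrete, here is a purely hypothetical sketch in Python (the real thing is the homegrown .NET generator described above, and every name below is made up):

    import json, os
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AppConfig:                              # the "strongly-typed" config object
        db_connection_string: str
        id_encryption_key: bytes

    def fetch_certificate(env: str):              # placeholder: e.g. pull the cert from a key vault
        raise NotImplementedError

    def decrypt_secret(value_b64: str, cert) -> bytes:   # placeholder: decrypt with that cert
        raise NotImplementedError

    def load_config() -> AppConfig:
        env = os.environ.get("APP_ENV", "dev")           # 1: determine the environment
        cert = fetch_certificate(env)                    # 2: get the appropriate certificate
        with open(f"config.{env}.json") as f:            # 3: read config, decrypting secrets
            raw = json.load(f)
        return AppConfig(                                # 4: populate the typed object
            db_connection_string=decrypt_secret(raw["db"], cert).decode(),
            id_encryption_key=decrypt_secret(raw["id_key"], cert),
        )

    # 5: the resulting AppConfig is then dependency-injected into runtime objects.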


But it's much easier to say "orthogonal" than "linearly independent", no? As you mentioned, I think the word "orthogonal" has already lost its meaning of "dot product equals zero", and bears the meaning of "linearly independent" (i.e. dim(N) > 1) in casual speech.
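
For a concrete example of where the two notions come apart: in R^2,

    (1, 0) · (1, 1) = 1 ≠ 0,   yet {(1, 0), (1, 1)} is linearly independent.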


Guilty until proven innocent.


Is there a word for a feeling that there's gotta be a German word for this niche feeling?



You mean like when the Wortzusammensetzungsverdacht (roughly, "compound-word suspicion") just hits you? (yeah, I just made that up, that's the beauty)


I would. Heck, I bet half of HN would be interested in what kind of insanity lies under those behemoths.


I work in music streaming; it's mostly just a lot of really banal business rules that become an entangled web of convoluted if statements, where deciding whether to show a single button might mean hitting 5 different microservices and checking 10 different booleans.
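
As a purely hypothetical illustration (all flags and service names made up, not any real service's code), the kind of check being described looks something like:

    # Hypothetical: showing one button depends on flags gathered from several services.
    def should_show_download_button(flags: dict) -> bool:
        return (
            flags["is_premium"]                  # subscription service
            and not flags["region_blocked"]      # licensing service
            and flags["track_downloadable"]      # catalog service
            and not flags["payment_pending"]     # billing service
            and flags["offline_mode_enabled"]    # feature-flag service
        )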


Every time I encounter these kinds of policies, I can't help but wonder how they would be enforced: the people who are considerate enough to abide by them are the ones who would have "cared" about code quality and the like anyway, so the policy is a moot point for them. OTOH, the people who recklessly spam "contributions" generated by LLMs would, by their very nature, in all likelihood not respect these policies. For me it's like telling bullies not to bully.

By the way, I'm in no way against these kinds of policies: I've seen what happened to curl, and I think it's fully within their rights to outright ban any usage of LLMs. I'm just concerned about the enforceability of these policies.


I think it's a discouragement more than an enforcement --- a "we will know if you submit AI-generated code, so don't bother trying." Maybe those who do know how to use LLMs really well can submit code that they fully understand and can explain the reasoning behind, in which case the point is moot.


> I can't help but wonder how these policies would be enforced

One of the parties that decided on Gentoo's policy effectively said the same thing. If I get what you're really asking... the reality is, there's no way for them to know if an LLM tool was used internally; it's an honor system. But enforcement is really just banning the contributor if they become a problem. They've banned or otherwise restricted other contributors for being disruptive or spamming low-quality contributions in the past.

It's worded the way it is because most of the parties understand this isn't going away and might get revisited eventually. At least one of them hardline opposes LLM contributions in any form and probably won't change their mind.


I see. So if I'm understanding correctly, then this policy serves as a kind of "legal ground" from which the maintainers can take action against perpetrators, right?

To add a bit more context, when I was writing the original comment, I was mainly thinking of first-time contributors that don't have any track records, and how the policy would work against them.


> I see. So if I'm understanding correctly, then this policy serves as a kind of "legal ground" from which the maintainers can take action against perpetrators, right?

They aren't government and it's not that bureaucratic. As with any group, if you break the guidelines/rules they just won't want to work with you.

> To add a bit more context, when I was writing the original comment, I was mainly thinking of first-time contributors that don't have any track records, and how the policy would work against them.

No matter what, somebody has to review the contribution. First-time contributors get feedback; most of them correct their mistakes and some go on to become regular contributors (like me). Others never respond, and still others make the same mistakes over and over again.

On the topic, Gentoo has projects like GURU where users can contribute new packages that maybe aren't ready for the main tree or that a full developer wouldn't be interested in; it's a good place to learn if you're interested in working towards becoming a developer: https://wiki.gentoo.org/wiki/Project:GURU


If nothing else, it gives maintainers a sign to point to when closing PRs with prejudice, and that's not nothing. Bad faith contributors will still likely complain when their PRs are closed, and having an obviously applicable policy to cite makes it harder for them to keep complaining without getting banned outright.


You just stop accepting contributions from them?

There is nothing inherently different about these policies that makes them more or less difficult to enforce than other kinds of policies.


You enforce them by pointing out the policy and closing the issue/patch request whenever you're concerned about the quality of the submission.

If it turns out to be incorrectly called out, well that sucks, but I submit that patches have been refused before LLMs came to be.


Sometimes a PR contains objective evidence, such as LLM responses left in comments, or even something like " Generated with [Claude Code](https://claude.ai/code)" in the commit message (notable example: https://github.com/OpenCut-app/OpenCut/pull/479/commits).


You cannot prevent cheating with other policies like the Developer Certificate of Origin either. Yet no one brought up the potential cheating at the time these policies were discussed.

Several projects have rejected "AI" policies using your argument even though those projects themselves have contributor agreements or similar.

This inconsistency makes it likely that the cheating argument, when only used for "AI" contributions, is a pretext and these projects are forced to use or promote "AI" for a number of reasons.


It's often quite easy to spot LLM-generated low-effort slop, and it's far easier to point to the established policy than to explain why the PR is complete garbage. On GitHub it's even easier to detect by inspecting the author's contribution history (and if it's private, that's an automatic red flag).

Of course, if someone has used an LLM during development as a helper tool and done the necessary work of properly reviewing and fixing the generated code, then it can be borderline impossible to detect, but such PRs are much less problematic.


If someone uses an LLM to help them write good code that is indistinguishable from human-written code, you are right, it's not enforceable. And that's what most people who use LLMs should be doing. Unfortunately, sometimes it is possible to tell the difference between human-written and LLM-generated code (slop). Policies like this just make it clear and easy to outright reject such submissions.


We do tell bullies not to bully, and then hopefully when they are caught, they are punished. It's not a perfect system, but it's better than just ignoring that bullying happens.


https://xkcd.com/810/

To me the point is that I want to see effort from a person asking me to review their PR. If it's obvious LLM-generated bullshit, I outright ignore it. If they put in the time and effort to mold the LLM output so that it's high quality and they actually understand what they're putting in the PR (meaning they probably replace 99% of the output), then good, that's the point.


What happened to curl? The comment is referring to how the curl project is being overwhelmed by low-quality bug/vulnerability reports generated (or partially generated) by AI (“AI slop”), so much so that curl maintainers are now banning reporters who submit such reports and demanding disclosure, because these sloppy reports cost a lot of time and drain the team.

[generated by ChatGPT] Source: https://news.ycombinator.com/item?id=45217858


Isn't that what APIs are for?


It can be. But I would like the option to toggle to 'unfiltered unmemory'd unsafety'd LLM' to just get the straight answer.


It's going to be interesting if ChatGPT actually hooks up with SSPs and dumps a whole "user preference" embedding vector to the ad networks.

