It's only a problem if I have my browser set to use dark theme, or to system theme while my system theme is dark. If I switch it to light theme, everything looks good. So most likely he's using some kind of CSS framework that's automatically responding to the dark theme, but other styles that he's hand-coded are not compatible with it.
I checked in Firefox and Chrome (on Linux) and the code samples look OK to me. What browser/OS are you using? Maybe send me a screenshot at janko dot itm at gmail.
Regardless of whether you agree with the US Constitution's perspective on self-evident rights, your point here does not negate what they said; it simply indicates that the Russian government is not constrained in the same way the US government is.
Yes, however it still means that the browser is phoning home to somewhere. To be able to make use of that API key, it has to send some data out. Is that data routed over Tor? Does it even matter, given that an API key can be used to deanonymize you?
My understanding (and this may have changed) is that you have to initiate the AI features each time (e.g. clicking on "Summarize This").
But yes, your point is valid: for Tor, if you enter an API key, you could be identified. Still, does the Tor Browser prevent you from installing add-ons, which are no more secure than these AI features? It didn't years ago - not sure if that's changed.
I made the decision a few months back to go all in on self-hosting and my own infrastructure. At least once a week I run into something that makes me realize I made the right decision. It's that time of the week again.
It's not about pre-war; it's about pre-Trinity nuclear tests, which means steel uncontaminated by atmospheric radioactive isotopes.
The tests happened at the end of WWII, but that is not the point.
> Low-background steel, also known as pre-war steel and pre-atomic steel, is any steel produced prior to the detonation of the first nuclear bombs in the 1940s and 1950s.
I agree. I use LLMs heavily for grunt-work development tasks (porting shell scripts to Ansible is an example of something I just applied them to). For these purposes, they work well. LLMs excel in situations where you need repetitive, simple adjustments on a large scale, e.g.: swap every Postgres insert query with the corresponding MySQL insert query.
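To make that concrete, here's a minimal sketch (Python; the regex and the upsert pattern are my own illustration, not from any real migration, and real queries have many more edge cases) of the kind of mechanical, pattern-shaped rewrite I mean:

    import re

    # One common dialect difference: Postgres upserts say
    #   INSERT ... ON CONFLICT (col) DO UPDATE SET ...
    # while MySQL says
    #   INSERT ... ON DUPLICATE KEY UPDATE ...
    PG_UPSERT = re.compile(r"ON CONFLICT \([^)]*\) DO UPDATE SET", re.IGNORECASE)

    def pg_insert_to_mysql(query: str) -> str:
        # Sketch only: rewrites the upsert clause and ignores RETURNING,
        # quoting styles, and everything else a real migration hits.
        return PG_UPSERT.sub("ON DUPLICATE KEY UPDATE", query)

    print(pg_insert_to_mysql(
        "INSERT INTO users (id, name) VALUES (1, 'a') "
        "ON CONFLICT (id) DO UPDATE SET name = 'a'"
    ))

The point isn't that a regex covers it - it's that the work is repetitive and pattern-shaped, which is exactly the territory where an LLM saves time.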
A lot of the "LLMs are worthless" talk I see tends to follow this pattern:
1. Someone gets an idea, like feeding papers into an LLM, and asks it to do something beyond its scope and proper use-case.
2. The LLM, predictably, fails.
3. They declare not that they misused the tool, but that the tool itself is fundamentally broken.
In my mind it's no different from the steamroller being invented, and people remarking on how well it flattens asphalt. Then a vocal group tries to use this flattening device to iron clothing in bulk, and declares steamrollers useless when they fail at the task.
>swap every Postgres insert query with the corresponding MySQL insert query.
If the data and relationships in those insert queries matter, at some unknown future date you may find yourself cursing your choice to use an LLM for this task. On the other hand, you might never find out, and just experience a faint sense of unease as to why your customers have quietly dropped your product.
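If you do hand that task to an LLM, at least run a cheap sanity check afterwards. A minimal sketch, assuming Python with psycopg2 and mysql-connector-python, and hypothetical table names and connection details:

    import psycopg2
    import mysql.connector

    TABLES = ["users", "orders", "invoices"]  # hypothetical schema

    pg = psycopg2.connect(dbname="app", user="app")
    my = mysql.connector.connect(database="app", user="app")

    for table in TABLES:
        # Count rows on the Postgres side...
        with pg.cursor() as cur:
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            pg_count = cur.fetchone()[0]
        # ...and on the MySQL side.
        mcur = my.cursor()
        mcur.execute(f"SELECT COUNT(*) FROM {table}")
        my_count = mcur.fetchone()[0]
        mcur.close()
        # Row counts won't catch subtle corruption, but they do catch
        # the "quietly dropped half the inserts" failure mode.
        assert pg_count == my_count, f"{table}: {pg_count} vs {my_count}"

Row counts are the bare minimum; checksums over key columns would be better.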
I’ve already seen people completely mess things up. It’s hilarious. Someone who thinks they’re in “founder mode” and a “software engineer” because ChatGPT or their Cursor vomited out 800 lines of Python code.
The vileness of hoping people suffer aside, anyone who doesn’t have adequate testing in place is going to fail regardless of whether bad code is written by LLMs or Real True Super Developers.
What vileness? These are people who are gleefully sidestepping things they don't understand and putting tech debt onto others.
I'd say that up until maybe 5-10 years ago, there was an attitude of learning something to gain mastery of it.
Today, it seems like people want to skip levels, which eventually leads to catastrophic failure. Might as well accelerate it so we can all collectively snap out of it.
The mentality you're replying to confuses me. Yes, people can mess things up pretty badly with AI. But I genuinely don't understand the assumption that anyone using AI is also not doing basic testing or code review.
Right, which is why you go back and validate the code. I'm not sure where the automatic assumption comes from that putting AI in a workflow means you blindly accept the outputs. You run the tool, you validate the output, and you correct the output. This has been the process with every new engineering tool. I'm not sure why people assume, first, that AI is different, and second, that everyone who uses it operates like the lowest-common-denominator AI slop-shop.
In this analogy, are all the steamroller manufacturers loudly proclaiming how well it 10x's the process of bulk ironing clothes?
And is a credulous executive class buying en masse into that steamroller industry marketing, and into the demos of a cadre of influencer vibe-ironers who’ve never had to think about the longer-term impacts of steamrolling clothes?
Thank you for mentioning that! What a great example of something an LLM can do pretty well that would otherwise take a lot of time looking up Ansible docs to figure out the best way to do things. I'm guessing the outputs aren't as good as what someone really familiar with Ansible could do, but it's a great place to start! It's such a good idea that it seems obvious in hindsight now :-)
Exactly, yeah. And once you look over the Ansible, it's a good place to start and expand from. I'll often have it emit Helm charts for me as templates; then, after the tedious setup of the Helm chart is done, the rest is me manually doing the complex parts and customizing in depth.
Plus, it's a generic question; "give me a Helm chart for Velero that does x, y, and z" is about as proprietary as me doing a Google search for the same. You're not handing proprietary source code to OpenAI/wherever, so that's one fewer thing to worry about.
Yeah, I tend to agree. The main reason I use AI for this sort of stuff is that it also gives me something complete that I can then ask questions about and refine myself, rather than the fragmented documentation style of "this specific line does this" that never puts anything in the context of a completed sample.
I'm not sure if it's a facet of my ADHD, or mild dyslexia, but I find reading documentation very hard. It's actually a wonder I've managed to learn as much as I have, given how hard it is for me to parse large amounts of text on a screen.
Having the ability to interact with a conversational style of documentation, then bullshit-check it against the docs afterwards, is a game changer for me.
That's another thing! People are all "just read the documentation", but the documentation goes on and on about irrelevant details. How do people not see the difference between "do x with library" -> "code that does x", and having to read a bunch of documentation to write a snippet of code that does the same x?
I'm not sure I follow what you mean, but in general, yes. I do find "just read the docs" to be a way to excuse not helping team members. Often the docs are not great, and tribal knowledge is needed. If you're either working on your own with no access to that, or limited by a team member's willingness to share, then AI is an OK alternative, within limits.
Then there's also the issue that examples in documentation are often very contrived, and sometimes more confusing than helpful. So there's value in "work this up to do such-and-such an operation" sometimes. Then you can interrogate the functionality better.
https://news.ycombinator.com/item?id=46346796